Networking for the Hyper-V R2 role and SCVMM with CSV


We have 4 new Dell PowerEdge R710 servers, each with 2 x 6-core CPUs, 48 GB RAM, and 10 x 1 Gb network connections.

We're planning to add the Hyper-V R2 role in Windows Server 2008 R2 Enterprise.

I've read books and articles, but I'm still a little confused about the networking.

To my knowledge, I have to create a network for the console, another network for iSCSI, assign several VM NICs for the VMs, and also set up a dedicated subnet for the failover cluster.

But I've read that I should create another dedicated subnet for the CSV. Is that right?

If someone could enlighten me, I would be glad. In the lab environment I can see the networks in the failover cluster, but I don't know when the CSV subnet is used; it looks like only the cluster subnet is used when Live Migration moves a VM from one host to another.

I would like an example of the network configuration that is easy to understand.


That was a typo on my part.
3 - No, you don't need to install the iSCSI initiator on the SCVMM server.

VDS is needed on both the host and the VMM server; if your VMM server runs 2008/2008 R2, then you don't have to do anything.

I think I confused you by introducing the other post; that was not the original intent. I need you to understand the pros and cons of a single VM/LUN as compared to multiple VMs/LUN.
Single VM/LUN:
1) Not too practical: the number of LUNs increases with the number of VMs. You soon run out of drive letters and would have to opt for mount points to go over roughly 25 VMs per cluster.
2) With an increasing number of VMs, SAN management becomes difficult.
3) The perceivable gain of one VM/LUN is the ability to do SAN migration using SCVMM.
Here I have to point out that SAN migration, used to move a VM from one host to another, is the fastest way of migrating, particularly because nothing is copied: the LUN is unmasked from the first node and masked to the other node. Therefore the UNIT OF SAN MIGRATION IS A LUN, and if you migrate a VM using SAN migration it makes sense to have one VM per LUN. One beauty of SAN migration is that it is not bound to clusters. If properly configured, it can be used to migrate a LUN between two hosts, from a host to the library and vice versa, and even between clusters.
4) Provides better performance than the multiple VMs/LUN configuration.
5) Gives you migration with a little downtime during the unmasking/masking of the LUN (so strictly quick/SAN migration rather than zero-downtime live migration).
6) Not a best practice as far as storage is concerned: you have to create several smaller LUNs, which has a cost both in storage usage and in management. You might end up with more problems than benefits from SAN migrations.

Multiple VMs/LUN, in other words CSVs:
1) A much more practical approach: mount points are used from the start and are managed by cluster management itself.
2) Less hassle in managing the SAN this way.
3) Supports live migration and fault tolerance for the CSV network, among several other things.
4) A little performance degradation compared to the single VM/LUN scenario. This may or may not be perceptible, depending on your case (the VM workload, the number of VMs per CSV, and the hardware/software RAID used for the SAN disks).
5) With CSV, live migration has no downtime involved.
6) Both storage management and VM management are easier with this approach.

Conclusion: If you opt for the single VM/LUN approach, be aware of the issues you might face; you might end up with more problems than benefits from SAN migrations. CSVs are much more practical for most scenarios. My recommendation is to use CSVs wherever possible. If there are VM workloads that justify having a separate LUN to themselves (be it for better performance, SAN migration, etc.), then only those workloads should be on separate LUNs.
See what suits you; don't immediately opt for one approach or the other. Weigh your requirements and then decide which VMs are better candidates for CSVs, how to distribute those VMs among different CSVs, and which ones are best suited to separate LUNs. (You can opt for pass-through disks as well if performance is a key factor.)

So, coming back to your original question:
a) The explanation above should answer your question.
b) I think you misread the book; it makes reference to Storage Manager for SANs in the context of Windows Server 2003 R2 (not 2008 R2) for VDS installation. I am not sure whether Storage Manager for SANs is available on 2008 R2 (I will check in the morning). Even if it is available in some form or other, will it make any difference whether you manage LUNs using MDSM or Storage Manager for SANs? If not, then I think it's a matter of personal preference. Use the one you are comfortable with.

Hope that helps.
Syed Mutahir Ali (Technology Consultant) commented:

The link above covers a similar scenario and topology; have a read through it, and I hope it helps.
These should help too. I know you have already gone through a lot of articles and docs, but trust me, these will definitely help you with CSV and live migration, not just from a design perspective but also in configuring them.


With regard to an example: the post above by mutahir and the following one are good.


quadrumane (Author) commented:
1 - Here is the network (see picture). According to best practice, this is what I had to do. But now it looks like the CLUSTER, CSV, and MIGRATION networks are doing the same job. Don't you think CLUSTER and MIGRATION should be enough?

I read TechNet, which is good, but the book (Hyper-V R2) is easier to understand. On TechNet there are too many "click on this to see how" steps, so it's rather difficult to follow. It's good, though; the book is just easier to follow.

2 - The posts you've both provided are very good. But I've been told that NIC teaming in Hyper-V was not always reliable. Besides, the heartbeat is critical: in the event that a switch becomes unavailable while the server is still running, all VMs will start to migrate. Yet no one seems to suggest 2 x NICs in a team for the heartbeat.

3 - I still don't know whether the iSCSI adapters, MPIO drivers, iSCSI initiator, targets, and so on must be configured on the SCVMM server. I don't think so, because SCVMM just sends commands to the Hyper-V servers, which are the ones connected to the iSCSI network.
1 - Well, it depends how you configure them. You can live with just CLUSTER and MIGRATION and bind your CSV traffic to CLUSTER, but if you have enough NICs, why not use one for CSV? Depending on your scenario, your CSV traffic might be minimal or huge. I think it's better to dedicate a NIC to CSV by binding it properly and to use the CLUSTER/heartbeat network as failover, since the CSV network is fault tolerant.

2 - If the heartbeat/CLUSTER network somehow fails, it is not necessarily the case that VMs fail over to the other node(s). However you have set up the cluster (node majority, node and disk majority, or node and file share majority), the majority has to be reached to keep the cluster running. If the node hosting some VMs fails altogether, either losing all communication (heartbeat, CSV, access to the disk in the case of disk majority, or access to the share in the case of file share majority) or losing power, etc., then the VMs will restart on the other node. Teaming on iSCSI is not recommended and not supported (and I am unsure how the cluster would behave if the heartbeat were teamed).

3 - You only need to have the iSCSI initiator on the SCVMM server, and even that is required only if you are using SAN migrations.

Let me know if you need further clarification.
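To make this concrete, here is one illustrative way the 10 x 1 Gb NICs on each R710 could be split; the subnets and the exact split are examples, not a prescription:

```
NIC 1      Management / parent partition          e.g. 192.168.1.x
NIC 2-3    iSCSI, two separate paths with MPIO,   e.g. 192.168.20.x / 192.168.21.x
           no teaming
NIC 4      Cluster / heartbeat                    e.g. 10.0.1.x
NIC 5      CSV (preferred CSV network, with the   e.g. 10.0.2.x
           heartbeat network as failover)
NIC 6      Live Migration                         e.g. 10.0.3.x
NIC 7-10   VM traffic (bound to Hyper-V virtual
           switches, no host IP address)
```

The iSCSI, cluster, CSV, and live migration subnets are typically private, non-routed networks; only the management network needs to be reachable from your administration stations and the SCVMM server.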

quadrumane (Author) commented:
1 - I don't see any option to bind traffic to CSV; the only option available is "CLUSTER USE". Maybe the cluster is aware of CSV, but in the cluster networking settings the only option is "CLUSTER USE", so to me CLUSTER, MIGRATION, and CSV are bound to the same thing.

I don't understand how the CSV NIC is fault tolerant.

2 - To my understanding, according to what you said, when "CLUSTER USE" is enabled on MANAGEMENT and also on another NIC such as CLUSTER, then in the event that one NIC goes down, the other one will still be up, so Hyper-V will not migrate the VMs. Is that right?

3 - So the iSCSI initiator must be installed on the SCVMM server? If so, it must be configured as it is on the Hyper-V servers, so that it sees all the SAN targets.

thanks again

1 - The CSV binding options are not available through Cluster Manager; you use PowerShell for that purpose. By default, CSV is bound to the NIC with the lowest metric value. If that NIC fails, the NIC with the next-lowest metric starts carrying the CSV traffic. Therefore, to configure the binding of the CSV, you configure the metric values of your cluster networks through PowerShell. Use the following resources to configure your CSV network. (16m40s mark) (Under section: Managing the network used for Cluster Shared Volumes)
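As a minimal sketch of the PowerShell mentioned above, run on a Windows Server 2008 R2 cluster node; "CSV Network" and "Cluster Network" are placeholder names, so substitute your own cluster network names and metric values:

```powershell
# Load the failover clustering cmdlets (Windows Server 2008 R2)
Import-Module FailoverClusters

# Inspect the current metrics: CSV traffic prefers the network
# with the LOWEST metric value
Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric, Role

# Give the dedicated CSV network the lowest metric so CSV I/O
# prefers it; assigning Metric by hand turns AutoMetric off
(Get-ClusterNetwork "CSV Network").Metric = 900
(Get-ClusterNetwork "Cluster Network").Metric = 1000
```

If the CSV network then fails, the network with the next-lowest metric (here, the cluster/heartbeat network) takes over the CSV traffic, which is the fault tolerance described above.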

Similarly, the NIC or NICs to be used for live migration can be assigned (under section: Configure cluster networks for live migration).

2 - Yes, when "Allow cluster network communication on this network" is selected for any NIC, that NIC can be used for all cluster network communication, including heartbeat and CSV.

3- No, you need to install iSCSI initiator on SCVMM,
"Each host should also have the latest Microsoft iSCSI Initiator installed in it. The VMM server needs to have Microsoft VDS hardware provider installed."
Only the hosts require the iSCSI initiator to be installed. The requirements mentioned in the other thread are for SAN migrations using SCVMM; if you are not going to use SAN migration, you can skip that thread.
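For the hosts themselves, the built-in initiator can be wired to the MD3000i either through the iSCSI Initiator control panel or from the command line with iscsicli; in this sketch the portal address and target IQN are placeholders for your own values:

```powershell
# Run on each Hyper-V host (not on the SCVMM server).
# Register one of the array's iSCSI portals (placeholder address):
iscsicli QAddTargetPortal 192.168.20.10

# List the target IQNs the MD3000i exposes to this host:
iscsicli ListTargets

# Log in to a target (placeholder IQN); repeat per portal/path
# and let MPIO combine the redundant paths:
iscsicli QLoginTarget iqn.1984-05.com.dell:powervault.example
```

Remember that the redundant iSCSI paths should go through MPIO rather than NIC teaming, as noted earlier in the thread.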
This can also be useful to you.
quadrumane (Author) commented:
1 - I understand

2 - I understand

3 - You said "no, you need to install iSCSI initiator on SCVMM". Did you mean "no, you don't have to"? You're talking about the VDS hardware provider, which is included with the Dell MD3000i. But in the book "Mastering VMM 2008 R2" I can read: "If the host being used is Windows 2008, then VDS 2.1 is available in-box, so there is no need to install anything." But Storage Manager for SANs is not enabled by default on Windows 2008 R2. Dell MDSM has been installed, and that is how the host group and host access have been configured. All LUNs have been created from MDSM too.

That apart, as far as I understand, I don't have to install VDS because it's already installed somehow, but it has to be present on a host, not on the VMM server.

Which leads me to the most confusing part of the job: creating one LUN for each VM. I'm getting a little confused because the LUNs have been created from MDSM, not from Windows Storage Manager for SANs. Two things here:

a) From MDSM (the SAN manager provided by Dell), only 2 LUNs have been configured. So I guess I'll have to destroy those LUNs to create as many LUNs as there are VMs needed?

b) I'm no longer sure whether I have to use MDSM or Microsoft's Storage Manager for SANs to create the LUNs.

The good news: I think the answers to this will help me finally understand :-)

thanks again

quadrumane (Author) commented:
Shahid, you should really write a book. This is by far the best explanation I've ever seen on any topic. Of course, since speed is not that important (provided it's not too slow), having live migration without downtime matters more.

Thank you very much!

quadrumane (Author) commented:
Best solution ever !