Hyper-V: How Many Switches?
Posted on 2012-09-16
We are building a new 3-node Hyper-V 2008 R2 cluster. We have an existing cluster that has shown some odd behavior from time to time, and the problem has been quite elusive. After some research we came to a soft conclusion that it may have had something to do with our Live Migration VLAN residing on the same switch as our heartbeat and CSV VLANs. That cluster is now pretty much aged out, so we will be decommissioning it rather than troubleshooting it further to find the actual cause. I won't go into all the details of the issue we had with that cluster, but I wanted to provide that background as part of the reason for the question below.
Our New Cluster:
3 nodes, each with 8 NICs, running Windows Server 2008 R2 SP1. We will be using iSCSI SAN storage.
NICs will be assigned as follows (a rough sketch of how I plan to tag these as cluster networks follows the list):
1 NIC - VM production (our production VLANs)
1 NIC - VM production (our engineering VLANs)
2 NICs - iSCSI (MPIO)
1 NIC - Cluster Shared Volumes (CSV)
1 NIC - Heartbeat/cluster communications (HB)
1 NIC - Management (MGMT)
1 NIC - Live Migration (LM)
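For what it's worth, once the cluster is formed I plan to tag each of these networks along these lines with the 2008 R2 FailoverClusters PowerShell module. This is only a sketch, and the network names ("iSCSI-A", "HB", "CSV", etc.) are my own labels, not anything the cluster requires:

Import-Module FailoverClusters

# iSCSI NICs: keep cluster traffic off them entirely (Role 0 = no cluster use)
(Get-ClusterNetwork "iSCSI-A").Role = 0
(Get-ClusterNetwork "iSCSI-B").Role = 0

# HB, CSV and LM: cluster-only traffic, no client access (Role 1)
(Get-ClusterNetwork "HB").Role = 1
(Get-ClusterNetwork "CSV").Role = 1
(Get-ClusterNetwork "LM").Role = 1

# MGMT: cluster plus client/management access (Role 3)
(Get-ClusterNetwork "MGMT").Role = 3

# Review the result
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric -AutoSize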
The iSCSI NICs will connect to a dedicated private switch stack. This stack handles nothing but the iSCSI VLAN; there is no other traffic of any sort on it (Dell PowerConnect 62xx with true stacking modules).
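On those two iSCSI NICs my plan is just the standard Microsoft MPIO setup on each node, something like the following (a sketch only; the quoted string is the standard Microsoft iSCSI bus device identifier that mpclaim expects):

# Install the Multipath I/O feature on each node (2008 R2)
Import-Module ServerManager
Add-WindowsFeature Multipath-IO

# Claim iSCSI devices for MPIO; -r allows the reboot mpclaim requires
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"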
2 NICs will be used for two different virtual switches, which connect to VLAN trunk ports on our production fabric. These actually go back to our core switch, a dual-supervisor Cisco 6509.
Our remaining switches are all standalone (non-stacked) Dell PowerConnect 54xx units, each of which trunks its VLANs back to the Cisco 6509; these 54xx's don't have true stacking. It is these switches we will use for the remaining Hyper-V networks, as described below.
Our/my thinking is that we will put MGMT, HB, and CSV into the same physical switch, each on a different VLAN, and then put LM on its own dedicated switch. Of course, having MGMT, HB, and CSV all on the same switch is a little scary to me.
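One thing that makes me a little less nervous about that is that the cluster sends CSV traffic over the lowest-metric cluster-only network, so I can at least pin CSV to its own VLAN/NIC by hand. Again just a sketch with my own network names:

Import-Module FailoverClusters

# CSV follows the lowest-metric cluster-only network, so give CSV the
# lowest metric and keep HB and LM above it
(Get-ClusterNetwork "CSV").Metric = 900
(Get-ClusterNetwork "HB").Metric = 1000
(Get-ClusterNetwork "LM").Metric = 1100
Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric -AutoSize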
We do have an abundance of switches that all feed into our core switch, so we can place each node's networks on a different switch, even in different cabinets if we wanted, so that no single switch becomes a point of failure. It seems like overkill, but I am paranoid about single points of failure.
I'm looking for any input on this, as I'm the sole engineer working on this project. All of the references I'm finding through my research talk about separating traffic via VLANs, but few if any discuss the physical switch arrangement/requirements, other than requiring 1 Gbit links, of course.
Opinions and recommendations greatly appreciated.