Move HyperV cluster heartbeat from one switch to another

We have a three-node Hyper-V cluster.
The networking on each host is set up as follows:
   1.      2 x 2-port 10 Gb NICs
   2.      1 x quad-port 1 Gb NIC
   3.      4 x onboard 1 Gb NICs
Each host is set up as follows (settings taken from the Networks settings in the Failover Cluster Manager & the individual machine settings):
   A.      All 10 Gb NICs (#1 above) are used for communication to an EqualLogic SAN for cluster shared volumes and iSCSI traffic through a dedicated SAN switch (cluster use is disabled)
   B.      The quad-port NIC (#2 above) does not show up in Failover Cluster Manager and is used for general VM traffic (all four ports in a team to the 48-port main server switch)
   C.      The 4 onboard NICs (#3 above) are also wired into the 48-port main server switch and set to:
       1.      “Host management” (cluster use allowed, hosts/clients are allowed to connect). IP range is subnet
       2.      “Live Migration” (cluster use allowed, hosts/clients are not allowed to connect). IP range is subnet
       3.      “Heartbeat” (cluster use allowed, hosts/clients are not allowed to connect). IP range is subnet
       4.      “Backup Heartbeat and Live Migration” (cluster use allowed, hosts/clients are not allowed to connect). IP range is subnet
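
One detail worth double-checking before any recabling: Failover Clustering identifies each cluster network by its subnet, so the four networks above must sit on distinct, non-overlapping IP ranges. Since the actual ranges are not shown above, here is a minimal sketch with hypothetical subnet values using Python's standard `ipaddress` module:

```python
import ipaddress

# Hypothetical subnets standing in for the redacted ranges above (C1-C4).
networks = {
    "Host management":                     ipaddress.ip_network("192.168.10.0/24"),
    "Live Migration":                      ipaddress.ip_network("192.168.20.0/24"),
    "Heartbeat":                           ipaddress.ip_network("192.168.30.0/24"),
    "Backup Heartbeat and Live Migration": ipaddress.ip_network("192.168.40.0/24"),
}

# The cluster treats each subnet as one cluster network, so no two
# of these ranges may overlap.
pairs = [(a, b) for a in networks for b in networks if a < b]
overlaps = [(a, b) for a, b in pairs if networks[a].overlaps(networks[b])]
print("overlapping subnets:", overlaps)  # an empty list means the layout is valid
```

If two of the ranges did overlap, the cluster would collapse them into a single cluster network, which defeats the purpose of separating heartbeat from Live Migration traffic.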

Note that the VM traffic and cluster traffic are on the same physical switch (B & C above)

We now want to move the cluster internal traffic (C2, C3 & C4 above) to a new switch so there is no contention with the general VM traffic, and to save ports for additional hosts.  i.e. we want to take the nine cables (three per host × three hosts) and plug them into a new switch dedicated to cluster traffic.

My questions are:

1.      Is there anything that really tells Hyper-V that one network is for heartbeats and one is for live migration traffic?  Or are they just labels and Hyper-V figures it out based on whether the network allows cluster traffic or not?

2.      Do we need to schedule downtime to do this?  
Can we move one network at a time using the following process:
   I.      Take the first cluster network, change the setting so the network does not allow cluster traffic in the Failover Cluster Manager
   II.      Move the cables to the new switch
   III.      Re-enable cluster traffic
   IV.      Repeat for the other two networks

Do we need to stop and start cluster services to do this?  If so, we might as well schedule downtime as that will initiate a mass Live Migration, right?

3.      Is there another way to do this without downtime?

You have to configure the Live Migration and heartbeat/CSV networks explicitly.

For Live Migration use: set the preferred network order in Failover Cluster Manager.

For heartbeat/CSV use: the cluster prefers the enabled internal cluster network with the lowest metric.

I have tested changing networks/subnets in my cluster. The affected network goes offline for a while (2-3 minutes); refreshing the configuration seems to take care of the issue.

Your CSV/heartbeat network is fault-tolerant, and if you have multiple networks configured for Live Migration then it is fault-tolerant too.

I would advise changing one network at a time: test Live Migration and heartbeat traffic, then move on to the next. Here is a good video by John Savill worth watching.
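
On question 1: the network names really are just labels. What the cluster acts on is the per-network Role property (visible via PowerShell's Get-ClusterNetwork), and the two FCM checkboxes map directly onto the documented 0/1/3 role values. A small Python sketch of that mapping (the function name is mine; the numeric values are the standard ClusterNetworkRole values):

```python
# ClusterNetworkRole values as exposed by Get-ClusterNetwork:
#   0 = no cluster communication  (e.g. the dedicated iSCSI networks in A)
#   1 = cluster communication only (the Heartbeat / Live Migration networks)
#   3 = cluster and client communication (the Host management network)
ROLE_NONE, ROLE_CLUSTER_ONLY, ROLE_CLUSTER_AND_CLIENT = 0, 1, 3

def cluster_role(cluster_use_allowed: bool, clients_allowed: bool) -> int:
    """Translate the two FCM checkboxes into the numeric Role value."""
    if not cluster_use_allowed:
        return ROLE_NONE
    return ROLE_CLUSTER_AND_CLIENT if clients_allowed else ROLE_CLUSTER_ONLY

print(cluster_role(True, True))    # Host management network
print(cluster_role(True, False))   # Heartbeat / Live Migration networks
print(cluster_role(False, False))  # iSCSI networks
```

So toggling "allow cluster traffic" in step I of your proposed process is effectively flipping a network between role 1 and role 0; which network then carries heartbeat vs. Live Migration traffic is decided by metrics and the Live Migration preference order, not by the label.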

PathfinderISUAuthor Commented:
Thanks for the info.  The video URL changed btw but you can Google it.

The Savill video was very helpful to really see where the network traffic is going. One thing I hadn't realized is that only the host that owns the CSV communicates back to the SAN, the rest just communicate to it.

When you say "refresh the configuration" do you just mean right-click on the networks in the FCM and select refresh?  For all three hosts?

Firstly sorry for the link issue.

Secondly, the owner node is known as the coordinator node. It handles metadata changes (create, delete, modify, etc.) along with reads and writes. The other nodes can only perform read and write operations directly. What you are referring to is known as redirected mode, which only comes into play when a non-coordinator node is unable to access the storage directly, which happens during certain operations (like taking a backup). More on this here.

Thirdly, that's exactly what I was referring to when I said refreshing the configuration, but through FCM you wouldn't need to do it on all three nodes.

Hope that helps.
Have you ever looked at the actual network utilization on your NICs? In my cluster with some older EqualLogic storage, I rarely hit 10% on a gigabit link. I am willing to bet that you could use the two 10 Gb links for iSCSI/VM iSCSI communication, another two NICs in a team for the VMs, and a two-NIC team for host/cluster/live migration. That saves you four switch ports per host. I personally use one NIC for all host communication, and it is usually shared with VMs. Another two NICs are for host and guest iSCSI traffic. Any additional NICs are dedicated to guests that need to be on special VLANs, such as the DMZ segment.
Windows Server 2008
