jmateknik (Denmark) asked:

Teaming in a 2012 R2 2-node Hyper-V Cluster with 6 x 1 Gb NICs (FC-based storage)

Our solution is Fibre Channel-based, so Ethernet is not used for storage but solely for Management, Cluster, Live Migration and VM traffic.

Our failover cluster consists of two hosts with 256 GB of RAM each. Each host utilizes only half of that under normal service, but either one can carry all VMs if needed. This means that around 100 GB worth of VM RAM needs to fail over, and even with Compression in 2012 R2 that can take some time when NICs and switches only run 1 Gbit.
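For reference, the Compression setting mentioned here is the host-level migration performance option; a minimal sketch of how it can be checked and set on each host (Compression is the 2012 R2 default, so this is just a verification step):

# Sketch: inspect the live migration performance option and concurrent migration limit on a host.
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption, MaximumVirtualMachineMigrations

# Compression is the 2012 R2 default; SMB is the alternative aimed at RDMA-capable NICs.
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression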

Teaming: Switch-Dependent vs. Switch-Independent
Microsoft's recommendation is switch-independent teaming, because that gives more virtual machine queues (VMQs) and less complexity in the infrastructure. The downside is that inbound traffic cannot exceed the bandwidth of a single team member.

1. But how much does VMQ matter when we don't have 10 Gbit adapters and when the VM traffic itself is rather limited?

2. What is your recommended teaming/converged network solution in this scenario?
NB: I did read a lot about converged networking up front, e.g. the Hyper-V 2012 R2 Network Architectures series.
ASKER CERTIFIED SOLUTION from Cliff Galiher (United States)

[Accepted solution text is available to Experts Exchange members only.]
jmateknik (ASKER):
Thank you Cliff!

"VMQ is turned off internally (even if it says enabled) at 1Gb speeds. So you will get zero benefit from VMQ."
I am really happy that I can simply ignore that consideration.

"Live migration traffic is *not* limited to the speed of a single NIC in 2012, even in switch independent teams."
I have a vSwitch on top of the Switch Independent Team with 3 vEthernet adapters set up for Management, Cluster and LiveMigration (QoS is configured for bandwidth control among these). I have also configured Live Migration to use the interface of the vEthernet(LiveMigration) adapter. What you say basically means that the vSwitch, the Team and the interface are bypassed. Could you elaborate, or do you have any links that shed light on the matter?
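For context, my converged setup was built roughly like this - a sketch only, where the NIC names, adapter names and bandwidth weights are placeholders, not our exact values:

# Switch-independent team with the Dynamic load balancing algorithm across the 1 Gb NICs:
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2","NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# vSwitch on top of the team, using weight-based minimum bandwidth for QoS:
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# The three host vEthernet adapters:
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Management"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "Cluster"
Add-VMNetworkAdapter -ManagementOS -SwitchName "ConvergedSwitch" -Name "LiveMigration"

# Relative minimum bandwidth weights (placeholder values):
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40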

All the best!
I am not saying that it is bypassed at all. If you set QoS at any level, it will be honored at that level. I *am* saying that multichannel support is deeply integrated at all levels. A live migration will say "I have 10Gb/s of data to send" to the vEthernet adapter. The adapter checks its QoS and, as long as it doesn't need to throttle, passes it on to the switch. The switch applies its logic and passes it on to the team. The team (in switch independent mode) decides whether it needs to send traffic to only one NIC to honor switch independence, OR, if the receiving machine supports SMB Multichannel, it can pass traffic to all NICs in the team (which still honors switch independence, because SMB Multichannel handles making sure the NICs don't loop back on themselves) and then tells the upper stacks "I can take 1Gb/s" or "6Gb/s" or whatever.

That is a gross oversimplification. But the point is that the switch was not bypassed. Nor was the virtual adapter. Nor the team. Because of the deep integration of SMB Multichannel, it kicks in as close to the hardware as possible for application loads that support it, so you get its benefit even as the rest of the networking stack still does its job.
After writing my post and then reading it again, I felt that it revealed *my* superficial understanding of the matter (learning potential). Your answer gives me the feeling that you've got a much deeper understanding, and there you have the challenge of explaining this stuff to a novice. I do feel, though, that I get a lot of what you are trying to say, and that the whole "bypass" approach I introduced doesn't make sense. When I run Get-SmbMultichannelConnection, the system indeed sees more than one path to the other host.
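For anyone checking the same thing, these are the cmdlets I used to look at the paths (output omitted):

# Sketch: verify that SMB Multichannel sees multiple interfaces/paths between the hosts.
Get-SmbMultichannelConnection        # one row per client/server interface pair in use
Get-SmbClientNetworkInterface        # client-side interfaces SMB considers usable
Get-SmbServerNetworkInterface        # server-side interfaces SMB considers usable
Get-SmbMultichannelConstraint        # any constraints limiting which interfaces are used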

I cross-read some posts and stumbled upon three interesting posts from Mr. WorkingHardInIT, from before and after the introduction of the Dynamic load balancing algorithm. It would seem that if the team is using the Dynamic algorithm (and there are very few reasons not to), what you say holds true (see the snippet after the links below):

Teamed NIC Live Migrations Between Two Hosts In Windows Server 2012 Do Use All Members

Live Migration over NIC Team in Switch Independent Mode With Dynamic Load Balancing & TCP/IP in Windows Server 2012 R2

Live Migration over NIC Team in Switch Independent Mode With Dynamic Load Balancing & Compression in Windows Server 2012 R2
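A quick sketch of how to confirm (or switch to) the Dynamic algorithm on the team - the team name is a placeholder:

# Confirm the team is switch independent and using the Dynamic load balancing algorithm:
Get-NetLbfoTeam | Select-Object Name, TeamingMode, LoadBalancingAlgorithm

# Switch an existing team over to Dynamic if it is still on a 2012-era algorithm:
Set-NetLbfoTeam -Name "ConvergedTeam" -LoadBalancingAlgorithm Dynamic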

Final question
Live Migration networks can be configured both in the Hyper-V Manager console (incoming traffic) *and* in the Failover Clustering console. With this whole discussion in mind, how do I configure these ideally (having 3 "converged" networks)?

Anders
"Ideally" - that's a heavily biased word. And truthfully, each network is different based on the uniqueness of the data and how it is used and accessed, employee habits and such. I could not begin to accurately answer that.
It was not meant as "ideal entire network solution" but rather as the recommended best-practice way to configure Live Migration traffic, since that can be done both through Hyper-V Manager *and* the Failover Cluster console. I.e., should I only use Failover Cluster Manager?
Yes, use the cluster manager, not Hyper-V Manager.
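For completeness, a minimal sketch of what that looks like in PowerShell (this is the parameter the Live Migration Settings dialog in Failover Cluster Manager writes), assuming the cluster network carrying migrations is named "LiveMigration" - the name is a placeholder:

# List the cluster networks first to identify their names and roles:
Get-ClusterNetwork | Format-Table Name, Role, Address

# Exclude every cluster network except "LiveMigration" from live migration traffic:
$exclude = (Get-ClusterNetwork | Where-Object { $_.Name -ne "LiveMigration" }).ID
Get-ClusterResourceType -Name "Virtual Machine" |
    Set-ClusterParameter -Name MigrationExcludeNetworks -Value ($exclude -join ";")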
I recognize that my second question, "What is your recommended teaming/converged network solution in this scenario?", is a hard question to answer. The answers given helped me a lot - thanks!

Anders