Joseph Moody (United States of America) asked:

Live Migrations Are Slow on a 10Gb Network

When doing live migrations, transfers do not break 1 Gbps even though we have a 10Gb connection from host to host. The transfers mostly hang around the 600 - 900 Mbps range, and a typical VM takes about 10 minutes to move. In the very last second or two of a migration, Task Manager will show a transfer speed of around 9 Gbps.
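For reference, here is a rough way to watch the actual line rate on the hosts during a migration instead of eyeballing the Task Manager graph. This is only a sketch; the adapter name filter is a placeholder and needs to match the 10Gb NICs:

# Sample host network throughput once per second during a migration (CookedValue is bytes/sec).
# The InstanceName filter below is only an example; adjust it to match the 10Gb adapters.
Get-Counter -Counter '\Network Interface(*)\Bytes Total/sec' -SampleInterval 1 -MaxSamples 30 |
    ForEach-Object {
        $_.CounterSamples |
            Where-Object { $_.InstanceName -like '*x540*' } |
            Select-Object InstanceName, @{ n = 'Gbps'; e = { [math]::Round($_.CookedValue * 8 / 1e9, 2) } }
    }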

What can I do to get that kind of speed the whole time? Here is my setup and what I have tried.

(2) Windows Server 2012 R2 servers running only the Hyper-V role.
Intel S2600GZ board.
(2) 8-core 2.6 GHz Xeon E5-2650 processors.
96 GB of memory.
(2) Intel X540-T2 10Gb NICs (four 10Gb ports per server).

I've enabled jumbo packets and can successfully ping with an 8,000-byte unfragmented packet from host to host (see the verification sketch after this list).
I've disabled all C-states in the BIOS.
These servers have a four-port 1Gb NIC in them. I've disabled that in the BIOS.
I've checked the CPU during transfers - no core maxes out at all (usage doesn't even break 10%).
I've tried a variety of live migration setups (with compression and without). No difference that I can see.
I've tried a direct connection between the two hosts with no switch in between. No difference.
I've teamed the NICs in Windows - that seemed to add about 300 Mbps to the transfers (which is why I see the 600 - 900 range).
I've configured the NICs to use the Hyper-V profile in the Intel management software.
I've updated the firmware on the servers to the latest available.
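For anyone checking the same things, the jumbo-frame test and the teaming step above look roughly like this. The adapter names, team name, and IP address are placeholders:

# Verify an 8000-byte payload crosses without fragmentation (-f = don't fragment, -l = payload size)
ping -f -l 8000 10.0.1.2

# Confirm jumbo packets are actually applied on the 10Gb adapters
Get-NetAdapterAdvancedProperty -Name "10Gb-1","10Gb-2" -RegistryKeyword "*JumboPacket"

# The team was created roughly like this (the Dynamic load balancing algorithm requires 2012 R2)
New-NetLbfoTeam -Name "LM-Team" -TeamMembers "10Gb-1","10Gb-2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic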

Finally, I let SCVMM manage the NICs (knowing that I shouldn't expect more than 3.5 Gbps due to VMQ). The live migration traffic then gets split between three vNICs (live migration, host management, client access). I found this behavior very odd because the IP range used on the Live Migration network is the only range specified under Migration Settings.
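For reference, the same migration settings can be inspected and pinned down outside of SCVMM with the Hyper-V cmdlets. A rough sketch, with 10.0.1.0/24 standing in for whatever subnet the live migration network really uses:

# Show which networks this host will use for live migration traffic
Get-VMMigrationNetwork

# Restrict live migration to the dedicated 10Gb subnet only
Get-VMMigrationNetwork | ForEach-Object { Remove-VMMigrationNetwork $_.Subnet }
Add-VMMigrationNetwork "10.0.1.0/24"

# Compare the compression and SMB transports; SMB can spread across multiple 10Gb ports via SMB Multichannel
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression
# Set-VMHost -VirtualMachineMigrationPerformanceOption SMB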

So where should I start looking now?
SOLUTION
Philip Elder (Canada)

Joseph Moody

ASKER

Thank you for the reply, Philip. Great EE article and blog! I will be digging through both resources.

Three things to update this question with:

BIOS has high performance enabled for the CPU.
Cables are all Cat 6a - NICs all show 10Gb connections.
Real-time protection has the correct Hyper-V/SCVMM exclusions (a sample exclusion list is sketched below).
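For anyone else chasing the same issue, the exclusions in question look roughly like this, assuming the Add-MpPreference cmdlet (Defender/SCEP) is available on the hosts, and with example paths in place of the real VM storage locations:

# Exclude the Hyper-V management and worker processes from real-time scanning
Add-MpPreference -ExclusionProcess "vmms.exe","vmwp.exe"

# Exclude the folders holding VM configuration and virtual disk files (example paths)
Add-MpPreference -ExclusionPath "D:\Hyper-V","C:\ClusterStorage"

# Exclude the virtual disk file types wherever they live
Add-MpPreference -ExclusionExtension "vhd","vhdx","avhd","avhdx"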
SOLUTION
ASKER CERTIFIED SOLUTION
Thank you both for your help! The network was not my bottleneck - it was my disks.
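For anyone who lands here later, a quick way to confirm that storage rather than the network is the ceiling is a DiskSpd run against the volume holding the VHDX files. The file path, test file size, and duration below are just example values:

# 60-second large-block 50/50 read/write run with host caching disabled (example path and size)
diskspd.exe -c20G -d60 -b512K -o8 -t4 -w50 -Sh D:\VMs\disktest.dat

A 10Gb link is roughly 1.2 GB/s of payload, so if the test tops out well below that, the storage is the limiting factor no matter what the NICs do.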
That is what I suspected. Sorry about that.

MO