andrew_transparent

asked on

Server 2012 Hyper-V guest performance

I have a basic Hyper-V setup running Server 2012 Standard as the host (on Dell R520 hardware).
It's just one server with 4 x 600GB SAS drives in RAID10 and 6 Broadcom NICs (1 for management and 5 that are teamed).
I've installed 4 guest OSes (2 x Win2012 and 2 x Win2008), basic installs, but I'm noticing serious lag and finding that ping times between the servers locally are brutal. Access and navigation in these virtual servers are noticeably slow, with response times ranging from 100ms to 150ms+.
All the VMs have their NICs pointed to the virtual switch on the teamed NICs.

Here's what I've tried:
- disabled TCP offload on the host (rough commands below)
- disabled IPv6 on the host and all guest VMs
- checked the hosts file and commented out the IPv6 entry so that only the localhost entry (127.0.0.1) exists
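For the offload part, I ran roughly the following from an elevated PowerShell prompt on the host (the adapter name here is just a placeholder, not my real NIC name):

# check the current global TCP offload settings
Get-NetOffloadGlobalSetting

# disable TCP chimney offload globally
Set-NetOffloadGlobalSetting -Chimney Disabled

# disable checksum and large-send offload on a given adapter ("Ethernet 2" is a placeholder)
Disable-NetAdapterChecksumOffload -Name "Ethernet 2"
Disable-NetAdapterLso -Name "Ethernet 2"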

Anyone know what the issue is?
Member_2_4839798

Hey,

This could be disk I/O; can you check things like the disk queue length? An easy win is the HDD activity light if you're in front of the box.
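If you can't see the box, something like this from PowerShell on the host will sample the queue depth (counter path assumes an English-language install):

# average physical disk queue length on the host: 15 samples, 2 seconds apart
Get-Counter -Counter "\PhysicalDisk(_Total)\Avg. Disk Queue Length" -SampleInterval 2 -MaxSamples 15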

Cheers

MC
andrew_transparent

ASKER

If this info is relevant, I've allocated the VMs at least 2 CPUs and 8GB RAM.

Unfortunately I'm nowhere near the physical server; I'm working on it remotely.

However, I'm monitoring the disk I/O from the host, and the disk activity in Task Manager sits at 0 KB/s, spikes up to about 30-100 KB/s, then drops back to 0 - it doesn't seem like there's any bottleneck there. During that time, I just have a few cmd windows open running continual pings.

Pinging from the host itself to other physical devices on the network gives the typical 1ms response.
Aaron Tomosky
How much RAM/CPU is in the box, and what is the RAM/CPU of the guests?
The physical host is a Dell server with 2 x Xeon E5-2420 @ 1.90GHz 6-core CPUs and 64GB of RAM (4 x 16GB DDR3 ECC), with 4 x 600GB 15K SAS drives configured in RAID10.

I've spun up 4 guests:
VM1 - server 2012 - 2CPU / 8GB RAM
VM2 - server 2012 - 1CPU / 2GB RAM
VM3 - server 2008 - 2CPU / 8GB RAM
VM4 - server 2008 - 4CPU / 8GB RAM

My intention is to run a terminal server, an Exchange server, and a database server, but for the moment the VMs are just baseline installations of the OS and plain member servers in the domain.
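For reference, the allocations above can be confirmed from the host with the Hyper-V PowerShell module:

# list the guests with their vCPU and startup memory allocations
Get-VM | Select-Object Name, State, ProcessorCount, MemoryStartup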
No problem there, unless something odd is going on, like all the RAM sitting on one CPU.
I'd start looking at the teaming. How are the NICs teamed? If you remove four and just use one, does the problem go away? What about with two? Etc. Simplify, and if the problem is gone, slowly add complexity until it breaks - that's my motto.
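If the vSwitch is bound to the team, something along these lines makes that quick to test (the switch, team and NIC names below are placeholders):

# point the existing external vSwitch at a single physical adapter instead of the team
Set-VMSwitch -Name "VM vSwitch" -NetAdapterName "NIC2"

# or pull members out of the team one at a time
Remove-NetLbfoTeamMember -Name "NIC3" -Team "VMTeam"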
As for NIC teaming, I have a total of 6 NICs, 2 onboard and a quad-port card (all 1GbE), configured using the Server 2012 native teaming feature:
1 onboard NIC for management
5 NICs (1 onboard plus the quad-port card) in 1 team, used as the vSwitch for Hyper-V
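The team and switch were built with the in-box cmdlets, roughly along these lines (the team, switch and adapter names here are placeholders, not the exact ones I used):

# create the team from the five adapters
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC2","NIC3","NIC4","NIC5","NIC6" -TeamingMode SwitchIndependent

# bind the Hyper-V external switch to the team, keeping it off the management NIC
New-VMSwitch -Name "VM vSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false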

They're all Broadcom, and I hear horror stories about them... If I'd known better, I would have gone with Intel NICs, but by then this server was already ordered...

I'll give it a shot at breaking apart the team, see how it goes, and keep you posted.

Initially I was thinking that perhaps the IPv6 settings were taking priority over IPv4, so I've been disabling IPv6 on the NICs to see if it has any effect. I've also read that TCP offloading should be disabled...
As suggested, I've broken apart the team and configured the vSwitch to use each of the individual NICs - same result, with the high ping times on both the onboard NIC and the quad-port card.
Then I began re-adding NICs to the team one at a time, testing ping times and access to various items on the network - still the same result.
Literally, there's no change whether teaming is set up or not.

Anyone experience things like this?
Let me know if I have this right (ping times):
VM to VM - slow
VM to host - slow
Host to VM - slow
Host to other - fast
VM to other - ?
Here are the average ping response times:
host --> vm: 1ms (one spike up to about 75ms every 100 pings or so)
host --> any other device: 1ms
other physical device --> host: 1ms
vm2 --> vm3: 30ms - 90ms
vm2 --> host: 40ms - 130ms
vm2 --> vm4: 30ms - 130ms
vm3 --> vm4: 30ms - 80ms
vm3 --> other physical device: 30ms - 160ms
other physical device --> any vm: 1ms

The lag seems to be within and coming from the VMs.
ASKER CERTIFIED SOLUTION
Aaron Tomosky

This solution is only available to members.
To access this solution, you must be a member of Experts Exchange.
Perfect!
This did it!
What's strange is that we have another similar server set up with VMQ enabled that isn't experiencing this, but now we know where to look if we notice any lag.
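Since VMQ is the thing to watch, here's roughly how we're checking it and, if needed, turning it off on the Broadcom adapters (the adapter name is a placeholder):

# see which adapters have VMQ enabled
Get-NetAdapterVmq

# disable VMQ on a specific adapter
Disable-NetAdapterVmq -Name "NIC2"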
Thanks aarontomosky!