Robin Schrievers

asked on

Slow network access to/from Domain Controller VM hosted on Server 2012 R2 Standard Hyper-V

For a while now we have been suffering from slow network performance on a Windows Server 2012 R2 Standard VM running on a Windows Server 2012 R2 Standard Hyper-V host.
The bare-metal machine is a Dell PowerEdge R720, 48 GB RAM, Xeon E5-2630 v2.
The VM has 4 vCPUs and 32 GB RAM assigned.
The problem is that transfer speeds will not get above 15 MB/s and are usually even below 2 MB/s. This happens on almost all client systems: so far on machines running Windows 7, Windows 8.1 and Ubuntu. Strangely, an old Windows Server 2008 machine (now only a domain member) achieves the proper speeds of 50-100 MB/s; so far it is the only system getting normal speeds.

Of course I have already been searching around and tried a couple of things, none of which solved it:

- VMQ settings on the host NIC (Slow VM network performance with Broadcom NIC) - a command sketch follows this list
- Disabled digital signing via the Default Domain Policy (slow-networksmbcifs-problem/)
- Changed the physical switch connections to the host (to rule out a faulty switch)
- Had the mainboard and NIC card swapped by Dell (to rule out hardware failure)
- Recreated the network team and virtual switch
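
For reference, the VMQ check/disable was roughly as sketched below; the adapter name "NIC1" is a placeholder for the Broadcom port in question:

    # Show the current VMQ state for all physical adapters on the host
    Get-NetAdapterVmq | Format-Table Name, InterfaceDescription, Enabled

    # Disable VMQ on a specific adapter
    Disable-NetAdapterVmq -Name "NIC1"
    # Enable-NetAdapterVmq -Name "NIC1"   # to turn it back on later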

Does anyone have any clue as to what may be causing this? I'm starting to pull my hair out.
Robin CM

Different hardware, but: I've had serious network performance issues on a load of R710 Hyper-V hosts using the Broadcom NetXtreme I driver that comes with Windows Server 2012 R2. Updating to the latest driver from Broadcom fixed it. If you've not already done this, it'd definitely be worth a try.
Robin Schrievers

ASKER

I should have put that in my list of things I tried. When we had the hardware replaced, everything was updated to the latest firmware. I'm not entirely sure of the current Broadcom NetXtreme I driver version, but I at least installed the latest version available from the Dell support site.
The installed version is 17.0.0.3, driver date 6/3/15.

The host system itself is as fast as it should be, by the way: ~100 MB/s file transfers.
ASKER CERTIFIED SOLUTION
Robin CM

Note also that if the vSwitch has its own NIC uplink, that uplink is the adapter you need to disable (or correctly configure) VMQ on, not the one dedicated to host traffic. I think you've probably realised this, but just checking :-)
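
If it helps, a quick way to find which physical adapter an external vSwitch is actually bound to (so VMQ gets disabled on the right one) could look like this - the switch name "vSwitch1" is a placeholder:

    # Identify the uplink adapter behind the external vSwitch
    $sw  = Get-VMSwitch -Name "vSwitch1"
    $nic = Get-NetAdapter -InterfaceDescription $sw.NetAdapterInterfaceDescription

    # Disable VMQ on that uplink only
    Disable-NetAdapterVmq -Name $nic.Name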

Personally, I don't like sharing NICs between host traffic and VM traffic. My hosts have a Windows LACP team of two NICs for VM traffic and another two-NIC LACP team for the host.
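
Roughly, that layout could be built like this (adapter and team names are placeholders, and LACP also needs the matching configuration on the physical switch ports):

    # Two-NIC LACP team for host/management traffic
    New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

    # Two-NIC LACP team for VM traffic, with the external vSwitch bound to the team
    # and the management OS kept off that switch
    New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "VM vSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false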
I'm not sure if this is still relevant, but it's an interesting thing to check: how much free resource your host has in terms of RAM (and also CPU, though the article doesn't mention that). In that case, network packets were being discarded because too much was running on the host and there wasn't enough buffer space available for the network traffic: http://blogs.technet.com/b/rmilne/archive/2014/07/18/retrieving-packets-received-discarded-perfmon-counter-from-multiple-servers.aspx (see the last few paragraphs)
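
A quick way to pull the counter that article talks about (and the host's free memory) is something like:

    # Discarded inbound packets per interface on the host
    Get-Counter '\Network Interface(*)\Packets Received Discarded'

    # Free memory on the host
    Get-Counter '\Memory\Available MBytes'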
Thanks for your suggestions.
I'm going to try and check those.

The teaming is done using Windows networking, not the Broadcom driver.

I'll give an update when I know more.
SOLUTION
VMQ was disabled on all Broadcom NICs and the host was indeed rebooted afterwards -> no difference.

I've also checked earlier with the C3/C6 states disabled for the CPUs -> no difference.

The server has only one physical CPU installed (6 cores, 2 logical threads per core).

The VM has 32 GB because the host isn't running any other VMs. I can lower the amount of memory if that would have a positive impact on the system.
The DC is indeed only doing AD DS, DNS, DHCP, file and print, and 2 or 3 other low-impact processes.
Sizing DCs can be quite complex, but the rule of thumb I use is to try and keep all the AD stuff in RAM all the time (plus leave plenty for the OS itself, antivirus, backup agents etc. too). Clearly it depends on the size and complexity of your environment, but RAM is cheap enough that assigning plenty to something as core as a DC helps prevent performance issues across the board.
There's a good sizing resource available here: http://social.technet.microsoft.com/wiki/contents/articles/14355.capacity-planning-for-active-directory-domain-services.aspx#Virtualization_Considerations_for_RAM
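
As a rough starting point for that rule of thumb, you can check the size of the AD database itself; the path below is the default NTDS location, so adjust it if the database was moved:

    # Size of the AD database file in MB (default location)
    Get-Item 'C:\Windows\NTDS\ntds.dit' |
        Select-Object Name, @{n='SizeMB'; e={[math]::Round($_.Length / 1MB)}}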

If the same (or indeed another) server is doing file serving, then any spare RAM will be used to cache files and thus speed up file server performance too. Frequently accessed files (even large ones like ISOs) can be served entirely from RAM. Users love this, which means fewer support calls for me :-)
Don't suppose you can source some Intel NICs, just to rule out Broadcom drivers/hardware?
I was working on this last night to try some of the suggestions.

It appears that destroying the NIC team on the host and creating it inside the VM solved it.
What I did (a rough PowerShell sketch follows the list):

- removed the NIC team on the Hyper-V host
- created 2 vSwitches, each with one physical NIC attached
- created two network adapters on the VM, one connected to vSwitch one, the other connected to vSwitch two
- created a NIC team inside the VM with the two network adapters
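
For anyone following along, the rough PowerShell equivalent of those steps is sketched below. All names (NICs, switches, the VM, the guest adapters) are placeholders; the vNICs need teaming allowed on the Hyper-V side, and the team inside the guest is created switch-independent:

    # On the host: remove the old team and create one external vSwitch per physical NIC
    # (whether the management OS shares either switch depends on your setup)
    Remove-NetLbfoTeam -Name "HostTeam"
    New-VMSwitch -Name "vSwitch1" -NetAdapterName "NIC1" -AllowManagementOS $false
    New-VMSwitch -Name "vSwitch2" -NetAdapterName "NIC2" -AllowManagementOS $false

    # Give the DC VM one vNIC on each switch and allow teaming on them
    Add-VMNetworkAdapter -VMName "DC01" -SwitchName "vSwitch1"
    Add-VMNetworkAdapter -VMName "DC01" -SwitchName "vSwitch2"
    Get-VMNetworkAdapter -VMName "DC01" | Set-VMNetworkAdapter -AllowTeaming On

    # Inside the guest: team the two virtual adapters (switch-independent mode)
    New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent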

Thanks for the suggestions
How strange! I'd love somebody from Microsoft to take a look at this and work out what's going on. Glad it's sorted.
The other option would have been to recreate the team and bind the vSwitch to that team. That may have corrected the problem as well.

It's unfortunate that Broadcom can be such a pain. We only deploy on Intel or Mellanox.