Slow network access to/from Domain Controller VM hosted on Server 2012 R2 Standard Hyper-V

For a while now we have been suffering from slow network performance on a Windows Server 2012 R2 Standard VM running on a Windows Server 2012 R2 Standard Hyper-V host.
The bare-metal machine is a Dell PowerEdge R720 with 48 GB RAM and an E5-2630 v2.
The VM has 4 vCores and 32 GB RAM assigned.
The problem is that transfer speeds will not get above 15 MB/s and are usually even below 2 MB/s. This happens on almost all client systems: so far on Windows 7, Windows 8.1 and Ubuntu. Strangely, an old Windows 2008 server (which is now only a domain member) achieves the proper speeds of 50-100 MB/s. So far it is the only system achieving normal speeds.

Of course I have already been searching around and tried a couple of things, none of which solved it:

- VMQ settings on the host NIC (see "Slow VM network performance with Broadcom NIC")
- Disabling Default Domain Policy digital signing (slow-networksmbcifs-problem/)
- Changing physical switch connections to the host (to rule out a faulty switch)
- Having Dell swap the mainboard and NIC card (to rule out hardware failure)
- Recreating the network team and virtual switch
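For reference, the VMQ state on the host's physical NICs can be inspected and disabled from PowerShell on Server 2012 R2 (the adapter name "NIC1" below is a placeholder; substitute your own):

```powershell
# List VMQ state for all physical NICs on the host
Get-NetAdapterVmq | Format-Table Name, InterfaceDescription, Enabled

# Disable VMQ on a specific NIC (adapter name "NIC1" is hypothetical)
Disable-NetAdapterVmq -Name "NIC1"
```

A reboot of the host is generally recommended after changing VMQ settings so the virtual switch picks up the change.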

Does anyone have any clue as to what may be causing the problem? I'm starting to pull my hair out.
Robin Schrievers asked:
Robin CM (Senior Security and Infrastructure Engineer) commented:
Different hardware, but: I've had serious network performance issues on a load of R710 Hyper-V hosts when using the Broadcom NetXtreme I driver that comes with Windows Server 2012 R2. Updating to the latest driver from Broadcom fixed it. If you've not already done this, it'd definitely be worth a try.
Robin Schrievers (Author) commented:
I should have put that in my list of things I tried. When we had the hardware replaced, everything was updated to the latest firmware. I'm not entirely sure of the current version of the Broadcom NetXtreme I drivers, but I at least installed the latest version available from the Dell support site.
Version installed is 17.0.0.3, driver date 6/3/15

The host system itself is as fast as it should be, by the way: ~100 MB/s file transfers.
Robin CM (Senior Security and Infrastructure Engineer) commented:
Are you sharing the NICs that are the virtual switch uplink with the host partition?
What teaming method are you using: Windows or the driver itself?
Can you split the host onto a dedicated NIC, have another NIC dedicated to the virtual switch, and not use teaming at all?

It'd be interesting to try reducing the MTU on one of the VMs down to something small-ish like 900 to see if that has any effect.
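Inside a Windows guest, the MTU can be lowered without touching the host; a quick sketch (the interface alias "Ethernet" is an assumption, check yours with `Get-NetIPInterface`):

```powershell
# Lower the IPv4 MTU on the guest's NIC to 900 bytes for testing
Set-NetIPInterface -InterfaceAlias "Ethernet" -AddressFamily IPv4 -NlMtuBytes 900
```

If throughput improves at the smaller MTU, that points toward an offload or fragmentation problem in the NIC driver path rather than a configuration issue higher up.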

Check out these perfmon counters and see if they're picking up anything: https://technet.microsoft.com/en-us/library/jj574079.aspx#bkmk_np
Robin CM (Senior Security and Infrastructure Engineer) commented:
Note also that if the vSwitch has its own NIC uplink, it is this that you need to do the VMQ disable (or correctly configure VMQ) on, not the one dedicated for host traffic. I think you've probably realised this, but just checking :-)

Personally, I don't like sharing NICs between host traffic and VM traffic. My hosts have a Windows LACP team of two NICs for VM traffic and another 2 NIC LACP team for the host.
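A host layout of that kind can be sketched in PowerShell (team, adapter, and switch names here are all hypothetical):

```powershell
# Two-NIC LACP team dedicated to VM traffic
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp

# Bind the virtual switch to the team's interface, without sharing it with the host
New-VMSwitch -Name "vSwitch-VMs" -NetAdapterName "VMTeam" -AllowManagementOS $false

# Separate two-NIC LACP team for host traffic (no vSwitch on top of it)
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC3","NIC4" -TeamingMode Lacp
```

Note that LACP requires matching port-channel configuration on the physical switch; switch-independent teaming is the alternative when the switch side can't be changed.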
Robin CM (Senior Security and Infrastructure Engineer) commented:
I'm not sure if this is still relevant, but an interesting thing to check is how much free RAM your host has (and CPU too, though the article doesn't mention that). In this guy's case, network packets were being discarded because there was too much running on the host and not enough buffer space available for the network traffic: http://blogs.technet.com/b/rmilne/archive/2014/07/18/retrieving-packets-received-discarded-perfmon-counter-from-multiple-servers.aspx (see the last few paragraphs)
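The discarded-packets counter mentioned in that article can be pulled straight from PowerShell rather than the Perfmon GUI; a minimal sketch:

```powershell
# Sample the discard counters for all interfaces on the local host;
# any non-zero, growing value suggests buffer exhaustion
Get-Counter -Counter "\Network Interface(*)\Packets Received Discarded"
Get-Counter -Counter "\Network Interface(*)\Packets Outbound Discarded"
```

Adding `-ComputerName` lets you sample the Hyper-V host remotely while reproducing the slow transfer from a VM.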
Robin Schrievers (Author) commented:
Thanks for your suggestions.
I'm going to try and check those.

The teaming is done using Windows networking, not the driver.

I'll give an update when I know more.
Philip Elder (Technical Architect - HA/Compute/Storage) commented:
After disabling VMQ on _all_ Broadcom NIC ports, was the host rebooted?

I have an EE article here, "Some Hyper-V Hardware and Software Best Practices", that may provide some insight on the host's hardware setup. BIOS settings are critical, for one. NIC teaming is another.

Are there one or two CPUs in the server? And why 32 GB of vRAM for a VM that is operating as DC, file, and print server (and what else)? I suggest bumping that back to 8 GB unless there is a specific LoB setup on the VM requiring it. Our virtual DCs run with 2-4 GB, and in some cases 1 GB, of vRAM assigned, as that's all they are doing: ADDS, DNS, DHCP, File, and Print.
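For what it's worth, resizing a VM's static memory is a one-liner on the host (the VM must be shut down first; the VM name "DC01" is a placeholder):

```powershell
# Reduce the DC VM's assigned static memory to 8 GB
Set-VMMemory -VMName "DC01" -StartupBytes 8GB
```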
Robin Schrievers (Author) commented:
VMQ was disabled on all Broadcom NICs and the host was indeed rebooted afterwards -> no difference.

I've also checked earlier with C3/C6 states disabled for the CPUs -> no difference.

The server has only one physical CPU installed (6 cores, 2 logical cores per physical).

32 GB because the host is not running any other VMs. I can lower the amount of memory if that would have a positive impact on the system.
The DC is indeed only doing ADDS, DNS, DHCP, file and print, and 2 or 3 other low-impact processes.
Robin CM (Senior Security and Infrastructure Engineer) commented:
Sizing DCs can be quite complex, but the rule of thumb I use is to try to keep all the AD data in RAM all the time (plus leave plenty for the OS itself, antivirus, backup agents etc. too). Clearly it depends on the size and complexity of your environment, but RAM is cheap enough that assigning plenty to something as core as a DC helps prevent performance issues across the board.
There's a good sizing resource available here: http://social.technet.microsoft.com/wiki/contents/articles/14355.capacity-planning-for-active-directory-domain-services.aspx#Virtualization_Considerations_for_RAM

If the same (or indeed another) server is doing file serving, then any spare RAM will be used to cache files and thus speed up file server performance too. Frequently accessed files (even large ones like ISOs) can be served entirely from RAM. Users love this, which means fewer support calls for me :-)
Robin CM (Senior Security and Infrastructure Engineer) commented:
Don't suppose you can source some Intel NICs, just to rule out Broadcom drivers/hardware?
Robin Schrievers (Author) commented:
I've been working on this last night to try some of the suggestions.

It appears that destroying the NIC team on the host and creating it inside the VM solved it.
What I did:

- Removed the NIC team on the Hyper-V host
- Created 2 vSwitches, each with one physical NIC attached
- Created two network adapters on the VM, one connected to vSwitch one, the other connected to vSwitch two
- Created a NIC team on the VM with the two network adapters
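The steps above could be sketched in PowerShell roughly as follows (team, switch, adapter, and VM names are all hypothetical; note that NIC teaming inside a 2012 R2 guest must use switch-independent mode, and the vNICs need teaming enabled on the host side):

```powershell
# On the host: remove the existing team
Remove-NetLbfoTeam -Name "HostTeam"

# One external vSwitch per physical NIC
New-VMSwitch -Name "vSwitch1" -NetAdapterName "NIC1" -AllowManagementOS $false
New-VMSwitch -Name "vSwitch2" -NetAdapterName "NIC2" -AllowManagementOS $false

# Two synthetic NICs on the VM, one per vSwitch, with guest teaming allowed
Add-VMNetworkAdapter -VMName "DC01" -SwitchName "vSwitch1"
Add-VMNetworkAdapter -VMName "DC01" -SwitchName "vSwitch2"
Set-VMNetworkAdapter -VMName "DC01" -AllowTeaming On

# Inside the guest: team the two adapters (switch-independent only)
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent
```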

Thanks for the suggestions
Robin CM (Senior Security and Infrastructure Engineer) commented:
How strange! I'd love somebody from Microsoft to take a look at this and work out what's going on. Glad it's sorted.
Philip Elder (Technical Architect - HA/Compute/Storage) commented:
The other option would have been to recreate the team and bind the vSwitch to that team. That may have corrected the problem as well.

It's unfortunate that Broadcom can be such a pain. We only deploy on Intel or Mellanox.