Slow server since migration to Virtual Machine Manager-managed server

CaptainGiblets
I have a file server cluster that has been working fine for about a month. The cluster is two virtual machines on Hyper-V (2016) with VHD Sets attached to them on the hosts. These are imported into Cluster Manager and I have had no issues with them.

Two days ago, I migrated these to new host servers, which are in their own cluster configured using Microsoft VMM.

On the original hosts there was a very simple configuration: four 1 Gb NICs, two in a team for the host and two in a team for the clients. There were also two 10 Gb adapters in a team, which the host used along with a virtual switch connected to this team that the cluster used.

When moving to the new hardware that is configured by VMM, I have tried to segregate networks more. The hosts still have their team on the two 1 Gb NICs, but now the clients' 1 Gb team is separated into the cluster network and the client access network.

Since this move, clients have been complaining that connections are dropping and certain apps are crashing out. If I move the app to another server not under the VMM configuration then it works without issue.

What could be causing this? On all our physical network adapters we have VMQ disabled, as it caused nothing but lag. I noticed VMQ was enabled on all the virtual adapters created, so I disabled it there too, but that didn't affect the performance.

Maybe I should enable VMQ on both physical and virtual? I am not sure; I don't want to cause more issues while everything is running.

I have provided some screenshots below of how my VMM is configured.

While the NICs are teamed on the Windows side, they are not set up in a LAG on the physical switches. Should they be? Would this make a difference?
ClientAccess.png
ClusterComms.png
hostteam.png
logicalnetworks.png
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
I'm not really sure what's trying to be accomplished here?

The network setup for a cluster is usually quite simple. In most of today's builds we have four 10GbE ports via two NICs. We team two for the host and two for the virtual switch that is not shared with the host OS.

We usually have two switches involved so each port in a team hits a different switch so as to not lose connectivity.

What's the point of providing more networks?

A guest cluster would have a similar requirement. And, since the guest cluster is already running on top of a host cluster, there's no real point to adding more network points, since resilience is provided by the host nodes. As long as node affinity and failback settings are in place for the guest cluster nodes, things should run just fine. KISS is our principle when it comes to our cluster setups, along with eliminating Single Points of Failure (SPFs) at the physical host node level.

Author

Commented:
I don't have 4x 10 Gb ports. My servers only have 2x 10 Gb, which are used exclusively for storage traffic.

My 4x 1 Gb ports are split between client and host: two in a team for the host, which will always be VLAN 108, and two for clients, which can be on various VLANs, such as 100 for production servers, 102 for cluster communication, etc.

I think I found the root of the issue. I noticed on my 1 Gb switch that the ports set up with the new VLANs were reporting millions of oversized packets. I changed the switch MTU from 1518 to 9216 and updated to the latest firmware, and the issue seems to have gone. I suspect the VLAN tag was pushing the packet size over 1518?
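The arithmetic behind that suspicion checks out. A quick illustrative sketch (standard Ethernet frame sizes; Python used only as a calculator here):

```python
# Standard Ethernet frame-size arithmetic (illustration, not from the thread).
PAYLOAD_MTU = 1500   # max L3 payload on a standard Ethernet link
ETH_HEADER = 14      # dst MAC + src MAC + EtherType
FCS = 4              # frame check sequence
VLAN_TAG = 4         # 802.1Q tag inserted after the source MAC

untagged_frame = PAYLOAD_MTU + ETH_HEADER + FCS  # 1518
tagged_frame = untagged_frame + VLAN_TAG         # 1522

# A switch whose maximum frame size is 1518 will count full-size
# tagged frames (1522 bytes) as oversized and may drop them.
print(untagged_frame, tagged_frame)  # 1518 1522
```

So a full-size frame plus an 802.1Q tag is 1522 bytes, just over the 1518 limit, which matches the "oversized packets" counters on the switch. (Raising the switch to 9216 works, though most switches also accept a baby-giant setting of 1522 or so just for tagged traffic.)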

I am unsure what long-term effects leaving the MTU at 9216 will have, although the switches are hardly taxed as it is.
Philip Elder, Technical Architect - HA/Compute/Storage

Commented:
Result?
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Sort Name



Author

Commented:
On the client it returns

Name                      DisplayName                    DisplayValue                   RegistryKeyword RegistryValue
----                      -----------                    ------------                   --------------- -------------
Ethernet 4                Jumbo Packet                   Disabled                       *JumboPacket    {1514}
Ethernet 7                Jumbo Packet                   Disabled                       *JumboPacket    {1514}

On the host it returns

NIC1                      Jumbo Packet                   1514                           *JumboPacket    {1514}  --This is 10GB
NIC2                      Jumbo Packet                   1514                           *JumboPacket    {1514}  --This is 10GB
SLOT 3 Port 1             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
SLOT 3 Port 2             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
SLOT 3 Port 3             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
SLOT 3 Port 4             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
vEthernet (Host Cluste... Jumbo Packet                   Disabled                       *JumboPacket    {1514}
vEthernet (Host Manage... Jumbo Packet                   Disabled                       *JumboPacket    {1514}
vEthernet (Host Storag... Jumbo Packet                   Disabled                       *JumboPacket    {1514}
Philip Elder, Technical Architect - HA/Compute/Storage
Commented:
The MTU on the SLOT 3* ports is probably too small.

# Bring the SLOT 3 ports in line with the 10 Gb NICs, then re-check all adapters
Get-NetAdapterAdvancedProperty -Name SLOT* -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 1514
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Sort Name


They should all be the same.

Jumbo Frames/Packets need to be set consistently across the entire network stack, or not at all; otherwise things misbehave.
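The "they should all be the same" check above can be sketched as a simple consistency test. This is an illustration only: the adapter names and values below are hypothetical, mimicking the `Get-NetAdapterAdvancedProperty` output earlier in the thread.

```python
from collections import Counter

# Hypothetical per-adapter max frame sizes, mimicking the
# Get-NetAdapterAdvancedProperty output earlier in the thread.
frame_sizes = {
    "NIC1": 1514,
    "NIC2": 1514,
    "SLOT 3 Port 1": 1500,
    "SLOT 3 Port 2": 1500,
    "vEthernet (Host Cluster)": 1514,
}

# Flag any adapter that differs from the most common value:
# those are the hops where full-size frames will be dropped or fragmented.
baseline, _ = Counter(frame_sizes.values()).most_common(1)[0]
mismatched = sorted(n for n, v in frame_sizes.items() if v != baseline)
print(mismatched)  # ['SLOT 3 Port 1', 'SLOT 3 Port 2']
```

The same idea applies hop by hop: VM vNIC, host vSwitch, physical NIC, and switch port must all agree on the maximum frame size, or the smallest one silently becomes the limit.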

Author

Commented:
It was on the physical switches that I had to update the MTU. Although I will do what you recommended as well. Thank you :)
