Slow server since migration to Virtual Machine Manager-managed hosts

I have a file server cluster that has been working fine for about a month. The cluster is two virtual machines on Hyper-V (Windows Server 2016) with VHD Sets attached to them on the hosts. These are imported into Failover Cluster Manager and I have had no issues with them.

Two days ago, I migrated these VMs to new host servers, which are in their own cluster configured using Microsoft System Center Virtual Machine Manager (VMM).

On the original hosts there was a very simple configuration: four 1GbE NICs, two in a team for the host and two in a team for the clients. There were also two 10GbE adapters in a team, which the host used, along with a virtual switch connected to that team for the cluster traffic.

When moving to the new hardware configured by VMM, I have tried to segregate the networks more. The hosts still have their team on the 2x 1GbE, but the clients' 1GbE team is now separated into a cluster network and a client access network.

Since this move, clients have been complaining that connections are dropping and certain apps are crashing out. If I move an app to another server not under the VMM configuration, it works without issue.

What could be causing this? On all our physical network adapters we have VMQ disabled, as it caused nothing but lag. I noticed VMQ was enabled on all the virtual adapters that were created, so I disabled it there too, but that didn't affect the performance.
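For reference, this is roughly how I checked and disabled it, in an elevated PowerShell on each host (the adapter name in the disable line is just an example from my hardware):

Get-NetAdapterVmq | Format-Table Name, Enabled                       # VMQ state per physical NIC
Disable-NetAdapterVmq -Name "SLOT 3 Port 1"                          # turn VMQ off on one physical NIC
Get-VM | Get-VMNetworkAdapter | Set-VMNetworkAdapter -VmqWeight 0    # VmqWeight 0 disables VMQ on guest vNICs
Set-VMNetworkAdapter -ManagementOS -VmqWeight 0                      # same for the host vNICs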

Maybe I should enable VMQ on both the physical and virtual adapters? I am not sure; I don't want to cause more issues while everything is running.

I have provided some screenshots below of how my VMM is configured.

While the NICs are teamed on the Windows side, on the physical switches they are not set up in a LAG. Should they be? Would this make a difference?
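For anyone checking the same thing: whether a switch-side LAG is needed depends on the Windows teaming mode, which can be read off the teams directly. Switch Independent teams need no switch configuration, while LACP or Static teams require a matching port channel on the switch:

# SwitchIndependent = no switch-side LAG needed; Lacp/Static = LAG required on the switch
Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm, Members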
ClientAccess.png
ClusterComms.png
hostteam.png
logicalnetworks.png
CaptainGiblets asked:

Philip Elder (Technical Architect - HA/Compute/Storage) commented:
I'm not really sure what's trying to be accomplished here?

The network setup for a cluster is usually quite simple. In most of today's builds we have four 10GbE ports via two NICs. We team two for the host and two for the virtual switch that is not shared with the host OS.

We usually have two switches involved, so that each port in a team hits a different switch, so as not to lose connectivity.
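As a rough sketch of that layout (the team and adapter names here are placeholders, not taken from any real build):

# Team two ports for the host OS (Switch Independent, so no switch-side LAG is needed)
New-NetLbfoTeam -Name "HostTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
# Team the other two ports and bind a virtual switch that is not shared with the host
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC3","NIC4" -TeamingMode SwitchIndependent
New-VMSwitch -Name "VMSwitch" -NetAdapterName "VMTeam" -AllowManagementOS $false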

What's the point of providing more networks?

A guest cluster would have a similar requirement. And since the guest cluster is already running on top of a host cluster, there's no real point to adding more network paths, since resilience is provided by the host nodes. As long as node affinity and failback settings are in place for the guest cluster nodes, things should run just fine. KISS is our principle when it comes to our cluster setups, along with eliminating Single Points of Failure (SPOFs) at the physical host node level.
CaptainGiblets (Author) commented:
I don't have 4x 10GbE ports. My servers only have 2x 10GbE, which is used exclusively for storage traffic.

My 4x 1GbE are split between host and clients: two in a team for the host, which will always be VLAN 108, and two for the clients, which can be on various VLANs, such as 100 for production servers, 102 for cluster communication, etc.
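The VLAN assignment itself is done on the virtual adapters, something along these lines (the VM and vNIC names are examples; the VLAN IDs are from my setup):

# Pin the host management vNIC to VLAN 108
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Host Management" -Access -VlanId 108
# Put a guest's client-facing adapter on the production VLAN
Set-VMNetworkAdapterVlan -VMName "FileServer1" -Access -VlanId 100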

I think I found the root of the issue. I noticed that on my 1GbE switch the ports set up with the new VLANs were reporting millions of oversized packets. I changed the switch MTU from 1518 to 9216 and updated to the latest firmware, and the issue seems to have gone. I suspect the VLAN tag was pushing the frame size over 1518: the 802.1Q tag adds 4 bytes, so a full 1518-byte frame becomes 1522 bytes and gets counted as oversized.

I am unsure what long-term effects leaving the MTU at 9216 will have, though, although the switches are hardly taxed as it is.
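To sanity-check the end-to-end MTU I can ping with the Don't Fragment flag set (the host name below is just a placeholder):

ping fileserver1 -f -l 1472    # 1472-byte payload + 28 bytes of IP/ICMP headers = a 1500-byte packet
ping fileserver1 -f -l 8972    # 8972 + 28 = 9000, tests a jumbo-frame path end to end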
Philip Elder (Technical Architect - HA/Compute/Storage) commented:
Result?
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Sort Name

CaptainGiblets (Author) commented:
On the client it returns

Name                      DisplayName                    DisplayValue                   RegistryKeyword RegistryValue
----                      -----------                    ------------                   --------------- -------------
Ethernet 4                Jumbo Packet                   Disabled                       *JumboPacket    {1514}
Ethernet 7                Jumbo Packet                   Disabled                       *JumboPacket    {1514}

On the host it returns

NIC1                      Jumbo Packet                   1514                           *JumboPacket    {1514}  --This is 10GB
NIC2                      Jumbo Packet                   1514                           *JumboPacket    {1514}  --This is 10GB
SLOT 3 Port 1             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
SLOT 3 Port 2             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
SLOT 3 Port 3             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
SLOT 3 Port 4             Jumbo Mtu                      1500                           *JumboPacket    {1500}  --This is 1GB
vEthernet (Host Cluste... Jumbo Packet                   Disabled                       *JumboPacket    {1514}
vEthernet (Host Manage... Jumbo Packet                   Disabled                       *JumboPacket    {1514}
vEthernet (Host Storag... Jumbo Packet                   Disabled                       *JumboPacket    {1514}
Philip Elder (Technical Architect - HA/Compute/Storage) commented:
The MTU on the SLOT 3* ports is probably too small.

Get-NetAdapterAdvancedProperty -Name SLOT* -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 1514
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Sort Name


They should all be the same.

Jumbo Frames/Packets need to be set across the entire network stack, or not at all; otherwise things behave erratically.
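A rough sketch of lining everything up (the adapter names and the 9014 value are examples; the exact accepted values vary by driver, and the physical switch MTU must be at least as large):

# Physical NICs on the host
Set-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
# Host vEthernet adapters hanging off the virtual switch
Set-NetAdapterAdvancedProperty -Name "vEthernet*" -RegistryKeyword "*JumboPacket" -RegistryValue 9014
# Repeat the same setting inside each guest, then verify everywhere:
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Sort Name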
CaptainGiblets (Author) commented:
It was on the physical switches that I had to update the MTU, although I will do what you recommended as well. Thank you :)