I have a file server cluster that had been working fine for about a month. The cluster is two virtual machines on Hyper-V (Windows Server 2016) with VHD Sets attached to them on the hosts. These are imported into Failover Cluster Manager, and I had no issues with them.
Two days ago, I migrated these VMs to new host servers, which are in their own cluster configured using Microsoft System Center VMM.
On the original hosts there was a very simple configuration: four 1GbE NICs, two in a team for the host and two in a team for the clients. There were also two 10GbE adapters in a team that the host used, along with a virtual switch connected to this team that the cluster used.
When moving to the new hardware that is configured by VMM, I have tried to segregate the networks more. The hosts still have their team on the two 1GbE NICs, but the clients' 1GbE team is now split into a cluster network and a client access network.
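In case the deployment itself is suspect, this is roughly how I've been checking what VMM actually laid down on the new hosts. This assumes VMM deployed classic LBFO teams rather than a SET-based logical switch:

```powershell
# Windows (LBFO) teams: mode, load-balancing algorithm, and member NICs
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members

# Virtual switches and the team/adapter each one is bound to
Get-VMSwitch | Format-List Name, SwitchType, NetAdapterInterfaceDescription

# How the cluster sees each network (Role: cluster-only, cluster+client, or none)
Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask
```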
Since the move, clients have been complaining that connections are dropping and certain apps are crashing. If I move an app to another server that is not under the VMM configuration, it works without issue.
What could be causing this? We have VMQ disabled on all our physical network adapters, as it caused nothing but lag. I noticed that VMQ was enabled on all the virtual adapters that were created, so I disabled it there too, but that didn't affect performance.
Should I instead enable VMQ on both the physical and virtual adapters? I'm not sure, and I don't want to cause more issues while everything is running.
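For reference, this is roughly how I've been checking and toggling VMQ so far; the adapter and VM names below are placeholders for mine:

```powershell
# VMQ state on the physical NICs
Get-NetAdapterVmq | Format-Table Name, Enabled

# Disable VMQ on specific physical adapters (names are placeholders)
Disable-NetAdapterVmq -Name "10GbE-1", "10GbE-2"

# VMQ weight on the VMs' virtual NICs; 0 means disabled
Get-VMNetworkAdapter -VMName "FS-VM1", "FS-VM2" |
    Format-Table VMName, Name, VmqWeight

# Disable VMQ on a VM's virtual NICs
Set-VMNetworkAdapter -VMName "FS-VM1" -VmqWeight 0
```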
I have provided some screenshots below of how my VMM is configured.
While the NICs are teamed on the Windows side, the ports on the physical switches are not configured as a LAG. Should they be? Would that make a difference?
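From what I've read, switch-independent teams are not supposed to have a LAG on the switch side, while LACP or static teams require a matching port-channel, so presumably the answer depends on what mode VMM put the teams in. This is how I'd check:

```powershell
# SwitchIndependent -> switch ports stay standalone, no LAG needed;
# Lacp / Static -> the switch ports must be in a matching port-channel
Get-NetLbfoTeam | Format-Table Name, TeamingMode, LoadBalancingAlgorithm
```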