We have a two-node Dell R520 Server 2012 Hyper-V setup. Clustering is not set up yet.
Each server has the following NICs:
2 x on-board Broadcom ports
1 x 4-port Intel NIC
2 of the ports on the Intel NIC are used strictly for iSCSI traffic.
The remaining NICs are used in an LACP team, and VMQ has been disabled. The LACP team is connected to 2 x stacked Cisco 3750 switches. The appropriate trunk ports, speed, media, VLANs and MTU have been applied to the switch config, ensuring iSCSI traffic is restricted to its VLANs.
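In case the exact teaming/VMQ state matters, the following standard Server 2012 cmdlets show both when run on each node (team names are whatever yours are called):

    # Show the LACP team mode/members and confirm VMQ is off on every adapter
    Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm, Members
    Get-NetAdapterVmq | Where-Object Enabled -eq $true    # returns nothing if VMQ is disabled everywhere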
A virtual switch has been set up with the standard networks connected to it (Management, Migration and Cluster).
All networks are on separate subnets and VLANs.
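For context, a converged setup like this is built roughly along these lines (the switch/team names and VLAN IDs below are placeholders, not our exact values):

    # Placeholder names and VLAN IDs - substitute your own
    New-VMSwitch -Name "vSwitch-LACP" -NetAdapterName "LACP-Team" -AllowManagementOS $false
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "vSwitch-LACP"
    Add-VMNetworkAdapter -ManagementOS -Name "Migration" -SwitchName "vSwitch-LACP"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "vSwitch-LACP"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Migration" -Access -VlanId 20
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 30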
iSCSI is provided via an iSCSI LUN on a Synology RS10613xs+ cluster with 15k 900 GB drives, with MPIO enabled and two NICs on each node dedicated to the iSCSI VLANs.
The iSCSI targets have been successfully added to each Hyper-V node and MPIO is enabled. I have verified that I can see the two sessions for each target from each node. Jumbo frames are enabled on each NIC on the Hyper-V nodes, as well as on the SAN and the switches, and testing confirmed this.
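For reference, these are the sorts of checks used on each node to verify the sessions, the MPIO paths and the jumbo frames (the portal IP below is a placeholder for one of the SAN's iSCSI interfaces):

    # List iSCSI sessions (expecting two per target) and the MPIO disks/paths
    Get-IscsiSession | Select-Object TargetNodeAddress, IsConnected, NumberOfConnections
    mpclaim -s -d
    # Jumbo frame check: 9000-byte MTU minus 28 bytes of IP/ICMP headers, don't-fragment set
    ping 10.0.50.10 -f -l 8972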
Initial testing is OK; performance is good but could be better.
Our problem is that the moment we attempt to read or write anything to the iSCSI volumes (a standard file copy), all VMs grind to a halt. Pausing or stopping the file copy to or from the iSCSI volumes restores normal connectivity on the VMs.
Examining the Synology cluster, we can see it is under no stress and is happily serving the LUNs.
We have confirmed that both the file-copy traffic and the VM traffic occur only on the iSCSI networks/VLANs between the Hyper-V nodes and the SAN.
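(The per-adapter byte counters make this easy to see; the adapter names below are placeholders for the two dedicated iSCSI ports:)

    Get-NetAdapterStatistics -Name "iSCSI-1","iSCSI-2" | Format-Table Name, ReceivedBytes, SentBytes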
The VMs that are currently running are under no stress or load.
A simple copy from the iSCSI volume, and to it, averages 600 MB/s (peaking at 800 MB/s at times).
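For scale, converting those copy speeds to a wire rate (just arithmetic; no assumption here about the link speed of the iSCSI ports):

    # MB/s -> Gb/s: multiply by 8, divide by 1000
    $avgMBps = 600; $peakMBps = 800
    "{0:N1} Gb/s average, {1:N1} Gb/s peak" -f ($avgMBps * 8 / 1000), ($peakMBps * 8 / 1000)
    # => 4.8 Gb/s average, 6.4 Gb/s peak across the two iSCSI paths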
We are at a loss as to where the congestion/bottleneck is occurring.
Any help is much appreciated.