snowdog_2112 asked:
vmware vsphere 5.1 iSCSI with jumbo frames - issues
I have a small lab with a mix-n-match of physical hosts, a Netgear GS108T switch, and a Netgear ReadyNAS 2100 in a storage network (i.e., isolated from the LAN vSwitch).
All NICs, the switch, and the NAS support jumbo frames. The ReadyNAS is configured with both NICs in a LAG, as are the 2 ports on the switch.
I set all MTUs to 9000 (vNICs, vSwitch, pSwitch, NAS).
I am getting horrendous performance and issues even seeing the LUNs on the NAS.
Is there something else I need to configure?
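One quick sanity check before anything else is confirming the MTU actually took effect at every hop. A minimal sketch for an ESXi 5.x host, assuming a storage vSwitch named vSwitch1, a VMkernel port vmk1, and a NAS at 192.168.10.50 (all hypothetical names for illustration):

```shell
# Set (or confirm) MTU 9000 on the standard vSwitch and the iSCSI vmknic
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Test an end-to-end jumbo path with the don't-fragment bit set.
# Payload is 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header);
# if any device in the path is still at 1500, this ping fails.
vmkping -d -s 8972 192.168.10.50
```

If the vmkping fails but a plain `vmkping 192.168.10.50` works, some device in the path (often the physical switch) is silently dropping jumbo frames.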
Also check this out. Did you do all these steps that hanccocka points out?
https://www.experts-exchange.com/Software/VMWare/A_9250-HOW-TO-Add-an-iSCSI-Software-Adaptor-and-Create-an-iSCSI-Multipath-Network-in-VMware-vSphere-Hypervisor-ESXi-5-0.html
Unless you have a lot of synchronous data, jumbo frames will not improve performance. Configure for a 1500 MTU and give it a try. Your lab will be a bit slow due to the hard drives in the NAS; your IOPS will be the limiting factor well before your networking is.
Jumbo frames do not work for everyone.
Are you sure all devices are set for jumbo frames?
Otherwise, revert back.
Some of these switches don't do jumbo frames well.
I checked; this one does seem to support it.
ASKER
"support" and "play well" seem to be 2 different results. The issue may actually be the LAG on the NAS to the switch.
VMware community suggests ditching the LAG and use MPIO from the VM hosts.
Multipathing (e.g., MPIO) is always recommended for VMware vSphere! That's not to say a LAG should not work, but we never set up iSCSI access like this.
HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 4.1
HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client
And jumbo frames can make improvements for some installations, but we've also seen the opposite effect, where jumbo frames are worse (even when all the equipment has been set for jumbo frames!).
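The multipath setup in those guides boils down to binding two VMkernel ports to the software iSCSI adapter. A rough CLI sketch for ESXi 5.x, assuming the software iSCSI adapter is vmhba33, the two storage vmknics are vmk1 and vmk2, and the NAS portal is 192.168.10.50 (adapter name, vmknic names, and IP are examples, not from this thread):

```shell
# Bind both VMkernel ports to the software iSCSI adapter
# (each vmknic must have exactly one active uplink for port binding)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Point dynamic (SendTargets) discovery at the NAS portal
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50:3260

# Rescan so the new paths and LUNs appear
esxcli storage core adapter rescan --adapter=vmhba33
```

The same steps can also be done in the vSphere Client, as the articles above show; the CLI form is just easier to verify and repeat.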
ASKER
Waiting for a maintenance window to make these changes. Thanks!!!
No problems, no rush!
ASKER
Configured as MPIO by separating the NICs on the Netgear SAN, removed the LAG on the switch, and added Dynamic Discovery on the Storage Adapter page in vCenter.
I have 6 targets, each with a single LUN (I inherited this config... not sure why it wasn't 1 target with 6 LUNs).
I see a total of 14 paths, instead of either the 12 or the 24 I was expecting.
1 of the LUNs (Target 1, LUN 1) shows up automatically; the others do not.
I added T6L1 via iSCSI path #2 to Static Discovery, and even then it does not show up as a path.
Thoughts?
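For reference, static targets can also be added from the CLI, which sometimes makes it clearer whether a target was actually registered. A sketch assuming the software iSCSI adapter is vmhba33, a second NAS portal at 192.168.10.51, and a placeholder IQN (all three are illustrative, not values from this thread):

```shell
# Register a specific target on a specific portal, then rescan.
# The IQN below is a placeholder; use the real target IQN from the NAS.
esxcli iscsi adapter discovery statictarget add \
    --adapter=vmhba33 \
    --address=192.168.10.51:3260 \
    --name=iqn.example.nas:target6
esxcli storage core adapter rescan --adapter=vmhba33

# List what discovery actually knows about, to compare against expectations
esxcli iscsi adapter discovery statictarget list
```

If a statically added target still produces no path, the NAS-side access list (initiator IQN / IP restrictions per target) is a common culprit.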
6 LUNs with two iSCSI NICs would be 12 paths.
ASKER
That's the problem - I have neither the 12 paths nor the 24 paths I'm expecting - I have 14 paths.
6 LUNs × 2 iSCSI NICs on the VM host × 2 NICs on the NAS = 24 paths.
If you check the paths in Storage Adapters, see which paths are missing.
Also check your SAN to see which connections you are missing.
You may be best to drop the NAS LAG and have two different ports with two different IP addresses.
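The expected path count is just the product of LUNs, host initiator ports, and reachable target portals, so an odd number like 14 means some initiator/portal pairs are only partially logging in. A tiny arithmetic sketch (the counts come from this thread; the per-portal assumption is mine):

```shell
# Expected iSCSI paths = LUNs x host iSCSI vmknics x reachable NAS portals
luns=6          # 6 targets, 1 LUN each
host_nics=2     # iSCSI vmknics bound on the ESXi host
nas_portals=2   # NAS NICs, each with its own IP once the LAG is dropped

echo $((luns * host_nics * nas_portals))   # both NAS portals answering -> 24
echo $((luns * host_nics * 1))             # only one NAS portal answering -> 12
```

14 fits neither case, which points at some targets being reachable through only one portal or one vmknic, i.e., exactly the per-connection check suggested above.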
ASKER CERTIFIED SOLUTION
ASKER
Nobody seems to be able to explain how this happens...
Your drives are just not fast enough to justify bonded NICs. You will never use all of 1 Gig, let alone 2.
Unbinding the NICs might help with your performance.
Have you tested jumbo frames with ping? You would need a computer on this LAN that supports jumbo frames to do this.
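A concrete way to run that ping test, assuming a Linux box on the storage VLAN and a NAS at 192.168.10.50 (example address):

```shell
# From a Linux client with a 9000-MTU NIC: forbid fragmentation (-M do)
# and send a payload sized for a 9000-byte frame:
# 8972 = 9000 MTU - 20 (IP header) - 8 (ICMP header)
ping -M do -s 8972 -c 3 192.168.10.50

# From the ESXi host itself, the equivalent is:
vmkping -d -s 8972 192.168.10.50
```

If the large ping fails with "message too long" or times out while a default-size ping succeeds, jumbo frames are not actually working end to end, whatever the individual device settings claim.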
Can you define horrendous performance?