Solved

Too many iSCSI Multipaths Slow Performance?

Posted on 2014-09-02
Medium Priority
2,998 Views
Last Modified: 2016-11-23
Hey,

Design: Our SAN has 4 iSCSI NICs. Each ESX host has 4 NICs dedicated to iSCSI, connected to 1 vSwitch (screenshot attached). There is no routable network between the SAN and the ESX hosts, just one flat switch configured for jumbo frames and optimized for iSCSI traffic.

We are getting terrible latency and sometimes APD (All Paths Down) on our ESX hosts when the SAN experiences high IOPS (from an overnight SAN-to-SAN replication). When our ESX hosts see that latency, they curl up in a ball and die (APD).

I wonder whether our VMware iSCSI configuration may have something to do with it. (We are investing in more disks on the SAN to get better IOPS / lower latency.)

By using a PSP of Round Robin, we end up with 16 possible paths to the SAN for each SAN volume (4 SAN NICs x 4 vmknics = 16 paths).

My Question:-

Should I be spreading the vmknics across perhaps 4 vSwitches to better handle load balancing? Someone suggested this but I can't find any evidence to support it. Or should we reduce the default Round Robin IOPS from 1000 to a lower number, perhaps 3? (I can't find the recommended setting for a Compellent SAN, but for a Dell EqualLogic it is 3 IOPS.) Or can you spot anything else?
Capture.JPG
Question by:klwn
21 Comments
 
LVL 5

Assisted Solution

by:AbhishekJha
AbhishekJha earned 250 total points
ID: 40297926
 
LVL 124

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 250 total points
ID: 40297942
Have you created your Multipath as follows:-

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0

also enable jumbo frames

HOW TO: Enable Jumbo Frames on a VMware vSphere Hypervisor (ESXi 5.0) host server using the VMware vSphere Client

What is the SAN? Are you using their multi-pathing kit?

Do you need 4 NICs for iSCSI? Have you tried two?
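For reference, a rough sketch of the esxcli equivalents on ESXi 5.x (vSwitch1, vmk1/vmk2 and the vmhba number are example names; adjust to your environment):

# Jumbo frames on the vSwitch and on each iSCSI vmkernel port
esxcli network vswitch standard set -v vSwitch1 -m 9000
esxcli network ip interface set -i vmk1 -m 9000
esxcli network ip interface set -i vmk2 -m 9000

# Bind each iSCSI vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba37 -n vmk1
esxcli iscsi networkportal add -A vmhba37 -n vmk2

# Verify the bindings
esxcli iscsi networkportal list -A vmhba37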
 

Author Comment

by:klwn
ID: 40300347
Thank you gentlemen.

You will see that my question is not so much about how to set up multipathing and jumbo frames; it is more about optimally configuring iSCSI multipathing across 4 vmknics to 4 SAN targets.

I need someone with knowledge of the VMware PSP and adjusting the Round Robin default IOPS.

I want 4 vmknics from each host to give me throughput - 2 will not suffice.

PS. I have followed the VMware best practice guides for "Running VMware on iSCSI" and "Multipathing Configuration for Software iSCSI using Port Binding".
 
LVL 124
ID: 40300387
Have you checked with Dell whether there is a multipath module for the SAN you are using?

I know there is one for an EqualLogic.

Are you sure 2 vmknics will not suffice? Have you checked your throughput, or is this just an assumption?

Have you reduced to 2 NICs as a test, as clearly you are having issues with 4!

How many VMs, how many LUNs, how many datastores?
 

Author Comment

by:klwn
ID: 40300397
UPDATE:

Compellent Storage Center documentation:

"If the physical network is comprised of a single subnet for iSCSI, then use a single vSwitch with two iSCSI virtual ports with the traditional 1:1 mapping. Note that more iSCSI virtual ports can be used if the controller has more front-end ports available."

This would suggest that our configuration of 4 vmknics on one vSwitch is supported.

Separate vSwitches are only required if the iSCSI network comprises multiple subnets.

Now we need to optimise the RR PSP policy, as I requested help with earlier.
 
LVL 124
ID: 40300587
I did not state it was not supported; I asked, if you use two vmknics, does the performance get better or worse?

Is there no Compellent multipath module?

Are all paths Active (I/O)?
 

Author Comment

by:klwn
ID: 40300596
Here is a look at one of our devices using

esxcli storage nmp device list



naa.6000d31000123f00000000000000010f
   Device Display Name: COMPELNT iSCSI Disk (naa.6000d31000123f00000000000000010f)
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Storage Array Type Device Config: SATP VMW_SATP_DEFAULT_AA does not support device configuration.
   Path Selection Policy: VMW_PSP_RR
   Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=9: NumIOsPending=0,numBytesPending=0}
   Path Selection Policy Device Custom Config:
   Working Paths: vmhba37:C0:T3:L2, vmhba37:C2:T2:L2, vmhba37:C1:T2:L2, vmhba37:C3:T1:L2, vmhba37:C0:T2:L2, vmhba37:C2:T1:L2, vmhba37:C1:T1:L2, vmhba37:C3:T0:L2, vmhba37:C0:T1:L2, vmhba37:C2:T0:L2, vmhba37:C1:T0:L2, vmhba37:C0:T0:L2, vmhba37:C3:T3:L2, vmhba37:C2:T3:L2, vmhba37:C1:T3:L2, vmhba37:C3:T2:L2
   Is Local SAS Device: false
   Is Boot USB Device: false

I have heard it suggested to adjust the number of bytes in the policy so that a new path is chosen every time a jumbo frame is processed, instead of after a fixed number of I/Os, i.e. bytes = 8800 (9000 minus overhead).

Any thoughts?
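For reference, the change would look something like this (device ID taken from the listing above; 8800 is just the jumbo-frame figure mentioned, not a vendor-recommended value):

# Switch paths on a bytes threshold instead of an IOPS count
esxcli storage nmp psp roundrobin deviceconfig set --type=bytes --bytes=8800 --device=naa.6000d31000123f00000000000000010f

# Confirm the new policy config for the device
esxcli storage nmp device list -d naa.6000d31000123f00000000000000010f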
 
LVL 124
ID: 40300597
Which version of ESXi are you using?
 

Author Comment

by:klwn
ID: 40300606
Sorry Andrew, I just read your reply after I posted my last comment. The Compellent is an active-active storage system where all paths are available all of the time (unless a path fails). I am waiting for a reply from their support, but I do not know of any specific modules.

As the question ages and I find out more information, I am looking at how to optimally configure multipathing across 4 vmknics for a Dell Compellent.

4 NICs are required per host, believe me! 38 datastores, 123 LUNs, 143 VMs, 3 DR hosts.

ESXi 5.5 U1
 

Author Comment

by:klwn
ID: 40300627
Making interesting reading, but not Compellent-specific:-

https://connect.nimblestorage.com/thread/1107
 
LVL 124
ID: 40300634
That's not a lot of VMs, and a small installation.

I've seen more VMs and clusters using only two iSCSI vmknics with jumbo frames, with no issues and without saturating the 1 GbE uplinks.

Round Robin should be the best path selection policy if there is no policy available from Dell, like there is for the EqualLogic.

If you select Manage Paths in the vSphere Client for one of your LUNs, I assume it states Active (I/O) for all paths, and not just one path?
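The same check from the command line, if that is easier (the device ID below is the one from the listing earlier in this thread):

esxcli storage nmp path list -d naa.6000d31000123f00000000000000010f | grep -i "Group State"
# On an active-active array with Round Robin, every path should report "Group State: active"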
 

Author Comment

by:klwn
ID: 40300820
Yes, all paths active as expected.

It is not so much the number of VMs that is the question here when looking at whether 2 or 4 NICs are required; it is more the IO being generated by the VMs.

I haven't asked the question "are 2 iSCSI 1Gb NICs enough"; that's impossible to answer unless I provide IO statistics, regardless of the size of the environment.

I am looking for some advice on Compellent SAN multipathing. Has anyone had any luck getting an optimum setting by adjusting the IOPS or perhaps the bytes limit on the PSP?
 
LVL 124
ID: 40300870
To increase IOPS we change the RAID level, add disks or SSDs, but recently we've been adding caching in front of the iSCSI datastore and SAN.
 
LVL 2

Assisted Solution

by:Jim_Nim
Jim_Nim earned 500 total points
ID: 40301543
Changing the VMW_PSP_RR IOPS value to 3 would definitely be worth trying - this would help balance I/O more evenly across all the available paths.

You may also want to ensure that you have a "one-to-one" VMkernel port mapping, where each VMkernel port's failover order has all but 1 NIC set to "unused", and each VMkernel port correlates to a single physical NIC.

Have you contacted Compellent support to ask about this? Or checked their site for any documentation outlining the recommended configuration specifics?
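As a rough sketch of what that looks like from the command line (the port-group names are hypothetical, the device ID is the one from earlier in this thread, and with esxcli any uplink not listed as active or standby ends up unused):

# One active uplink per iSCSI port group (one-to-one vmknic-to-NIC mapping)
esxcli network vswitch standard portgroup policy failover set -p "iSCSI-1" -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p "iSCSI-2" -a vmnic3

# Round Robin with a lower IOPS value on one device, to test
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=3 --device=naa.6000d31000123f00000000000000010f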
 

Author Comment

by:klwn
ID: 40302803
Hi Jim,

Thanks for the comment. I'm interested in where you get the figure "3" from. There are a lot of recommendations about setting IOPS to 1 or even 0, but another figure is normally recommended by the SAN vendor - are you referring to an EMC?

We do only have 1 active adapter for each vmknic, and that adapter is different for each of the 4 vmknics on our vSwitch (vSphere 5 highlights a configuration issue if you do not correctly configure the active/unused adapters for multipathing).

I did just get a reply from Compellent:-

"Our documentation (Dell Compellent Best Practices with VMware vSphere 5.x, http://en.community.dell.com/techcenter/extras/m/white_papers/20437942.aspx) technically states that we recommend the default round robin pathing policy; this would mean the preset 1000 IOPS before swapping path. From previous customer experience, as well as the online resources I have been able to find, it is possible to see an increase in performance by adjusting the value to 1, but really only when running a single VM as compared to multiple, so I don't think this would be necessary to adjust on your system.

Regarding the number of paths (16 per volume, corresponding to the 4 server NICs and 4 storage NICs), I wouldn't see a path count of 16 as a problem unless you are concerned that with the MPIO policy we are not truly utilizing all 16 paths. In that scenario we could certainly try lowering the IOPS the MPIO policy uses before swapping paths, but we do not expect that it would give any benefit to performance."

I guess the only real way to test IOPS = 1 (or 3 as you suggest) would be to do an IO load test on the stack. I know for sure multipathing is working correctly from the values I see in esxtop (packets are evenly distributed across the vmknics).
 

Author Comment

by:klwn
ID: 40302851
ESXi hosts using iSCSI/FC/FCoE storage experience latency issues with no signs of latency on the SAN side. Great little article...

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2069356
 

Accepted Solution

by:
klwn earned 0 total points
ID: 40303057
Going to try the following to set all my Compellent volumes, and the default SATP rule, to 1 IOPS instead of 1000:

esxcli storage nmp satp set -P VMW_PSP_RR -s VMW_SATP_DEFAULT_AA

esxcli storage nmp satp rule add -s VMW_SATP_DEFAULT_AA  -V "COMPELNT" -M "Compellent Vol" -P "VMW_PSP_RR" -O "iops=1"

for i in `esxcfg-scsidevs -c |awk '{print $1}' | grep naa.6000d3`; do esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$i; done



Where "naa.6000d3" identifies  our Compellent Volumes attached to our ISCSI Initiator
 
LVL 2

Assisted Solution

by:Jim_Nim
Jim_Nim earned 500 total points
ID: 40303526
"I guess the only real way to test IOPS = 1 (or 3 as you suggest) would be to do an IO load test on the stack"

This is exactly what I intended to suggest by saying "would definitely be worth trying". My suggestion of 3 was based on experience with EqualLogic iSCSI storage... but the best thing to do is test it out first. Run some benchmarks with an I/O footprint that best represents the production environment you're expecting, and go with the configuration that gives the best performance in that testing. (You may also want to run other tests at the extremes of large sequential / small random I/O just to see how the changes affect those as well.)

I'm surprised that Compellent doesn't have an official recommendation on this, though. If they don't, it may very well have little to no impact on the performance you see with this storage.
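If it helps, here is a minimal sketch of the kind of in-guest benchmark I mean, using fio (assuming a Linux test VM with fio installed; /dev/sdb is a hypothetical dedicated scratch disk on the datastore under test):

# Small random reads - roughly the mixed VM workload you run day to day
fio --name=randread --filename=/dev/sdb --direct=1 --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=120 --time_based --group_reporting

# Large sequential reads - closer to the overnight replication traffic described above
fio --name=seqread --filename=/dev/sdb --direct=1 --rw=read --bs=1m --iodepth=8 --runtime=120 --time_based --group_reporting

Run the same tests with the RR policy at iops=1000, iops=1 (or 3) and the bytes-based setting, and compare.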
 

Author Comment

by:klwn
ID: 40303579
OK, I'll get Veeam up and running again using the new RR PSP IOPS value of 1 and monitor the latency on my vmknics. I'm guessing queue depths should certainly decrease if I'm switching paths more often. Nothing like testing with real data and scenarios.

In conclusion, Compellent recommend using the default of 1000 IOPS but also have no objection to playing with the IOPS value on the policy.

Answered really by Compellent, but spreading the points and love for your help.
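For the monitoring piece, one way is esxtop's disk views (press d for adapters or u for devices, and watch DAVG/cmd, KAVG/cmd and QUED), or a batch capture during the Veeam window for later review (interval and count below are just an example - one sample every 15 seconds for an hour):

esxtop -b -d 15 -n 240 > /tmp/esxtop-veeam.csv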
 
LVL 124
ID: 40303623
Did it solve this:-

"terrible latency and sometimes APD on our ESX Hosts when the SAN experiences high IOPS"???
 

Author Closing Comment

by:klwn
ID: 40311621
No real documented advice given on IOPS settings for a Dell Compellent. Points spread because I valued the input of the few guys who answered.
