Avatar of TKS25

asked on

Clariion CX4 iSCSI Performance


Hoping there are some people here with solid experience implementing iSCSI on EMC Clariion CX4s.

Basically, the problem is an excessive number of outbound discards on the switch ports connected to hosts in an iSCSI environment.

A brief overview of the configuration...

- 2 x 10Gb ports per SP (different subnet per port, with 2 subnets across entire SAN)
- 2 x Catalyst 3750X in a stack (dedicated to iSCSI, no VLANs configured, all ports on native VLAN 1)
- Server 2008 R2 connected using 2 x Intel ET adapters

                        SPA                    SPB
                    10Gb    10Gb           10Gb    10Gb
                      |       |              |       |
                 +---------------------------------------+
                 |      Cisco Catalyst 3750X stack       |
                 +---------------------------------------+
                                  |       |
                       Server (Microsoft iSCSI)

Port counters on the switch interfaces connected to the server are showing a high number of outbound discards, particularly when performing large sequential reads from the SAN (backups etc.). My theory is that the SAN is sending data at 10Gb speeds (though obviously limited by the disks in the back end), which is overwhelming the 1Gb ports. SNMP on the switch is not showing the 1Gb ports as over-utilized, but I suspect this could be microbursts, which are not visible at typical SNMP polling intervals.
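To illustrate why a standard polling interval can hide microbursts, here is a rough sketch (illustrative Python with made-up counter values, not tied to any real switch):

```python
# Illustrative sketch: why a long SNMP polling interval hides microbursts.
# The counter deltas below are hypothetical ifOutOctets samples on a 1 Gb/s port.

LINE_RATE_BPS = 1_000_000_000  # 1 Gb/s port

def utilization(octets_delta, interval_s, line_rate_bps=LINE_RATE_BPS):
    """Average utilization (0..1) over the polling interval."""
    return (octets_delta * 8) / (interval_s * line_rate_bps)

# One second at full line rate, then 299 seconds of light background traffic:
burst_octets = LINE_RATE_BPS // 8   # 1 s burst at line rate = 125,000,000 octets
idle_octets = 299 * 1_000_000       # ~8 Mb/s background

# A 300 s (5 min) poll averages the burst away:
five_min = utilization(burst_octets + idle_octets, 300)

# A 1 s poll taken during the burst shows the port saturated:
one_sec = utilization(burst_octets, 1)

print(f"5-min average: {five_min:.1%}")           # ~1.1%
print(f"1-s sample during burst: {one_sec:.0%}")  # 100%
```

So a port can be dropping frames while saturated for a second at a time, yet a 5-minute average still reports single-digit utilization.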

My question is: if the diagnosis above is accurate, how do you prevent the 1Gb ports from being overwhelmed by the 10Gb ports? I have 'flowcontrol receive desired' configured on the switch interfaces. Will flow control only function correctly if the ports are set to full auto-negotiation? The 1Gbps ports are set to auto, but the 10Gb ports on the SAN cannot auto-negotiate and have to be hard-set to 10Gb. Also, I'm not seeing any PAUSE frames in the 'show flowcontrol' output on the Catalyst.
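For reference, a sketch of the relevant IOS commands (interface names below are placeholders, and my understanding of the platform's flow-control behavior is hedged in the comments):

```
! Sketch only - interface names are placeholders for the host-facing ports.
! As far as I know, the Catalyst 3750-X can honor received PAUSE frames but
! cannot send them, so 'flowcontrol send' is not available on this platform;
! that would explain seeing no PAUSE frames generated by the switch itself.
interface GigabitEthernet1/0/1
 flowcontrol receive desired
!
! Verify negotiated state and PAUSE counters:
! show flowcontrol interface GigabitEthernet1/0/1
```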

Other factors to note are...

- Jumbo frames of 9000 bytes configured (the Windows host can ping the SP ports with a packet size of 8972 using ping -f)
- TcpDelAckTicks set to 0 in registry (emc150702)
- TcpAckFrequency set to 1 in registry (emc150702)
- iSCSIDisableNagle created in registry (emc150702)
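For anyone replicating the two TCP settings above, a registry sketch (the interface GUID is a placeholder for the iSCSI-facing NIC's key, and the exact paths should be confirmed against EMC KB emc150702 before applying):

```
Windows Registry Editor Version 5.00

; Sketch only - {YOUR-NIC-GUID} is a placeholder for the iSCSI NIC's
; per-interface key. Values per EMC KB emc150702: disable delayed ACKs
; by acknowledging every segment immediately.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces\{YOUR-NIC-GUID}]
"TcpAckFrequency"=dword:00000001
"TcpDelAckTicks"=dword:00000000
```

A reboot (or at least a NIC restart) is generally needed for these per-interface TCP parameters to take effect.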

I'm not expecting a definitive answer from the limited detail above, but I'm really hoping to hear from someone with deep knowledge in this area who can point me in the right direction.

Thanks in advance for your help!
Avatar of m2vikram

Regardless of the controller speed on the SAN (10Gb), your connection speed is determined by the speed of the switch port. I believe the Catalyst 3750 does not have 10Gb switch ports (not sure, though).

I recommend you identify the switch port speed first, then set the NIC speeds on the server, the SAN controllers, and the switch ports to match precisely.
E.g. if the switch port runs at 1Gb, set it to 1Gb full duplex, the NICs to 1Gb full duplex, and the SAN controllers to 1Gb full duplex. Enable jumbo frame support in the NIC drivers in Windows and check that your SAN controllers support it too (most likely they do).

Check for discards again.
Avatar of rfc1180

Avatar of TKS25


Thanks for your suggestions.

The Catalyst 3750X is a 10G-capable switch via an additional module, so I can confirm that the SAN is definitely connected to the switch at 10Gbps.

rfc1180 - What would you consider a low number of discards? I've not worked on an issue like this before, so I don't know what to expect. We're seeing discards in the thousands during a large data transfer. Would this situation be eased by a switch with extremely large port buffers? I believe the Catalyst has a 3MB buffer shared between all 24 1Gb ports in the switch (the 10Gb ports have a separate ASIC).
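To put that shared 3MB buffer in perspective, a back-of-the-envelope sketch (idealized numbers; ignores other ports' traffic and the switch's per-port buffer allocation policy):

```python
# Rough sketch: how quickly a 10 Gb/s sender can fill an egress buffer
# draining at 1 Gb/s. Idealized - assumes the whole 3 MB shared buffer
# is available to one port and no other traffic competes for it.

BUFFER_BYTES = 3 * 1024 * 1024   # ~3 MB shared buffer (per the thread)
IN_BPS = 10_000_000_000          # SAN side, 10 Gb/s
OUT_BPS = 1_000_000_000          # host side, 1 Gb/s

fill_rate_Bps = (IN_BPS - OUT_BPS) / 8          # net buffer growth, bytes/s
time_to_fill_ms = BUFFER_BYTES / fill_rate_Bps * 1000

print(f"Buffer fills in ~{time_to_fill_ms:.1f} ms")  # ~2.8 ms
```

So even with the entire shared buffer dedicated to one port, a sustained 10Gb-to-1Gb burst overruns it in a few milliseconds, after which the switch has no choice but to discard; larger buffers only delay the drops unless the sender is throttled.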

Already running PRTG, so I'll shorten the scan interval as suggested and see if it reveals more.

I'd imagine this issue is possible in any iSCSI solution, especially with 10GbE being pushed by vendors. I'd be interested to know how people in an EqualLogic environment address this, as there's no option to switch to FC! I tell you, whoever said iSCSI is simpler than FC can't have had the experiences I've had with iSCSI.

Avatar of Qlemo
This question has been classified as abandoned and is being closed as part of the Cleanup Program. See my comment at the end of the question for more details.