pwm001

asked on

AX4 Dual Ch iSCSI with ESXi server

We are getting set to migrate some servers to VMware ESXi and shared iSCSI storage on an EMC AX4 with dual storage processors.  After performing several speed tests, we determined that using raw RDMs was slightly faster and used less CPU than having each guest OS use its own iSCSI initiator.  However, I'd like advice on the server-side topology - how to present the iSCSI storage to the ESX host.

We have a completely independent iSCSI fabric using two redundant gigabit switches and the two storage processors on the EMC CLARiiON AX4, and each server has two dedicated NICs for iSCSI traffic.  Our concept was to use two subnets, one for each storage processor.  This would make the four iSCSI targets:

A0=172.31.1.150
A1=172.31.1.151
B0=172.31.2.150
B1=172.31.2.151
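
For reference, this is roughly how those four portals could be registered with the ESX 3.5 software iSCSI initiator from the service console (or the vicfg-*/Remote CLI equivalents on ESXi). This is only a sketch from memory of the 3.x-era CLI: the adapter name vmhba32 is a placeholder for whatever the software iSCSI adapter is called on your host, so verify the names and flags against your build before running anything.

  esxcfg-swiscsi -e                          # enable the software iSCSI initiator
  vmkiscsi-tool -D -a 172.31.1.150 vmhba32   # A0 - SP A, subnet 1
  vmkiscsi-tool -D -a 172.31.1.151 vmhba32   # A1 - SP A, subnet 1
  vmkiscsi-tool -D -a 172.31.2.150 vmhba32   # B0 - SP B, subnet 2
  vmkiscsi-tool -D -a 172.31.2.151 vmhba32   # B1 - SP B, subnet 2
  esxcfg-swiscsi -s                          # rescan so the AX4 LUNs show up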

In further tests, we set up two independent ESXi servers: one configured with a second virtual switch that had two physical adapters bound to it, and a second with a second and third virtual switch, each with a single NIC bound to it.  Both of these configurations work without error.  However, we have conflicting test data between Iometer, Jetstress, SQLIO, and the VMware performance indicators, so I'd like to get some real-world feedback from users with similar setups or experience.

What are the pros and cons of these two configurations:

ESXi #1
vSwitch 0 = vmnic0 & vmnic1 (for LAN traffic)
vSwitch 1 = vmnic2 & vmnic3 (for iSCSI traffic to both subnets)

ESXi #2
vSwitch 0 = vmnic0 & vmnic1 (for LAN traffic)
vSwitch 1 = vmnic2 (for iSCSI traffic to 1st subnet)
vSwitch 2 = vmnic3 (for iSCSI traffic to 2nd subnet)
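
For anyone comparing, here is a rough sketch of how the two layouts could be built with the esxcfg-* commands from the console (vicfg-* via the Remote CLI on ESXi). Only the vmnic assignments come from the descriptions above; the VMkernel addresses (172.31.1.10 / 172.31.2.10) and port group names are invented for illustration.

  ## ESXi #1 - one iSCSI vSwitch with two uplinks
  esxcfg-vswitch -a vSwitch1                  # create the iSCSI vSwitch
  esxcfg-vswitch -L vmnic2 vSwitch1           # uplink into subnet 1 switch
  esxcfg-vswitch -L vmnic3 vSwitch1           # uplink into subnet 2 switch
  esxcfg-vswitch -A iSCSI vSwitch1            # port group for the VMkernel port
  esxcfg-vmknic -a -i 172.31.1.10 -n 255.255.255.0 iSCSI

  ## ESXi #2 - one vSwitch and one VMkernel port per subnet
  esxcfg-vswitch -a vSwitch1
  esxcfg-vswitch -L vmnic2 vSwitch1
  esxcfg-vswitch -A iSCSI-A vSwitch1
  esxcfg-vmknic -a -i 172.31.1.10 -n 255.255.255.0 iSCSI-A

  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -L vmnic3 vSwitch2
  esxcfg-vswitch -A iSCSI-B vSwitch2
  esxcfg-vmknic -a -i 172.31.2.10 -n 255.255.255.0 iSCSI-B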

Any additional comments or advice welcome.  Thanks in advance.
kumarnirmal

How are you planning to use your SAN? In Active/Active mode or Active/Passive mode?

When choosing the RDM feature, please bear in mind that virtual compatibility mode lets you use VMware snapshots, which are not available in physical compatibility mode.
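
To illustrate the difference, here is a minimal sketch of how each RDM type is created with vmkfstools from the console; the device path and mapping-file names are placeholders, not your actual LUNs.

  # Virtual compatibility RDM (-r): the mapping behaves like a regular virtual
  # disk, so VMware snapshots work, but most SCSI commands are virtualized.
  vmkfstools -r /vmfs/devices/disks/vmhbaXX:0:1:0 /vmfs/volumes/datastore1/guest1/guest1_rdm.vmdk

  # Physical compatibility RDM (-z): SCSI commands pass through to the AX4 LUN
  # (useful for array-aware tools in the guest), but VMware snapshots are not available.
  vmkfstools -z /vmfs/devices/disks/vmhbaXX:0:1:0 /vmfs/volumes/datastore1/guest1/guest1_rdm_p.vmdk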

Another suggestion, per VMware best practices: segregate your Service Console, VM, and iSCSI traffic using separate NICs and VLANs.
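
As an example of that kind of segregation, a port group's VLAN tag can be set from the console as below; the VLAN IDs and port group names here are made up for illustration.

  # Tag the VM network and iSCSI port groups onto separate VLANs
  esxcfg-vswitch -p "VM Network" -v 10 vSwitch0
  esxcfg-vswitch -p "iSCSI" -v 20 vSwitch1
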
pwm001

ASKER

I'm unable to find a setting for either Active/Active or Active/Passive on the EMC AX4.  This system comes with Navisphere Express; it may take the full-blown Navisphere to control that.  I'll attempt to find out what its default is - as obviously this is moot in an Active/Passive scenario.

The traffic is going to be segregated.  The iSCSI traffic is on two completely independent switches.
pwm001

ASKER

The Clariion is Active/Active.
ASKER CERTIFIED SOLUTION
aldanch

This solution is only available to members of Experts Exchange.
pwm001

ASKER

aldanch,

Great idea separating the onboard NICs from the add-in cards.  One of our servers has 2 & 2.  The other 4 servers have 4 onboard NICs.

But really my question is about throughput, not redundancy, since we have two complete, fully redundant paths to the LUNs in either scenario.  The multi-vendor post that lightbulb recommended is pretty comprehensive, but intense.  From it, I believe they are saying that ESX (and ESXi) 3.5 can only establish a single connection to each iSCSI target, and a session must take place on a single connection - that is, not concurrently across two connections.  Therefore, the maximum throughput will be that of a single TCP connection - in my case 1 Gb/s.  Link aggregation will not increase the throughput, and load balancing is best handled through manual route targeting.
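
In case it helps anyone comparing notes, the manual route targeting mentioned above amounts to checking the paths per LUN and spreading the fixed/preferred path across the two SPs, e.g. half the LUNs preferred through SP A and half through SP B.  A rough sketch (the adapter and LUN numbers shown by the command will differ on your host):

  # List the paths ESX sees to each LUN, with the current policy and preferred path
  esxcfg-mpath -l

  # With the policy set to Fixed (via the VI Client's Manage Paths dialog), pick
  # the preferred path per LUN so roughly half the LUNs go through SP A
  # (172.31.1.x) and half through SP B (172.31.2.x).  Each individual LUN is
  # still limited to one ~1 Gb/s session at a time.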

So for Exchange and SQL (or other large, intense database-type) traffic, using the ESX initiator is probably NOT the best method.  Although our tests indicated it has higher throughput per connection, the ESX server doesn't handle MPIO the same way PowerPath would in the guest OS.  AT THIS TIME...of course!

All this seems to make theoretical sense, however...conflicting test data remains.  

What other setups are users having success with, and what are their pros and cons?  For instance, the two setups aldanch described.
pwm001

ASKER

Didn't answer my complete question, but very good information.