Avatar of RickEpnet

asked on

VMware and iSCSI SAN

I just installed a P4300 with 4 nodes. Each node has its NICs bonded with ALB. Each node has one NIC going to one switch and the other NIC going to a second switch. The two switches are identical.

Flow Control and Jumbo frames are enabled.

On the VMware side I have two vmkernel ports, each with one active NIC. The I/O path policy is set to round robin.

When I run an IOPS test, I get higher IOPS with a single NIC and the I/O path policy set to fixed than I do with two NICs and the policy set to round robin. The difference is not huge, but it is consistent.
Is that normal? If not, what could be the reason?
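For reference, on ESXi 5.x the path selection policy and the round-robin I/O switching threshold can be inspected and changed from the CLI. This is a sketch; the `naa.xxxxxxxx` device identifier below is a placeholder for your actual LUN ID:

```shell
# List devices and their current path selection policy
esxcli storage nmp device list

# Set round robin on a specific device (naa.xxxxxxxx is a placeholder)
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR

# By default round robin only switches paths every 1000 I/Os, which can
# hide the benefit of a second NIC in a benchmark; lowering the limit
# to 1 spreads load across paths more evenly (often recommended for
# the P4000/LeftHand family)
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxxxxxx --type iops --iops 1
```

The default IOPS threshold alone can explain a round-robin result that looks no better than a single fixed path, so it is worth checking before blaming the switches.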
Avatar of getzjd

What happens if you take the switches out of the mix and direct-connect? Is that a possibility to test?

What make/model of switches are you running?
Avatar of RickEpnet


The systems are HP blades, so traffic goes through the blade interconnect switch and then two HP 1810G switches.
It would be difficult to take the switches out now, but the switches are present in both scenarios.
Are you on v4 or v5?

In v4, iSCSI is only done over a single 1 Gb NIC.

In v5, iSCSI can use up to 16 x 1 Gb NICs or 4 x 10 Gb NICs.

It is also best practice to separate iSCSI onto a dedicated LAN rather than running it over a general-purpose LAN.
With the 1810 switches you're not doing any link aggregation, since they don't support stacking of the backplane. Are you saying that with a single NIC you're getting better IOPS than with round robin? If so, this could be normal: packets from the second NIC may need to traverse the trunk port between the switches rather than staying on the same backplane.
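One way to confirm whether both NICs are actually carrying I/O is to list the paths per device; with two bound vmkernel ports you would expect two active paths per LUN. A rough sketch on ESXi 5.x:

```shell
# Show every path to each device; with two bound vmkernel ports you
# should see two paths per LUN, both in an "active" state
esxcli storage nmp path list

# esxtop (press "n" for the network view) shows live per-vmnic traffic,
# which confirms whether both NICs are passing iSCSI load
esxtop
```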
VMware ESXi 5.1
I will read that in the next few days thanks!!

What you say makes sense. As far as the switches go, they work independently of each other; the NICs are bonded at the storage units. Each storage unit has one NIC going to one switch and another NIC going to the other switch. I have to say I am not really sure how that works out with the blade system, because both NICs pass through the same interconnect switch. I think it might have been better to add a second mezzanine card to each blade and another interconnect switch, but that would have cost too much. Those interconnect switches cost a lot of $$$.
Avatar of Paul Solovyovsky

In v5 you don't need to use round robin. Just have all the NICs bound to the software iSCSI adapter's vmkernel ports and it will use them all; that is how v5 handles iSCSI.
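For what it's worth, on the ESXi 5.x software iSCSI adapter the vmkernel ports are bound from the CLI roughly as below. Each vmkernel port must have exactly one active uplink to be eligible for binding; the `vmhba33`, `vmk1`, and `vmk2` names are placeholders for your actual adapter and port names:

```shell
# Bind both iSCSI vmkernel ports to the software iSCSI adapter
# (vmhba33, vmk1, vmk2 are placeholders for your environment)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Verify the binding
esxcli iscsi networkportal list --adapter vmhba33
```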
My understanding of v5 is that you have to have them separated out. Can you show me a reference, maybe on the web, for what you are saying?
My setup is the same as in your articles.