HornAlum

asked on

iSCSI Switch & VLAN configuration

I am currently using a Dell MD3200i iSCSI SAN and 2 host server nodes running 2008 R2. As of right now, the servers are direct-attached to the SAN (there's no switch in between).

I wanted to test performance by introducing a switch, to see if things like MPIO boost throughput. I have about 8 ports to spare on my Cisco 3560X so I wanted to use that.

This is the current IP setup

SAN:
Controller 0, Slot 1 - 172.16.1.101/24
Controller 0, Slot 2 - 172.16.2.101/24
Controller 0, Slot 3 - 172.16.3.101/24
Controller 0, Slot 4 - 172.16.4.101/24
Controller 1, Slot 1 - 172.16.1.102/24
Controller 1, Slot 2 - 172.16.2.102/24
Controller 1, Slot 3 - 172.16.3.102/24
Controller 1, Slot 4 - 172.16.4.102/24

Servers:
Node 1, iSCSI NIC 1 - 172.16.1.11/24 (connects to port 0/1)
Node 1, iSCSI NIC 2 - 172.16.2.12/24 (connects to port 1/2)
Node 2, iSCSI NIC 1 - 172.16.2.11/24 (connects to port 1/1)
Node 2, iSCSI NIC 2 - 172.16.1.12/24 (connects to port 0/2)

There is no gateway defined as they are direct connected at the moment.

All the documentation I've read shows how to cable the switch, the SAN and the host nodes, but there has been no mention of how to configure the switch. I'm a little confused as to how we're to isolate iSCSI traffic from production, yet allow the multiple iSCSI subnets to communicate with each other.

Does there need to be one big VLAN, or 4 separate little VLANs?

Should there be more than just the 2 paths in the iSCSI initiator that we currently have on each server once everything goes to a switch?

I'm just a little confused as to how the networking will work with a switch instead of direct connecting. I do know I need to enable jumbo frames/set the MTU on the switch, as well as enable flow control and PortFast on the iSCSI ports. I'm just hazy on the proper network settings and getting the iSCSI initiator properly set up once using a switch.
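
For reference, here's roughly what I had in mind for the 3560X on the iSCSI-facing ports (just a sketch based on what I've read; the interface range is a placeholder, and the exact jumbo-frame command and reload requirement depend on the IOS version):

! global jumbo frame support on the 3560 series (typically requires a reload)
system mtu jumbo 9000
!
! per-port settings on each iSCSI-facing interface
interface range GigabitEthernet0/1 - 8
 flowcontrol receive desired
 spanning-tree portfast
 no shutdown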
amwinsit

If you only have one switch, I wouldn't recommend going to a switched configuration. Have you done any performance monitoring to see if the performance issues are related to throughput?

I am not a Dell storage admin, so forgive me if this is obvious, but why do you need to use a switch to get multipathing? Are you using the native multipathing in 2008 R2 or an app provided by Dell?

If you do want to, you would want all of the iSCSI ports that will communicate with each other on the same subnet/VLAN. You don't want the iSCSI traffic to have to be routed between VLANs. Once you are set up on the switch you will have a path from each server-side port to each port on the MD3200i, so most likely 4 paths.
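
On the switch side that just means one access VLAN for everything iSCSI, something like this (a sketch only; the VLAN ID and port range are placeholders for wherever your SAN and host ports land):

vlan 100
 name iSCSI
!
interface range GigabitEthernet0/1 - 12
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast

No gateway or inter-VLAN routing is needed, since all of the iSCSI traffic stays inside that one VLAN/subnet.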
HornAlum

ASKER

I'm not sure. Ownership of a LUN is assigned to only one RAID controller at a time. Each server node has 1 connection to each RAID controller... so really, there is only 1 path to the data LUN on each server.

I've only seen writes at about 100-120 MB/sec and reads at 80-100 MB/sec, at the 128, 256, 512 and 1024k block sizes using the ATTO disk benchmark.

The SAN has 12x 15k RPM SAS drives in RAID 6.

I'm trying to squeeze better performance/throughput out of this configuration.
Okay, since you are maxing out the Gigabit connection, and based on your write performance, multipathing should increase performance.

Since you have two connections to each controller, you should be able to do multipathing between the two connections to the SAN. If you are working with Windows multipathing, you should be able to use the Round Robin with Subset path option with your current configuration, and that should provide better performance and availability than going through the single switch.
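
If you end up on the native Microsoft DSM rather than Dell's, the built-in mpclaim tool can set that policy; a rough sketch (the disk number 1 below is a placeholder - check it with the first command):

mpclaim -s -d
(lists the MPIO disks and their current load-balance policy)

mpclaim -l -d 1 3
(sets disk 1 to policy 3, Round Robin with Subset)

If Dell's DSM owns the device instead, you would set the equivalent policy through Dell's management software.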
There's 1 connection to each controller, not 2. 1 NIC goes to controller 0, 1 to controller 1. I'm trying to figure out how to get more than 1 connection to each controller, and my understanding was that introducing a switch would help.

Maybe this image would help?
directconnections.jpg
ASKER CERTIFIED SOLUTION
amwinsit

From everything I've heard, I thought multiple ports on the same server could not be on the same subnet, or the cluster validation wizard will balk at you.

I was also speaking to a Dell tech yesterday, and they informed me that the opposing subnets won't be able to ping each other, but the MPIO DSMs will handle the pathing correctly. Anyone have experience with that?
I know that it will work with multiple ports on the same subnet, but I can't speak to the validation wizard balking about it. I can tell you that if you do separate subnets, then you won't be able to multipath with your current setup; you will need to add additional connections to the storage system, because you need at least 2 connections that can access the controller that owns your data LUN. Depending on the application you are running in the cluster, you can split the data LUNs across both controllers to get better throughput by utilizing both connections, which could be a great solution for your current config.
I'm going to try setting up a few different VLANs, because Dell's multipathing doesn't seem to work unless you use different subnets for each controller port. I'll probably have 4 VLANs total, 2 on each switch. I'm probably going to buy 2 Cisco 2960s, as they have 8 ports each (well, 7 + 1 dual-purpose).
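
For what it's worth, this is the kind of layout I'm planning per switch (sketch only; VLAN IDs and port ranges are arbitrary, and switch 2 would mirror this for the 172.16.3.x and 172.16.4.x subnets):

! switch 1 - carries the 172.16.1.0/24 and 172.16.2.0/24 iSCSI subnets
vlan 101
 name iSCSI-1
vlan 102
 name iSCSI-2
!
interface range GigabitEthernet0/1 - 4
 switchport mode access
 switchport access vlan 101
 spanning-tree portfast
!
interface range GigabitEthernet0/5 - 8
 switchport mode access
 switchport access vlan 102
 spanning-tree portfast

No inter-VLAN routing, so the iSCSI traffic stays isolated from production.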

Thanks for the input!
Dell wants multiple subnets, even in a switched configuration. The direct access connection suggestion was correct.