iSCSI Switch & VLAN configuration

I'm currently using a Dell MD3200i iSCSI SAN with two host server nodes running Windows Server 2008 R2. Right now the servers are direct-attached to the SAN (there is no switch in between).

I want to test performance by introducing a switch, to see whether things like MPIO boost throughput. I have about 8 ports to spare on my Cisco 3560X, so I wanted to use that.

This is the current IP setup:

SAN:
Controller 0, Slot 1 - 172.16.1.101/24
Controller 0, Slot 2 - 172.16.2.101/24
Controller 0, Slot 3 - 172.16.3.101/24
Controller 0, Slot 4 - 172.16.4.101/24
Controller 1, Slot 1 - 172.16.1.102/24
Controller 1, Slot 2 - 172.16.2.102/24
Controller 1, Slot 3 - 172.16.3.102/24
Controller 1, Slot 4 - 172.16.4.102/24

Servers:
Node 1, iSCSI NIC 1 - 172.16.1.11/24 (connects to port 0/1)
Node 1, iSCSI NIC 2 - 172.16.2.12/24 (connects to port 1/2)
Node 2, iSCSI NIC 1 - 172.16.2.11/24 (connects to port 1/1)
Node 2, iSCSI NIC 2 - 172.16.1.12/24 (connects to port 0/2)

There is no gateway defined, as they are direct-connected at the moment.

All the documentation I've read shows how to cable the switch, the SAN and the host nodes, but none of it mentions how to configure the switch. I'm a little confused about how to isolate iSCSI traffic from production while still allowing the multiple iSCSI subnets to communicate with each other.

Does there need to be one big VLAN, or 4 separate smaller VLANs?

Should there be more than the 2 paths we currently have in the iSCSI initiator on each server once everything goes through a switch?

I'm just a little confused about how the networking will work with a switch instead of direct connections. I do know I need to enable jumbo frames/set the MTU on the switch, enable flow control, and set portfast on the iSCSI ports. I'm just hazy on the proper network settings and on getting the iSCSI initiator properly set up once a switch is in place.
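For what it's worth, this is roughly the switch-side piece I already understand. Treat it as a sketch only: the exact jumbo MTU maximum and flow-control options depend on the platform and IOS version, and the port range below is just an example, not my actual spare ports.

    ! Global jumbo MTU on the 3560X (takes effect after a reload)
    system mtu jumbo 9198
    !
    ! Per-port settings on whichever ports the SAN and hosts plug into
    interface range GigabitEthernet0/17 - 24
     flowcontrol receive desired
     spanning-tree portfast

What I still don't know is the VLAN and subnet layout to put on top of that.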
amwinsit commented:
If you only have one switch, I wouldn't recommend going to a switched configuration. Have you done any performance monitoring to see whether the performance issues are related to throughput?

I am not a Dell storage admin, so forgive me if this is obvious, but why do you need a switch to get multipathing? Are you using the native multipathing in 2008 R2 or an app provided by Dell?

If you do go switched, you would want all of the iSCSI ports that will communicate with each other on the same subnet/VLAN. You don't want the iSCSI traffic to have to be routed between VLANs. Once you are set up on the switch you will have a path from each server-side port to each port on the MD3200i, so most likely 4 paths.
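Once the sessions are up you can sanity-check the path count from the Windows side. A rough example on 2008 R2 (the disk number is just a placeholder; yours will differ):

    REM List the iSCSI sessions the initiator has established
    iscsicli SessionList

    REM List MPIO-claimed disks and how many paths each one has
    mpclaim -s -d

    REM Show the individual paths for a specific MPIO disk, e.g. disk 0
    mpclaim -s -d 0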
HornAlum (author) commented:
I'm not sure. Ownership of a LUN is assigned to only one RAID controller at a time, and each server node has one connection to each RAID controller, so really there is only one path to the data LUN from each server.

I've only seen writes at about 100-120 MB/sec and reads at 80-100 MB/sec at the 128, 256, 512 and 1024 KB block sizes using the ATTO disk benchmark.

The SAN has 12x 15K RPM SAS drives in RAID 6.

I'm trying to squeeze better performance/throughput out of this configuration.
amwinsit commented:
OK, since you are maxing out the gigabit connection, and based on your write performance, multipathing should increase performance.

Since you have two connections to each controller, you should be able to do multipathing between the two connections to the SAN. If you are working with Windows multipathing, you should be able to use the "round robin with subset" path option with your current configuration, and that should provide better performance and availability than going through the single switch.
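(For reference, a single GigE link tops out around 125 MB/sec raw, so 100-120 MB/sec of writes is essentially line rate for one NIC.) If you end up on the Microsoft DSM rather than Dell's MPIO driver, the policy can be set per disk with mpclaim. A sketch only, with the disk number as a placeholder and the standard Microsoft policy numbers (3 = Round Robin with Subset):

    REM Show the current load-balance policy and paths for MPIO disk 0
    mpclaim -s -d 0

    REM Set Round Robin with Subset (policy 3) on MPIO disk 0
    mpclaim -l -d 0 3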
HornAlum (author) commented:
There's one connection to each controller, not two: one NIC goes to controller 0, one to controller 1. I'm trying to figure out how to get more than one connection to each controller, and my understanding was that introducing a switch would help.

Maybe this image would help?
directconnections.jpg
amwinsit commented:
Sorry, I misunderstood the connectivity.

Since the LUN can only be accessed through one controller at a time, and you only have one connection to the active controller, then yes, you will need to go through a switch or add additional iSCSI ports in your servers so that you have multiple connections to the controller that owns the LUN.

For the switched config you will need to have all the iSCSI HBAs and all the controller ports on the same subnet so they can all communicate at layer 2; if the traffic has to be routed you will get even lower throughput. Once all of the interfaces are on the same subnet, you can put them into the same VLAN on the switch and you will be able to multipath.
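A rough sketch of what that looks like on the switch side, assuming a dedicated VLAN (100 is just a placeholder) and that the SAN controller ports and host iSCSI NICs land on ports 1-12; adjust IDs and ranges to your hardware:

    ! Dedicated layer-2 VLAN for all iSCSI interfaces
    vlan 100
     name iSCSI
    !
    ! SAN controller ports and host iSCSI NICs all become access ports in that VLAN
    interface range GigabitEthernet0/1 - 12
     switchport mode access
     switchport access vlan 100
     spanning-tree portfast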
HornAlum (author) commented:
From everything I've heard, I thought multiple ports on the same server could not be on the same subnet, or the cluster validation wizard will balk at you.

I was also speaking to a Dell tech yesterday, and they informed me that the opposing subnets won't be able to ping each other, but that the MPIO DSMs will handle the pathing correctly. Does anyone have experience with that?
amwinsit commented:
I know that it will work with multiple ports on the same subnet, but I can't speak to whether the validation wizard balks at it. I can tell you that if you do separate subnets, then you won't be able to multipath with your current setup; you will need to add additional connections to the storage system, because you need at least two connections that can access the controller that owns your data LUN. Depending on the application you are running in the cluster, you could split the data LUNs across both controllers to get better throughput by utilizing both connections, which could be a great solution for your current config.
HornAlum (author) commented:
I'm going to try setting up a few different VLANs, because Dell's multipathing doesn't seem to work unless you use a different subnet for each controller port. I'll probably have 4 VLANs total, 2 on each switch. I'm probably going to buy 2 Cisco 2960s, as they have 8 ports each (well, 7 + 1 dual-purpose).
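Roughly the layout I have in mind, assuming VLAN IDs 11-14 mapped to the four existing /24s (the VLAN IDs, port names and ranges are placeholders and will depend on the exact 2960 model); switch 1 would look something like:

    ! Jumbo MTU (maximum varies by model; takes effect after a reload)
    system mtu jumbo 9000
    !
    ! One VLAN per iSCSI subnet carried on this switch
    vlan 11
     name iSCSI-subnet1
    vlan 12
     name iSCSI-subnet2
    !
    ! Ports facing the 172.16.1.x interfaces
    interface range GigabitEthernet0/1 - 4
     switchport mode access
     switchport access vlan 11
     flowcontrol receive on
     spanning-tree portfast
    !
    ! Ports facing the 172.16.2.x interfaces
    interface range GigabitEthernet0/5 - 8
     switchport mode access
     switchport access vlan 12
     flowcontrol receive on
     spanning-tree portfast

Switch 2 would mirror this for the 172.16.3.x and 172.16.4.x subnets.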

Thanks for the input!
HornAlum (author) commented:
Dell wants multiple subnets, even in a switched configuration. The direct access connection suggestion was correct.