Generic Question About iSCSI and Zoning on a SAN

ciphron
Asked:
We currently have a SAN set up via Fibre Channel switches. We are coming close to saturating the two SAN switches, so we are considering implementing iSCSI for a portion of the traffic. I'm completely new to iSCSI, and other than knowing that it's capable of 10Gb/s bandwidth and that it can run over copper, I'm pretty clueless. I've read articles that say you can use any old switch for iSCSI, but how would you go about setting up zoning like you can/have to do on an FC SAN switch? I realize all this is dependent on vendor, but I'm looking for the theory/practice behind setting it up.
Commented:
>We are coming close to saturating the two SAN switches,
With respect, I doubt it, unless you're running really, really, really old Fibre Channel kit. Or you're NASA. What is it that makes you say that you're saturating the switches?

>so we are considering implementing iSCSI for a portion of the traffic. I'm completely new to iSCSI and other than knowing that its capable of 10Gb/s bandwidth and that it can run over copper, I'm pretty clueless. I've read articles that say you can use any old switch for iSCSI, but how would you go about setting up zoning like you can/have to do in a FB SAN switch? I realize all this is dependent on vendor, but I'm looking for the theory/practice behind setting it up.
iSCSI runs over IP on top of Ethernet, so in theory any Ethernet switch will do the job. There is no notion of zoning for iSCSI, as it's IP-based. Even over 10GbE, iSCSI is typically slower than 4Gb Fibre Channel because of packetisation overheads and so on (see the rough numbers sketched at the end of this comment). I very much doubt that splitting off some of the SAN traffic to iSCSI will be much help.

What exactly are the issues you've got?

What storage array do you have?

What SAN switches do you have?

What is the workload you're running?
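
As a rough back-of-the-envelope illustration of where that packetisation overhead comes from, the sketch below (Python, using textbook header sizes; the 1 MiB transfer size and the standard-vs-jumbo MTU comparison are purely illustrative assumptions) counts how many Ethernet frames a single SCSI transfer turns into:

# Rough illustration of packetisation overhead for iSCSI over Ethernet.
# Header sizes are standard values; the 1 MiB transfer and the MTU choices
# are illustrative assumptions. Ignores TCP windowing, interrupt/CPU cost
# and array behaviour.

BLOCK = 1 * 1024 * 1024      # one 1 MiB SCSI transfer
TCP_IP_HDRS = 20 + 20        # TCP + IPv4 headers per frame
ETH_WIRE = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap

for mtu in (1500, 9000):
    payload_per_frame = mtu - TCP_IP_HDRS
    frames = -(-BLOCK // payload_per_frame)            # ceiling division
    overhead_bytes = frames * (TCP_IP_HDRS + ETH_WIRE)
    print(f"MTU {mtu}: {frames} frames, "
          f"~{overhead_bytes / 1024:.0f} KiB of header/wire overhead per MiB")

Every one of those frames also costs CPU time on both ends, which is why jumbo frames and offload are usually recommended for iSCSI.
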
Most Valuable Expert 2013
Top Expert 2013
Commented:
Do you know this article?

http://www.cuttedge.com/files/iscsi_vs_fiberchannel_explain.pdf

A bit "iSCSI minded", but very informative nonetheless.

I think with "saturating the two SAN switches" you're not talking about insufficient bandwidth, but rather about the fact that you're running out of ports.

In my opinion, purchasing new switches with a higher port density, or adding two small, inexpensive switches via an ISL (rather: an ISL trunk) to serve low-demand machines, could be a better alternative to having to get acquainted with iSCSI - although it's Ethernet-based, it's still a "new" technology for you, after all.

wmp
Jim Millard, Senior Solution Engineer
Commented:
Aside from the Layer 2 and 3 differences between the two block protocols, iSCSI also differs in that there's no zoning.

Targets (the storage LUN) and initiators (the host using the LUN) share the same switch fabric (routing is frowned upon because of the latency it introduces, if nothing else), and access to a target by a given initiator is restricted by one or more policies implemented on the target: an initiator IQN filter, an IP address filter, etc. Additionally, CHAP login can typically be added as a further layer of security.
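
To make that concrete, here is a conceptual sketch (Python, not any particular vendor's interface; the IQNs, subnet and CHAP credentials are made up) of the checks an iSCSI target typically applies in place of zoning:

# Conceptual model of iSCSI target-side access control: an initiator IQN
# allow-list, a simple IP filter, and optional CHAP authentication.
# All names, addresses and secrets below are hypothetical examples.

ALLOWED_IQNS = {"iqn.1998-01.com.vmware:esx01-4f2a"}    # permitted initiators
ALLOWED_PREFIXES = ("10.10.50.",)                       # storage VLAN only
CHAP_USERS = {"esx01": "exampleCHAPsecret"}             # optional CHAP logins

def may_login(iqn, ip, chap_user=None, chap_secret=None):
    """Return True if this initiator would be allowed to reach the target."""
    if iqn not in ALLOWED_IQNS:
        return False
    if not ip.startswith(ALLOWED_PREFIXES):
        return False
    if CHAP_USERS:                      # CHAP layered on top of the filters
        return CHAP_USERS.get(chap_user) == chap_secret
    return True

print(may_login("iqn.1998-01.com.vmware:esx01-4f2a", "10.10.50.21",
                "esx01", "exampleCHAPsecret"))                        # True
print(may_login("iqn.1998-01.com.vmware:esx02-9c1b", "10.10.50.22"))  # False

The combination of an IQN filter plus a dedicated storage VLAN does roughly the job that zoning does in FC, with CHAP covering authentication.
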

Author

Commented:
meyersd - really? Wow, I had no idea that it would be so slow in comparison. It doesn't really make it worth putting in a 10G core if I can only get the same bandwidth as 4G fibre.
We have a CX3-20 and a VNX5500 on a 4G SAN switch. The processing power and the bandwidth they offer seem to be sufficient for now, but seeing as we will be adding an entirely new blade enclosure and possibly another SAN sometime in the near future, I wanted to know the options available to me. Since staying with fibre would require the purchase of additional switches, I was curious whether iSCSI would be a viable free alternative to buying more hardware. By the sounds of it, it is not - not if I want to maintain or improve throughput through my fabric. Would you agree?

Yes, woolmilkporc, I was referring to the fact that I'm running out of ports. I apologize for the confusion. iSCSI is definitely a new technology for me, and I'd like to - if at all possible - stay with the same port type throughout my storage environment. It would make management a lot easier, and it's something I'm familiar with, so that would definitely help with the implementation. Not to mention I would not have to reach out to the networking group for these additional ports; the fewer people involved, the easier it is to manage.

My initial thought was to purchase two additional switches and set up ISL trunks between them, putting storage on the new set of switches and the blade enclosures/servers on the old. Our blade enclosures are beginning to get a little old, so I'm not entirely sure how compatible they are with the newest versions of Hyper-V and ESX; if we can keep the things they are connected to the same, I think that would help.


millardjk, so are you saying that zoning can effectively be approximated via filters on initiator names and addresses? I suppose that would be possible, but I'm guessing it would just be easier and faster to allow all connections through on the network side and restrict particular LUNs to particular hosts for security - would you say that's correct?
Most Valuable Expert 2013
Top Expert 2013

Commented:
>> putting storage on the new set of switches and the blade enclosures/servers on the old<<

Please keep in mind that with this solution all traffic from/to your storage devices has to pass through the ISL trunk.

I'd rather suggest leaving the storage devices plus the most demanding servers connected to one and the same switch and using the other switch for low-demand servers (see the rough arithmetic below).

wmp
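
The arithmetic behind that concern, with made-up but plausible numbers (two 4Gb ISLs, a dozen busy server ports), looks roughly like this:

# Why an ISL trunk becomes the bottleneck when storage sits on one switch
# and the servers on the other: every byte crosses the trunk. Link and
# demand figures below are illustrative assumptions, not measurements.

ISL_LINKS = 2
MB_PER_ISL = 400        # ~payload of a 4Gb FC link after 8b/10b encoding
SERVER_PORTS = 12
MB_PER_PORT = 100       # assumed average demand per busy server port

trunk_capacity = ISL_LINKS * MB_PER_ISL
total_demand = SERVER_PORTS * MB_PER_PORT
print(f"ISL trunk capacity ~{trunk_capacity} MB/s, "
      f"aggregate server demand ~{total_demand} MB/s")
if total_demand > trunk_capacity:
    print("The trunk caps storage throughput for every server on the far switch.")

With the storage and the busiest servers on the same switch, most of that traffic never touches the ISL at all.
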

Author

Commented:
Hmm, well, that's a good point. However, if I have two 4Gb/s switches right now and I purchase two 8Gb/s switches, I could just trunk two ISL links to each and get an 8Gb/s link, couldn't I? Also, I know that our VNX is capable of 8Gb/s transfer speeds, so I would at least put it on the secondary switch - not to mention the new blade center (which I would hope is capable of supporting 8Gb/s FC).

Commented:
Yes - if you purchased new 8Gb hardware, then it would make sense to move the storage and blade centre to it. Since the cost of FC hardware is comparable to that of an Ethernet switch of like performance, it makes sense to stay on FC.

Your idea of trunking two ISLs is good. The alternative approach is to use the ISLs (trunks) to build a meshed FC SAN so that each switch is connected to all the others. This gives a greater level of availability and performance, as FC understands a switching fabric and so doesn't need spanning tree.
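
A quick sketch of what a full mesh costs in links and ports as the fabric grows (assuming one ISL, or one trunk counted as a unit, between every pair of switches):

# Full-mesh ISL count: every switch gets a direct ISL (or ISL trunk) to
# every other switch, so inter-switch traffic is always a single hop.
# Counting connections only; each link burns one port on each end.

def mesh_isls(switches):
    """Number of switch-to-switch connections in a full mesh."""
    return switches * (switches - 1) // 2

for n in (2, 3, 4):
    links = mesh_isls(n)
    print(f"{n} switches -> {links} ISL(s)/trunk(s), "
          f"{2 * links} switch ports used fabric-wide (one link per pair)")

With only two switches per fabric, the mesh and the single-trunk design are the same thing; the trade-off only shows up once a third switch is added.
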
Most Valuable Expert 2013
Top Expert 2013

Commented:
In our company we preferred building two redundant fabrics (each consisting of several trunked switches), with every server and storage device having a connection into each fabric.

This way a possible grave configuration error (who knows?) would only propagate across one fabric, so that the other one could continue working and provide connectivity for all machines.
This setup also does away with the need for any-to-any connections.

Commented:
You would connect the devices as if they were two independent fabrics, which preserves the redundancy in the fabric (the switches remember their config and continue to work if they become isolated from the fabric), so you get better resilience.

The chance of a catastrophic config error is very small these days, now that switches actually check that you want them to merge rather than just going ahead and doing it. I'd suggest that the improved redundancy outweighs the very small risk of a configuration error breaking everything.
About 6 years ago, one of my customers had their Unix admin team looking after the SAN fabric (I never understood why). They had the job of adding a Brocade switch to each fabric, but unfortunately they didn't understand FC, so they didn't know to set the FC Domain ID before plugging the new switches in (the switch with the lowest Domain ID becomes the principal switch and copies its config to every other switch in the fabric). They plugged the switches in and so copied a blank configuration over the production SAN switching fabrics.

Oops.

Brocade changed their firmware shortly after that, so you now have to confirm on the command line that you want that to happen... :-)

Author

Commented:
Thanks for the input all, I appreciate the assistance in this matter.
