IBM SAN implementation


I have been given the task to implement a simple SAN solution to our IBM BladeCenter. The setup is like this:
* One IBM BladeCenter H.
* Two Brocade SAN switches integrated into BladeCenter.
* One IBM DS3400 disk unit with double controllers.

I have some questions regarding configuration of the Brocade switches just to confirm my assumptions:
* Is switch configuration required to establish a fabric, or is the physical setup enough?
* I know it's recommended to establish one zone for each port pair. We have controller A connected to switch port 15, controller B connected to port 16, and servers on ports 1-4. The zones should then be 1-15, 2-15, 3-15, 4-15, 1-16, 2-16, 3-16 and 4-16. Is this correct?
* Can the configuration of the switches be synchronized in any way, or should we keep them apart? That is, configure them separately so they don't know of each other's existence?

Any comments appreciated.
dolomiti Commented:
I believe that "All access" masks the zoning, and I'd leave things as they were during operation.
Try making a test zone: share the volume in Storage Manager, have the server discover the disk, then delete it (better: remove the zone from the running configuration) and verify on the server that the disk is gone.

About the second question: yes, it is correct.
Some FC devices may not have this further capability to present themselves to specific WWNs: tape drives, or storage without the partition feature (IBM calls them "partitions"; you pay for the number of partitions you have on the DS3400).

Tip: if you have to share a LUN across two or more servers (cluster, VMware), you "pay" for just one partition: you create a group of servers and map the logical volume to that group.

Still about the second question: I am not a guru in FC (I am not a guru in anything; I have experience, but spread across "too many" fields: that is my market), but I believe that if you zone at switch level, performance is better: the storage is not "disturbed" by hosts that should not talk to it.

It's the same difference as between a LAN switch and a hub: if you configure your SAN switch with "All access", everyone sees everything, and a second protocol layer has to filter frames, as with a LAN hub. The difference between a LAN switch and a SAN switch is that in a LAN (ignoring VLANs), the MAC-address-to-port mapping is learned automatically/dynamically, while in a SAN it is done manually through switch (zone) configuration.

About the points: I am satisfied if my indications about the IBM material, zoning setup, and cable connections have helped you get started; it's a relational approach.

I believe you already have 46m1363.pdf, the IBM System Storage DS3400 Storage Subsystem Installation, User's, and Maintenance Guide (English).

Your theoretical configuration will be that of Figure 47 on page 51.

"Each switch forms its own SAN fabric."

By default, on FC switches nobody sees anybody (unlike LAN switches): to permit port 1 to see port 16, you have to zone them together.

I believe you have 4 servers: which type/model, and why do the zones have common ports? Which application are you deploying: virtualization, VMware?

Each server has a dual-channel FC card. Channel A of server 1 goes to port 1 of switch A, while channel B of server 1 goes to switch B, port 1.
Server 2: channels A/B to port 2, and so on.

I believe you need to connect just one FC cable per DS3400 controller, following this rule:
DS3400-ControllerA-Fibre Channel 1   <------> FCSwitch_A-Port 16
DS3400-ControllerB-Fibre Channel 1   <------> FCSwitch_B-Port 16

Name the zones on the switches with similar/equal names between switch A and switch B:

Zone01: 16,1
Zone02: 16,2

Some people prefer to create zones using WWNs rather than port numbers: it is just a style choice.
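As a rough sketch of the port-based zoning described above, on a Brocade (Fabric OS) switch the commands would look something like the following. The zone and config names (Zone01, cfg_san_a) are just examples, and the domain ID (here 1) is an assumption; check what `switchshow` reports on your switch before using "domain,port" members:

```shell
# On switch A: one zone per server/controller pair, using "domain,port" members
# (domain ID 1 assumed here; verify with switchshow)
zonecreate "Zone01", "1,1; 1,16"   # server 1 (port 1) <-> DS3400 controller (port 16)
zonecreate "Zone02", "1,2; 1,16"   # server 2 (port 2) <-> DS3400 controller (port 16)

# Group the zones into a configuration, save it, and activate it
cfgcreate "cfg_san_a", "Zone01; Zone02"
cfgsave
cfgenable "cfg_san_a"
```

Repeat the same zone names on switch B, where the port-16 member is the other DS3400 controller. `cfgshow` displays the defined and effective configurations.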

It is correct that the switches are NOT synchronized; you have to make changes like this:
1) think the modifications/additions through and record them in your documentation
2) make the change on just one switch, save the configuration, enable it
3) verify on the server side and the storage side that all is still good, that logical volumes have not moved from one controller to the other, and that there are no errors on the servers
4) do the same operations on switch 2
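On a Brocade switch, step 2 of the procedure above maps roughly to the following CLI sequence on that one switch only. This is a hedged sketch; the zone and config names are examples carried over from the earlier illustration:

```shell
# Switch 1 only: add a new zone to the existing configuration
zonecreate "Zone03", "1,3; 1,16"
cfgadd "cfg_san_a", "Zone03"
cfgsave                 # persist the defined configuration
cfgenable "cfg_san_a"   # make it the effective configuration
cfgshow                 # verify defined vs. effective zones before touching switch 2
```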

Just a suggestion about firmware/device-driver updates:
1) do nothing about the above yet; wire the FC and zone everything
2) verify everything works the way you want: this is enough!
3) do all the updates
4) verify again
5) go into production
From that moment on, it will be difficult to make any changes.


Handy Holder (Saggar maker's bottom knocker) Commented:
On zoning by port number or WWN: port number is fractionally faster, but you wouldn't notice it. WWN zoning protects you from someone plugging the wrong host into the wrong port and is preferable on big SANs; in your case it is impossible to plug into the wrong port because it is a blade environment, so I'd use port numbers unless you intend to join it to a whole load more equipment.

You could join the switches together so a change on one is seen by the other, but that would defeat the purpose of having dual redundant fabrics. It is far safer to keep them separate: a configuration failure on one cannot propagate to the other if they are not connected.

riegsa (Author) Commented:
Thanks so far, but:
Is it only the "Zone Admin" configuration I should care about, leaving the rest at defaults?
I have of course given the switches IP addresses, adjusted time/date and so on.
In the IBM documentation, I was directed first to create a "dummy fabric" so that no ports could see each other. I did this on one switch, but was uncertain whether this was the right direction to go.
Could some of you provide some hands-on, step-by-step Brocade configuration?
Handy Holder Commented:
Although dolomiti said "By default, on FC switches nobody sees anybody", this isn't true on a Brocade: by default, without zoning, everything sees everything. The dummy zone is just to stop everything seeing everything; it isn't needed normally, and certainly not in your case. It can matter when blade SAN switches are shipped as spares: a dummy zone is created before plugging the switch into the chassis, just to stop different flavours of OS from seeing each other. VMS may crash if it sees an HP-UX host on the SAN, for example.

The default settings are fine; you don't need to change anything unless there are other SAN switches outside the ones in the BladeCenter.
riegsa (Author) Commented:

I've made some progress now, so I think I have control over things, and I assume it's time to round up this question and award some points.

But I hope I can ask two more (perhaps stupid) questions:
* Brocade switch - Zone Admin - Zoning Actions - Set Default Mode - No access. To keep the configuration safe, I changed this from "All access". Comments?
* In Storage Manager you can map a logical drive to one host only, and I assume that drive will be invisible to other hosts. Why then bother with all the zone configuration on the switches?

To dolomiti and andyalder; you have both assisted me with this. Should we split the points? Suggestions?
Handy Holder Commented:
>"Set Default Mode - No access"
This has the same effect as creating a dummy zone: it makes the switch behave the way dolomiti described previously.
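On Brocade Fabric OS, the same setting can be made from the CLI with the `defzone` command, which controls whether ports outside any active zone can see each other. A sketch, to be verified against your firmware level:

```shell
defzone --show        # display the current default-zone access mode
defzone --noaccess    # deny all access when no zone configuration is enabled
cfgsave               # persist the change
```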

>"Why then bother with all the zone config on the switches"
It's a best practice. Also, although the LUNs aren't seen, management traffic is seen, and protocols differ slightly between OSs: Windows and Linux can share a zone, but were you to put VMS in the same zone, the OSs would see each other's management traffic and freak out. Some clustering needs shared zones: all HP NonStop servers in a cluster have to be in the same zone, for example, but this doesn't apply to Linux and Windows clusters.

There are a whole bunch of published zoning rules, but I wouldn't bother reading through them; just do point-to-point zoning like dolomiti said in their first post.