projects

asked on

Linux server HBA to Fibre

I have a couple of BladeCenter chassis connected to external fibre channel storage devices via a multi-port optical module on each chassis.
I have some Linux servers which have sata drives basically sitting idle and want to attach them directly to the fibre channel modules using an hba.

I know that Qlogic had a fibre channel HBA which could act as a target using an old Linux driver but am wondering if there have been any improvements since then, about 5 years ago. I thought I read somewhere that Centos 5/6 had a target driver as part of the default setup, nothing special needed.

Can anyone confirm, help me on this?
gheist

It has not changed much.
yum provides */hbacli
Support of adapters is still fairly limited. Why don't you reuse the optical wiring for 10GbE and iSCSI instead?
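For what it's worth, the in-kernel LIO target (managed with targetcli) did eventually gain a qla2xxx target-mode fabric module, but only in kernels newer than stock CentOS 5/6 shipped. A minimal sketch, assuming a CentOS 7-era box; the backing device, IQNs and WWPNs below are all placeholders:

yum install targetcli
# back a LUN with the otherwise idle SATA disk
targetcli /backstores/block create name=sata1 dev=/dev/sdb
# iSCSI export (usable over ordinary or 10GbE NICs, as suggested above)
targetcli /iscsi create iqn.2015-01.local.blades:sata1
targetcli /iscsi/iqn.2015-01.local.blades:sata1/tpg1/luns create /backstores/block/sata1
targetcli /iscsi/iqn.2015-01.local.blades:sata1/tpg1/acls create iqn.1998-01.com.vmware:esx-host1
# FC export instead, if the QLogic HBA supports LIO's qla2xxx target mode
targetcli /qla2xxx create naa.21000024ff000001
targetcli /qla2xxx/naa.21000024ff000001/luns create /backstores/block/sata1
targetcli /qla2xxx/naa.21000024ff000001/acls create naa.21000024ff000002
targetcli saveconfig

On CentOS 6 itself the stock target was tgtd (scsi-target-utils), which covers iSCSI but not a plain FC HBA acting as a target, as far as I know.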
projects

ASKER

Only because I have so much fibre hardware; that's why I was thinking along those lines.

I do have 10GbE blade modules for some of the HS21 blades. Would it be possible to use an optical pass-through module to a 10GbE switch? If so, I guess I could pop a 10GbE NIC into whichever server whose storage I'd like to use.
What about point-to-point? Could I connect one of the BladeCenter 4Gb module ports directly to a 4Gb HBA on a server, for example? I've only ever used external storage, never a standard Linux box for FC storage.
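For reference, once an HBA is cabled up, the negotiated topology and link speed can be checked from sysfs on the Linux side (host numbers will differ per system):

cat /sys/class/fc_host/host*/port_type   # negotiated topology: fabric, loop, or point-to-point
cat /sys/class/fc_host/host*/speed       # negotiated link speed, e.g. "4 Gbit"
cat /sys/class/fc_host/host*/port_name   # WWPN of the local port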
ASKER CERTIFIED SOLUTION
gheist

projects

ASKER

Yes, in fact, what I am trying to do is to get away from FC. The setup is that the blades are all VMware ESX and their storage/VMs are all on external FC storage. I am trying to find a way of migrating to something without FC.

There seems to be some confusion about what drive sizes the HS21 XM blades can handle, with some saying as much as TB drives. If I could do that, then I could migrate the VMs to local storage.

Never simple.
Basically, whichever way I do it, I need to migrate to GbE. Thanks for the input; it basically confirms this.
As a quick fix, I'm looking at maybe using a 4-port Ethernet module, perhaps bonding/teaming the ports to get a faster aggregate speed. On the Linux boxes, I have lots of NICs, so I could bond 4 ports there as well.
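A rough sketch of what that bonding could look like on a CentOS/RHEL box; the interface names, address and bonding mode are placeholders (mode=802.3ad needs LACP configured on the switch, while balance-alb works without switch support):

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

/etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1-eth3):
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

Then restart networking and check /proc/net/bonding/bond0 to verify the slaves came up. Keep in mind that a single TCP stream (e.g. one iSCSI session) still tops out at one link's speed; the aggregate only helps across multiple streams, so for storage traffic multipathing over separate links is often the better fit.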