Linux server HBA to Fibre Channel

I have a couple of BladeCenter chassis connected to external Fibre Channel storage devices via a multi-port optical module on each chassis.
I have some Linux servers with SATA drives sitting basically idle, and I want to attach them directly to the Fibre Channel modules using an HBA.

I know that QLogic had a Fibre Channel HBA which could act as a target using an old Linux driver, but I'm wondering if there have been any improvements since then, about five years ago. I thought I read somewhere that CentOS 5/6 had a target driver as part of the default setup, with nothing special needed.
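For concreteness, if a target driver were available I'd expect the setup to look roughly like the newer in-kernel LIO target, which has a qla2xxx fabric module (tcm_qla2xxx) for putting supported QLogic HBAs into target mode. This is a sketch only; the WWPNs and backing device are placeholders, and the HBA/kernel must actually support target mode:

# export an idle local disk over FC via LIO/targetcli (requires tcm_qla2xxx)
targetcli /backstores/block create name=vol0 dev=/dev/sdb
# enable target mode on the HBA port (substitute the port's real WWPN)
targetcli /qla2xxx create naa.21000024ff123456
# map the backstore as a LUN and allow one initiator WWPN
targetcli /qla2xxx/naa.21000024ff123456/luns create /backstores/block/vol0
targetcli /qla2xxx/naa.21000024ff123456/acls create naa.21000024ff654321
targetcli saveconfig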

Can anyone confirm or help me with this?

It has not changed much.
yum provides */hbacli
Adapter support is still fairly limited. Why don't you reuse the optical wiring for 10GbE and iSCSI?
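To see what the kernel actually recognizes before buying anything, a quick check along these lines (standard fc_host sysfs layout; host numbers vary):

# list FC-capable PCI devices
lspci | grep -i 'fibre channel'
# WWPN and link state of each registered FC host
cat /sys/class/fc_host/host*/port_name
cat /sys/class/fc_host/host*/port_state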
projectsAuthor Commented:
Only because I have so much fibre hardware; I was thinking along those lines.

I do have 10GbE blade modules for some of the HS21 blades. Would it be possible to use an optical pass-through module to a 10GbE switch? If so, then I guess I could pop a 10GbE NIC into whichever server I'd like to use for storage.
projectsAuthor Commented:
What about point-to-point? Could I connect one of the BladeCenter 4Gb module ports to a 4Gb HBA on a server, for example? I've only ever used external storage arrays for FC storage, never a standard Linux box.
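If I cabled that up directly, I assume the negotiated topology would show up in sysfs, something like (host number will vary):

# reports e.g. "Point-To-Point (direct nport connection)" on a direct link
cat /sys/class/fc_host/host0/port_type
cat /sys/class/fc_host/host0/speed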

FC was initially intended to be used as a general-purpose network too, but most adapters even now don't support that mode. In the meantime, 10GbE adapters support FCoE (not FC), iSCSI and NFS offload (you end up with a SCSI disk backed by some network storage either way).
Maybe go that route, slowly turn all clients to NFS or iSCSI, and forget FC?
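On the client side, attaching a Linux box to an iSCSI target is only a couple of commands with the stock open-iscsi tools; the portal IP and IQN here are placeholders:

yum install iscsi-initiator-utils
# discover targets advertised by the portal
iscsiadm -m discovery -t sendtargets -p 192.168.0.50
# log in; the LUN then appears as a regular /dev/sd* disk
iscsiadm -m node -T iqn.2013-01.com.example:store0 -p 192.168.0.50 --login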

projectsAuthor Commented:
Yes, in fact, what I am trying to do is get away from FC. The setup is that the blades all run VMware ESX, and their storage/VMs are all on external FC storage. I am trying to find a way of migrating to something without FC.

There seems to be some confusion about what drive sizes the HS21 XM blades can handle, with some saying TB-sized drives. If I could do that, then I could migrate the VMs to local storage.

Never simple.
projectsAuthor Commented:
Basically, whichever way I do it, I need to migrate to GbE. Thanks for the input; it basically confirms this.
projectsAuthor Commented:
As a quick fix, I'm looking at maybe using a 4-port Ethernet module, perhaps bonding/teaming the ports to get a faster aggregate speed. On the Linux boxes, I have lots of NICs, so I could bond 4 ports there as well.
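For reference, a CentOS 6-style sketch of that bonding setup (interface names and addresses are examples; 802.3ad mode also needs LACP enabled on the switch ports):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.5
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1-eth3)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

One caveat: LACP hashes per flow, so a single stream still tops out at one link's speed; the aggregate mainly helps with multiple clients.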