iSCSI SAN hardware recommendations

Hello All,
My company is currently looking into centralized storage for our Exchange, FTP, and production data.  I've done some research over the past couple of weeks and figured, between the budget and performance needs I'm faced with, that an iSCSI SAN solution would work best.  I'm looking for hardware recommendations for gigabit switches, iSCSI SAN devices, HBA/TOE adapters, software or hardware initiators, and anything else that I might be missing.

I'd rather stay away from installing software on a Windows-based platform that turns it into an iSCSI target.  Thanks for any input on this subject.
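(For context on the software-initiator side: on Linux, connecting to an iSCSI target with the open-iscsi tools looks roughly like this. The portal IP and IQN below are placeholders, not real devices.)

```shell
# Discover targets advertised by the SAN portal (IP is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.10.50

# Log in to a discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.2007-11.com.example:storage.lun1 -p 192.168.10.50 --login

# List active sessions; the new LUN then shows up as a normal block device
iscsiadm -m session
```

A hardware initiator (HBA/TOE card) does the same job in firmware, so the OS just sees a SCSI disk.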
alfalfa6945 commented:
Curious, but do you have servers or devices external to the network that need block level access to the internal drive array? If not, why not just go with a fibre channel SAN?

If you need iSCSI:
I personally prefer Intel Pro/1000 T IP iSCSI adapters over the Adaptec ASA 7211C adapters (probably cheaper to source as well).  For the drive array, I like the Compaq RA4100 (again, they can be sourced cheaply, and usually come with the RAID controller and GBIC).  For the switch, I like the Brocade 2800 (or re-branded variants of it, i.e. the Compaq SAN Switch 16, EMC DS-16B, etc.).  Use a Cisco 5420 to translate the iSCSI to fibre, and perhaps a Dell 5012 to plug the iSCSI adapters into (10 copper and 2 fibre connectors).  SC GBICs in all devices (the Finisar 8519P-5A works well).
As follows: Adapter-to-Dell 5012-to-Cisco 5420-to-Brocade 2800-to-RA4100.

If you don't need block level drive access from remote locations:
Here I like the Emulex LP8000 gigabit fibre adapters (they can be sourced cheaply; Compaq re-brands this adapter as well, but stick with Emulex, since you can replace the GBICs in those).  Again, the Brocade 2800 would be the switch of choice, and the RA4100 the drive array of choice.  SC GBICs in all devices should be all you need (unless you have some serious length between devices!), and the Finisar 8519P-5A brand works fine.
As follows: Adapter-to-Brocade 2800-to-RA4100.

Also, you can always add more RA4100s to the switch if you need more space or want to dedicate SAN devices to specific uses, etc.  Honestly, there are many ways to build your setup; this one has worked for me and is also the cheapest route I have found.  More money, more speed/features.
taltomare (Author) commented:
Fibre Channel SAN is out of the question given the intended budget, available resources, and our current environment.  We will have to stick to a copper solution.  I know the performance variance between the two is a factor in most situations, but iSCSI is the path that my company has decided on at this time.  Thanks for the recommendations.
If you have a limited budget, iSCSI, believe it or not, is going to cost you a lot more in hardware.  The drive array is the biggest single cost, then the gigabit switch so you can plug multiple iSCSI cards into the array, then the cards themselves.  Since we are talking about a 1-gigabit bandwidth limit, 1-gigabit fibre is an option.  The _only_ reason to use iSCSI instead of Fibre would be to allow external devices to access the array (say, over the internet) for block-level access (for example, you have a cluster and one machine is remote to the internal network).
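(As a rough sanity check on that 1-gigabit ceiling; the ~10% protocol-overhead figure is an assumption, not a measurement:)

```shell
# Theoretical line rate of a 1 Gbit/s link in MB/s
raw=$((1000000000 / 8 / 1000000))   # 125 MB/s
# Assume roughly 10% lost to TCP/IP + iSCSI framing overhead
usable=$((raw * 90 / 100))          # 112 MB/s
echo "raw: ${raw} MB/s, usable: ~${usable} MB/s"
```

So whichever transport you pick, a single 1-gigabit link tops out at roughly the same real-world throughput.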

If you price the equipment (you could even eBay all the items) 1 gigabit Fibre Channel will always come out cheaper and the speed will be the same or better. Don't get me wrong, I love iSCSI, but it is usually used for a specific purpose (like the one I described) and not for an internal SAN solution (because Fibre is cheaper).

If you are committed to iSCSI, then the option I gave in my first reply is the cheapest route you will find that actually works (more or less because you don't have to purchase an expensive iSCSI-only array).

Disclaimer:  I'm a big believer in FC SANs, but that wasn't your question :)

You stated that: "I know the performance variance between the two is a factor in most situations, but iSCSI is the path that my company has decided on at this time."  This is not going to be a high-performance solution, not even close...

$6,000 for a basic iSCSI SAN
$5,000 for the HDS SMS100 (6 drives, smallest config, single-controller)
$1,000 for NICs, switches, cabling, ...

If you go for iSCSI NICs with hardware protocol acceleration, the cards will cost much more, but throughput goes up and CPU utilization on the client side won't spike when doing fast transfers.
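(If you stay with plain NICs and a software initiator, you can at least check which offloads the hardware supports; on Linux that's a quick ethtool query. The interface name is a placeholder.)

```shell
# Show which protocol offloads the NIC supports/has enabled (eth0 is a placeholder)
ethtool -k eth0

# Enable TCP segmentation offload, if the hardware supports it,
# to take some of the per-packet load off the CPU
ethtool -K eth0 tso on
```

It's not a full TOE, but checksum and segmentation offload on a decent NIC already takes a noticeable bite out of the CPU cost of fast transfers.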

Dual controllers would give you redundancy on the storage side (and extra expense), but only if you use an iSCSI driver that supports multipathing, or add-on multipathing software (extra expense).
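(On Linux the multipathing piece would typically be dm-multipath; a minimal /etc/multipath.conf sketch might look like the fragment below. The values are illustrative defaults, not tuned for any particular array.)

```
# /etc/multipath.conf (illustrative fragment)
defaults {
    user_friendly_names  yes
    path_grouping_policy multibus    # spread I/O across both controller paths
    failback             immediate   # return to the preferred path when it recovers
}
```

Without something like this (or the vendor's own multipath driver), the second controller just shows up as a duplicate disk rather than a redundant path.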

James (Senior Cloud Infrastructure Engineer) commented:
The information presented here about iSCSI SANs is not correct. If you were to go with an FC SAN, then depending on the number of servers you have, you would require an HBA card for each server and a separate switch, whereas an iSCSI SAN ties into your existing IP LAN. This is where the cost savings begin. Also, iSCSI can perform at the same speed as an FC SAN, depending on your current hardware infrastructure.
Bearing in mind that the question was asked on 23/11/07, what would you have suggested, JBond2010?
Question has a verified solution.