Suggestions for Tier 1 Storage

Hi guys,

I currently have a "decent" (notice the quotes) infrastructure for my NAS at work, but have been tasked with researching a Tier 1, VERY ROBUST solution for a new SAN.

We started off with an IBM box with a 3ware RAID controller serving iSCSI LUNs.  It worked OKAY for a little while, but it was never meant to grow into a FILESTORE (it was originally commissioned as an intermediate backup solution).

I am not against FIBRE, but I currently have iSCSI working okay, and if fibre is not required, I would stay with iSCSI to avoid its cost.

Also, I am a big fan of SUN and love ZFS, so if you guys know of a good solution (that is VERY robust) using SUN, I'd love to hear it.
wii_injury asked:

giltjr commented:
No solution, just a comment on iSCSI vs. Fibre Channel.

The biggest issue with iSCSI is that unless you have 10 Gigabit Ethernet, iSCSI is going to limit your performance to 1 Gbps.  Even if you use NIC teaming, you still have performance issues.

Whereas with Fibre Channel you can run at 1, 2, 4, 8, 10, or even 20 Gbps.
Paul Solovyovsky (Senior IT Advisor) commented:
If you like SUN's ZFS you should take a look at NetApp; Sun basically took NetApp's WAFL file system and reverse-engineered it (NetApp is suing them for it).  NetApp will act as a file server on the domain and will do dedupe.  The chassis supports dual controllers, and IBM resells it as the N-Series SAN.  Take a look at the FAS2040.
slemmesmi commented:
Dear wii_injury,

Please let me add to paulsolov's comment that NetApp FAS of course also supports both iSCSI and FCP connectivity to LUNs, as well as "NAS" functionality for CIFS and NFS.
N.B. I do not agree with giltjr about FC being so much faster than iSCSI - it all comes down to your network infrastructure.
We run a clustered mail infrastructure for 6500+ users with all mail servers connected via iSCSI (a trunk of two 1 Gbps NICs on the mail server side, two 1 Gbps NICs on the NetApp side, and EtherChannel on the Cisco switches) without any performance problems whatsoever.
I am certain that if you contact a local NetApp integrator, they'll happily provide a POC (proof of concept) with iSCSI for you.
Furthermore - not to forget - with iSCSI you can utilize existing IP infrastructure and knowledge.
Last but not least - two killer points of NetApp are that it is VERY ROBUST, and the entire technology is based on KEEP IT SIMPLE.

Kind regards,
Soren

giltjr commented:
OK, let me expand a bit.  With iSCSI you could have more performance problems than you would in an FC environment.

With 1 Gbps Ethernet, your maximum is 125 MB/s.  Even with dual NICs, because of the way NIC teaming works, you are still limited to 125 MB/s.

If you were to go with only a 2 Gbps FC environment, because of the way FC works your maximum throughput is 200 MB/s.  If you go with 4 Gbps FC, you have 400 MB/s, and if you were to go to 8 Gbps FC, you have 800 MB/s.
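To make that arithmetic concrete, here is a minimal Python sketch of the conversion being used above (my own illustration; it uses the simple divide-by-eight rule and ignores encoding and protocol overhead):

# Illustrative arithmetic only: convert nominal link rates (Gbps) to the
# rough MB/s figures quoted above (line rate / 8, overhead ignored).

LINK_RATES_GBPS = {
    "1 GbE (single NIC / one teamed link)": 1,
    "2 Gbps FC": 2,
    "4 Gbps FC": 4,
    "8 Gbps FC": 8,
    "10 GbE": 10,
}

for name, gbps in LINK_RATES_GBPS.items():
    mb_per_s = gbps * 1000 / 8  # 1 Gbps is roughly 125 MB/s
    print(f"{name}: ~{mb_per_s:.0f} MB/s")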

The main attraction of iSCSI is that most sites just plop it down on their existing Ethernet network.  They may create a VLAN to try to isolate the "disk" traffic from the "normal" network traffic.  The problem is that if you have your disk traffic on the same physical paths as your normal traffic, you have drastically increased your network load.

In order to reduce the load on your "normal" LAN you would need to build a physically separate LAN.

Now, will every environment have a problem?  No, it depends on your environment.  However, simple math shows that 2, 4, and 8 Gbps are all faster than 1 Gbps.  If you have a high-volume I/O environment, 2 Gbps and higher FC will outperform 1 Gbps Ethernet every time.

We have some iSCSI SANs; however, they are in low-use environments.  In fact, in one case the server is directly connected to the iSCSI SAN and is used for e-mail archiving only.

Soren, just a note on your setup.  Just to make sure you know: Cisco's EtherChannel setup (actually all technology like that) uses a single interface to pass all traffic for a given TCP connection.  This is so that packets will never be received out of order.  So even though you have dual NICs all the way through, the maximum throughput for a single target is 125 MB/s.
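To illustrate that last point, here is a rough Python sketch (my own simplification; the real Cisco hash algorithms are based on MAC, IP, or port fields and differ in detail) showing why one TCP connection stays on one member link while many different flows can spread across the bundle:

# Simplified flow-based load balancing over a two-link bundle.
# Key property shared with EtherChannel: one flow -> one physical link,
# so a single iSCSI/TCP session never exceeds one link's ~125 MB/s.

NUM_LINKS = 2  # e.g. two 1 Gbps NICs

def pick_link(src_ip, dst_ip, src_port, dst_port):
    # Every packet of the same flow hashes to the same link index.
    return hash((src_ip, dst_ip, src_port, dst_port)) % NUM_LINKS

# One iSCSI session always lands on the same link:
print(pick_link("10.0.0.5", "10.0.0.9", 51234, 3260))
print(pick_link("10.0.0.5", "10.0.0.9", 51234, 3260))  # identical result

# Eight distinct sessions can spread across both links:
sessions = [("10.0.0.5", "10.0.0.9", 51234 + i, 3260) for i in range(8)]
print([pick_link(*s) for s in sessions])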

Although the title of this article deals with iSCSI and VMware, you can see how iSCSI (on a 1 Gbps network), even with multiple NICs, is still much more limited in terms of total throughput than FC.

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html
Paul Solovyovsky (Senior IT Advisor) commented:
@giltjr

I disagree.

1.  With EtherChannel we can aggregate up to eight 1 Gbps NICs.
2.  10 Gb NICs are way cheaper than fibre.
3.  NetApp did a study with VMware, and the difference between iSCSI, FC, and NFS for storage was under 10%.

For the cost of fibre cards, switches, etc., I would get more spindles, but it all depends on the environment.  Unless you're doing a lot of transaction-intensive processing, you're not going to see a really large benefit of FC over iSCSI.  I've done both on NetApp, HP, etc., and in most cases you don't go over 1 Gbps.
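For what it's worth, both points can be true at once: an eight-member EtherChannel carries roughly eight links' worth of aggregate traffic across many sessions, while any single session still tops out at one member's rate.  A quick back-of-the-envelope sketch (my own illustrative numbers):

# Back-of-the-envelope comparison, illustrative numbers only.

member_gbps = 1                     # 1 Gbps NICs
members = 8                         # eight-member EtherChannel
per_link_mb_s = member_gbps * 1000 / 8

aggregate_mb_s = members * per_link_mb_s  # many flows, ideal hash spread
single_flow_mb_s = per_link_mb_s          # one iSCSI/TCP session, one link

print(f"Aggregate across many sessions: ~{aggregate_mb_s:.0f} MB/s")
print(f"Cap for any single session:     ~{single_flow_mb_s:.0f} MB/s")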
giltjr commented:
I must have missed paulsolov's comments.

1) This gives you more bandwidth in a way, but not really.  With EtherChannel, all traffic between the same MAC addresses must traverse a single physical link.  So depending on where the EtherChannel is set up, you still may be limited to 125 MB/s of I/O throughput.

2) I would have to look at the total cost of all networking-related equipment.  I'm not sure of the total cost difference between 10 GbE and, say, 4 or 8 Gb FC.  If you already have 10 GbE, then it should be a no-brainer.

3) For some environments you will not see a big benefit of FC over iSCSI; for others you will.  As you say, it depends on your transaction rates and what your SAN can do.


In our environment we do all of the heavy I/O on IBM mainframes.  Although currently we only have 4 Gb FICON, we have six of them.  Because of the way I/O works on the mainframe and in z/OS, any I/O request to any disk volume can go over any of the six paths.  In fact, the response does not need to come back on the same physical path it went out on.
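The contrast with the EtherChannel behavior above is that mainframe channel I/O is balanced per request rather than per flow.  A minimal Python sketch of that idea (my own simplification, not z/OS internals):

# Simplified contrast, not z/OS internals: per-request path selection
# lets I/O to even a single volume use every available channel path.

from itertools import cycle

PATHS = ["FICON-path-%d" % i for i in range(6)]  # six 4 Gb FICON channels
_next_path = cycle(PATHS)

def dispatch_io(volume, request_id):
    # Any request to any volume may go out on any path.
    return next(_next_path)

# Ten requests to the same volume spread across all six paths:
for req in range(10):
    print(req, dispatch_io("VOL001", req))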

The other issue you have with iSCSI is not a technical issue, but a people issue.  In some places the networking group does not do anything with the FC setup; the "DASD/SAN" people handle that.  When you go iSCSI (or even FCoE), all of a sudden you move some of the responsibility from one group to another.  This can cause problems.