beaconlightboy asked:
iSCSI or Fiber Channel?

OK, we are going to be virtualizing our entire environment here over the next year, and we obviously need a SAN. I engaged Dell (EqualLogic) and IBM (DS4700) to provide me with solutions. I have now worked the vendors over for about three months and have them practically giving the stuff away.

Both solutions meet the following needs:
- 16TB of raw space
- Primary box at the main datacenter, all 15K 450GB drives
- Second box across campus, all SATA drives
- Both solutions provide snapshots/volume copy and replication.

The IBM solution is Fiber Channel and is obviously going to be faster on the network side. It has dual controllers with four 4Gb ports each, so that's a total of 32Gb/s of controller throughput. Each server will have dual 4Gb HBAs. It also replicates to the remote box via direct fiber connections.

Now... the IBM solution doesn't have a nice interface. It sucks. It also doesn't provide any reporting or trending tools with the unit, and it doesn't auto-tune itself (i.e., add spindles as needed). It requires knowledge of Fiber Channel SAN networking that I don't have any experience with. It does, however, allow me to add drives individually, in either SATA or FC flavor, and the expansion enclosures are affordable, as opposed to buying a full EqualLogic box.

On the other side..
The EqualLogic solution gives us two boxes versus the IBM's one, and since their controllers are failover-only (active/passive), you get 4Gb/s per box, so that's 8Gb/s of total controller throughput. The servers each have four 1Gb ports, for a total of 4Gb/s per server. There is a significant difference in bandwidth between the two products. Now, I keep hearing about MPIO, and I'm no expert on it, but I've heard that even with MPIO the SAN can only talk to one NIC port at a time. If that's true, then the bandwidth needle on the EqualLogic just went down. But they say MPIO lets you use all the NICs, so a box with four 1Gb NICs gets 4Gb/s per server.
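For what it's worth, a rough back-of-the-envelope sketch (Python; the 90% efficiency factor and the assumption about whether MPIO spreads I/O across all initiator ports are mine, not vendor figures) of what those port counts imply per host:

# Rough, hypothetical comparison of per-host storage bandwidth.
# Link speeds are nominal line rates; 'efficiency' is a guessed factor
# for protocol overhead, not a measured value.

def host_bandwidth_gbps(link_gbps, num_links, mpio_uses_all_links, efficiency=0.9):
    """Usable Gb/s for one host under the stated assumption."""
    active_links = num_links if mpio_uses_all_links else 1
    return link_gbps * active_links * efficiency

# IBM/FC host: dual 4Gb HBAs, multipathing across both.
fc = host_bandwidth_gbps(link_gbps=4, num_links=2, mpio_uses_all_links=True)

# EqualLogic/iSCSI host: 4x 1GbE, best case (MPIO round-robin) vs. pinned to one NIC.
iscsi_best = host_bandwidth_gbps(link_gbps=1, num_links=4, mpio_uses_all_links=True)
iscsi_worst = host_bandwidth_gbps(link_gbps=1, num_links=4, mpio_uses_all_links=False)

print(f"FC host:    ~{fc:.1f} Gb/s")
print(f"iSCSI host: ~{iscsi_best:.1f} Gb/s if MPIO spreads I/O, ~{iscsi_worst:.1f} Gb/s if pinned to one NIC")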

With all that being said, the IBM solution is only $10K more. I won't give out numbers, but I can tell you that if you know how to work deals you can get fiber for almost the same cost as iSCSI, at least in the entry/mid-range market. Anyway, this may seem like a no-brainer, but I just wanted to see what the experts think about my situation, especially with 10Gb NICs coming out; the EqualLogic is supposed to support that upgrade. I fear that I am losing a lot of administrative functionality and ease of use, which in a small shop is important. I also can't afford to be wrong and end up with a product that will be 'IO'd out', if you know what I'm saying.

Any help would be appreciated.
beaconlightboy (asker) added:
Oh, and note that each vendor is telling me the other's technology is going away: Dell says FC is going away and IBM says iSCSI is going away. I doubt either of them is going anywhere anytime soon. Also, just in case it matters, we will be virtualizing our SQL and Exchange servers. Dell says that's not a problem on iSCSI, but IBM says no way: fiber is best for SQL and Exchange.
SOLUTION posted by Duncan Meyers (member-only content not shown)
Didn't we forget that SSDs rule the IOPS world?

One single Intel X25-E SSD can sustain the IOPS load of 13x 15K rpm HDDs!
So you only need 4x Intel X25-E SSDs!
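
A minimal sketch of that sizing argument (Python; the per-drive IOPS figure and the workload number are illustrative assumptions, not measured values):

# Hypothetical sizing check based on the "1 SSD ~= 13x 15K HDD" ratio above.
# The per-drive IOPS figure and workload are rough assumptions, not vendor specs.
import math

HDD_15K_IOPS = 180                    # assumed random IOPS for one 15K rpm drive
SSD_X25E_IOPS = 13 * HDD_15K_IOPS     # using the ratio quoted in this thread

required_iops = 8000                  # made-up workload figure for illustration
hdd_count = math.ceil(required_iops / HDD_15K_IOPS)
ssd_count = math.ceil(required_iops / SSD_X25E_IOPS)
print(f"{required_iops} IOPS needs ~{hdd_count}x 15K HDDs or ~{ssd_count}x X25-E SSDs (before RAID penalty)")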

Have you looked at 2U-based white boxes (with 4-hour onsite service support) running OpenFiler, like these:
  • A 2U chassis allows up to 24x 2.5" drives (like the SuperMicro SC216) or 12x 3.5" hot-swap drives
  • All SATA configs below use enterprise-class drives with a 1-per-10^15 UBE/BER
  • All SAS configs below use enterprise-class drives with a 1-per-10^16 UBE/BER
  • "Max Capacity (12TB RAID 10 or 16TB RAID 60)" storage server using 12x SATA 2TB
  • "Capacity (6TB RAID 10 / 1900 IOPS)" storage server using 24x 2.5" SATA 7.2k 500GB, $5k
  • "Capacity (3.6TB RAID 10 / 2200 IOPS)" storage server using 12x SAS 15k 600GB, $8k
  • "Capacity (3.6TB RAID 10 / 2900 IOPS)" storage server using 24x 2.5" SAS 10k 300GB, $10k
  • "Mixed (4TB RAID 10 + 1.7TB RAID 5 / 5000 IOPS)" storage server using 16x 2.5" SATA 7.2k 500GB + 8x OCZ Vertex 250GB, $9k
  • "Mixed (3TB RAID 10 + 2.5TB RAID 5 / 7000 IOPS)" storage server using 12x 2.5" SATA 7.2k 500GB + 12x OCZ Vertex 250GB, $11k
  • "IOPS (5TB RAID 60 / 12000 IOPS)" storage server using 24x OCZ Vertex 250GB, $17k
  • "IOPS (3TB RAID 10 / 15000 IOPS)" storage server using 24x OCZ Vertex 250GB, $17k
  • "IOPS (3.2TB RAID 60 / 35000 IOPS)" storage server using 24x Intel X25-M 160GB, $15k
  • "IOPS (1.9TB RAID 10 / 50000 IOPS)" storage server using 24x Intel X25-M 160GB, $15k
  • "IOPS (1.3TB RAID 60 / 40000 IOPS)" storage server using 24x Intel X25-E 64GB, $19k
  • "Max IOPS (0.7TB RAID 10 / 55000 IOPS)" storage server using 24x Intel X25-E 64GB, $19k
For the fun of it, just compare these with the prices offered by your nice big-name vendors!
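
To make the comparison concrete, here is a small Python sketch that turns a few of the configurations above into rough $/IOPS and $/TB figures (capacities, IOPS and prices are copied from the bullet list; treat them as the poster's estimates, not quotes):

# Rough $/IOPS and $/TB for a few of the configs listed above.
configs = [
    ("Capacity, 24x SATA 7.2k 500GB",  6.0,  1900,  5000),
    ("Capacity, 12x SAS 15k 600GB",    3.6,  2200,  8000),
    ("IOPS, 24x OCZ Vertex 250GB",     3.0, 15000, 17000),
    ("IOPS, 24x Intel X25-M 160GB",    1.9, 50000, 15000),
]

for name, tb, iops, price in configs:
    print(f"{name:32s}  ${price/iops:6.2f}/IOPS   ${price/tb:7.0f}/TB")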
No, BigScmuh, I didn't forget.

>One single Intel X25-E SSD can sustain the IOPS load of 13x 15K rpm HDDs!
>So you only need 4x Intel X25-E SSDs!

Yes, absolutely correct. However, there is a world of difference between personal storage flash drives and enterprise flash drives. Flash is the technology that will overtake SCSI and FC, but it is still relatively expensive, and an appropriately configured storage array with conventional disks will provide the performance required with plenty of space. I am quite convinced that we'll see a massive decrease in deployment of Tier 1 disk (15K and 10K FC and SAS) within the next 18 months to two years, to be replaced with flash drives. Smart arrays will have a layer of Tier 0 flash drives, a layer of Tier 2 SATA drives, and smarts in the array to move blocks in and out of flash as host performance requires. EMC already does this in their new high-end arrays (the Symmetrix V-Max) and will release the same technology in their CLARiiON arrays soon.
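
To make the tiering idea concrete, here is a toy Python sketch of block promotion based on access counts. It is a generic illustration only, not EMC's (or anyone's) actual algorithm; the access trace and tier size are made up:

# Toy illustration of automated block tiering: promote the hottest
# blocks to a small flash tier, leave the rest on SATA.
from collections import Counter

FLASH_TIER_BLOCKS = 2                              # tiny flash tier for the example
io_trace = [7, 7, 7, 3, 9, 7, 3, 3, 1, 7, 3, 9]    # made-up block access trace

heat = Counter(io_trace)                           # access count per block
hot_blocks = {blk for blk, _ in heat.most_common(FLASH_TIER_BLOCKS)}

for blk in sorted(heat):
    tier = "flash (Tier 0)" if blk in hot_blocks else "SATA (Tier 2)"
    print(f"block {blk}: {heat[blk]} accesses -> {tier}")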

Your whitebox solution is nifty, but you end up with a box that you have to support yourself: Intel, OCZ and the good folk who developed OpenFiler in their own time won't get out of bed for you at 2:00 AM to resolve an issue with lost data.
Rather, you have to weigh the business risk against the low price. It's one of virtualisation's complicating factors: once you've got 20, 30, 40 or more servers relying on a single piece of physical hardware and it fails, the cost of lost data, lost time and recovery can easily outweigh the hardware savings.
Regarding 24/7 support, OpenFiler has an offering at under $3k per year per node, and even white boxes can be covered by a 4-hour onsite service contract (that is a very common service).

Regarding investment cost, SSDs are cheaper in the IOPS world because you need far fewer SSDs than HDDs... and when you need more capacity, go to the SATA world.

Regarding running cost, one SSD draws about 3W where one HDD draws 20-30W; just evaluate the annual power bill reduction and you'll have some bucks to invest in more service.
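
As a rough illustration of that point (Python; the wattages come from the sentence above, while the drive counts and electricity price are assumptions for illustration):

# Back-of-the-envelope annual power cost, HDD array vs. SSD array.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.12            # assumed electricity price, USD/kWh

def annual_cost(drive_count, watts_per_drive):
    kwh = drive_count * watts_per_drive * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

hdd_cost = annual_cost(48, 25)   # e.g. 48 spindles at ~25W each
ssd_cost = annual_cost(24, 3)    # e.g. 24 SSDs at ~3W each
print(f"HDD array: ~${hdd_cost:.0f}/year, SSD array: ~${ssd_cost:.0f}/year "
      f"(savings ~${hdd_cost - ssd_cost:.0f}/year, before cooling)")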

Regarding the business risk: now that you have seen you can get a RAID 10 + hot spare + serviced white box at 1/3 of the price (at worst), you can buy some spare servers too...

The last one is reliability: SSD reliability statistics are not old enough yet to be really confident in... but they look great (no moving parts).
Thanks for the feedback, guys. I did measure the IOPS for my systems; I just listed everything in terms of space because that's how they sell the systems. Compared to the systems you guys work on, mine is a tiny thing. Our Exchange average IOPS is 20, and we only have 400 users. I can only afford so much, so I spec'd out the best price/spindle quantity I could for each vendor.

Could you explain this in more detail: Disk service time = % Disk Time / Disk Transfers/sec? I'm not getting that formula. How do you determine the response time in milliseconds from it? It doesn't make sense to me.
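
For context, that formula appears to use the Windows PerfMon counters; here is a minimal worked example under that assumption (the 80% busy / 200 transfers/sec figures are made up):

# One common reading of those PerfMon counters (an assumption, since the
# accepted answer isn't visible here): "% Disk Time" is a percentage, so
# divide by 100 to get the fraction of time the disk was busy, then divide
# by the transfer rate to get seconds per transfer.
def disk_service_time_ms(pct_disk_time, transfers_per_sec):
    busy_fraction = pct_disk_time / 100.0
    return busy_fraction / transfers_per_sec * 1000.0   # ms per I/O

# Example: disk 80% busy while doing 200 transfers/sec -> 4 ms per I/O.
print(disk_service_time_ms(80, 200))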

ASKER CERTIFIED SOLUTION (member-only content not shown)
Great postings, guys. Can you just clarify for me the difference between FC disks and SAS disks? I thought FC referred to the transport mechanism from host to SAN, and the disks themselves were either SAS, SATA or SSD. Regards, Simon
At its core, a disk is a disk is a disk (SSDs excluded): same basic principle, spinning platters, read/write heads. SAS/SATA/SCSI/FC are all simply different protocols for the physical drive to communicate with the storage array. SSD is an architectural departure, however, since it is solid state (just flash memory, no spinning platters, no read/write heads). That said, SSDs are made with many different protocol interfaces.

Different protocols allow for different speeds and support different features (like hot-swap, for instance). In general, the old adage still holds true: you get what you pay for. FC drives are designed to be enterprise-class drives, meaning higher MTBF, etc. SATA drives, on the other hand, are generally designed for end-user-level gear, like Acer workstations.
> SAS/SATA/SCSI/FC are all simply different protocols
That's partially correct. The underlying command protocol for SAS, SCSI and FC is SCSI, so if two disks have the same spindle speed, you'll get essentially the same performance. SATA/ATA is a different kettle of fish: those drives have a lower spindle speed at 7200 rpm (WD Raptors are an exception here) and don't have the same on-board smarts as SCSI. For example, a SCSI/FC/SAS enterprise-class drive has two ASICs on the controller board: one handles I/O, the other handles head tracking. A SATA drive has a single ASIC that does both. SCSI has Tagged Command Queueing and tagged command re-ordering, which allow the drive to get clever and re-order commands so that they're handled in the most efficient way possible. SATA II has Native Command Queuing, a subset of TCQ; first-generation SATA and PATA have no command queueing at all. If you're interested, Seagate has an excellent white paper: 'More Than an Interface: SCSI vs. ATA' by Dave Anderson, Jim Dykes and Erik Riedel, Seagate Research. You can find it here: http://pages.cs.wisc.edu/~remzi/Classes/838/Fall2001/Papers/scsi-ata.pdf - it explains why enterprise drives are more expensive, and has some great insights into disk technology in general.

I can't see the SCSI protocol going anywhere anytime soon - servers have to have some standard method of communicating with storage and SCSI does the job pretty well.
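
To illustrate why command queueing and re-ordering matter, here is a toy Python sketch (not any drive's real firmware; real TCQ/NCQ scheduling is smarter than a plain sort) comparing head travel when a queue is served in arrival order versus LBA order:

# Toy illustration of why command re-ordering helps: serving queued
# requests in LBA order reduces total head travel compared with serving
# them strictly in arrival order.
def head_travel(start_lba, requests):
    pos, travel = start_lba, 0
    for lba in requests:
        travel += abs(lba - pos)
        pos = lba
    return travel

queue = [9000, 120, 8500, 300, 8800, 50]     # made-up outstanding LBAs
print("Arrival order:", head_travel(0, queue))
print("Sorted order :", head_travel(0, sorted(queue)))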
WD has started to put two processors in their SATA drives... but one can probably turn a blind eye to that exception.
Does anyone know of some affordable tools that can be used to monitor a SAN? Something similar to Profiler, but that doesn't require me to cut off my left arm.
dlethe is an expert here at EE - he's involved (I believe) with the development of a SAN management tool: http://www.santools.com. It looks pretty groovy. Other options include Symantec's CommandCentral Storage (although that may be one that requires your right arm as well...). EMC has ControlCenter, IBM has its own, as does NetApp; Brocade has a tool, and so do BMC and CA. NetIQ also has some funky tools. I suspect that all of those will require bits and/or pieces of your anatomy...
Disclaimer: I've set up and used both CommandCentral Storage and EMC ControlCenter - they can both do some pretty powerful stuff, but in most instances, after a honeymoon period, they've fallen into disuse.