Suitable disk subsystem for a high I/O load from MS SQL Server

Hi!

We are looking for a suitable disk subsystem for an MS SQL Server 2005 instance that runs under the MS Dynamics AX ERP system.
The ERP team has requested a very high I/O rate: 8,200 IOPS of random writes.
The current disk subsystem, an HP EVA 4100 with 56 disks, does not seem to be fast enough.
The average read response time is 5 ms.
The average write response time is 70 ms.
Details:
- DATA vdisk: 32x 15k disks in VRAID1
- LOG vdisk: 8x 15k disks in VRAID1
- TEMP vdisk: 16x 15k disks in VRAID1
The DATA and LOG disks are dedicated; the TEMP disks are shared.
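
For a rough sense of scale, here is a back-of-envelope sketch (Python) of what 56 mirrored 15k spindles can deliver. The ~180 random IOPS per spindle and the write penalty of 2 for VRAID1 are assumptions, not measured values:

```python
# Back-of-envelope check: can 56 mirrored 15k spindles reach 8,200 random write IOPS?
# Assumptions (not measurements): ~180 random IOPS per 15k drive,
# and a write penalty of 2 for VRAID1 (every host write hits both mirror copies).

PER_SPINDLE_IOPS = 180
VRAID1_WRITE_PENALTY = 2

vdisks = {"DATA": 32, "LOG": 8, "TEMP": 16}  # spindles per vdisk, from the question

for name, spindles in vdisks.items():
    host_write_iops = spindles * PER_SPINDLE_IOPS / VRAID1_WRITE_PENALTY
    print(f"{name}: ~{host_write_iops:.0f} host random-write IOPS")

total = sum(vdisks.values()) * PER_SPINDLE_IOPS / VRAID1_WRITE_PENALTY
print(f"Whole array: ~{total:.0f} host random-write IOPS vs. 8,200 requested")
```

Under these assumptions the whole array tops out around 5,000 host random-write IOPS, well short of the 8,200 target, which is consistent with the 70 ms write latency seen under load.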

Which disk subsystem (storage array) can handle such a heavy write I/O load?
Which one would you recommend for the longer term? We expect the transaction volume to grow quickly.

Gabor
autonetimport asked:

BigSchmuh commented:
A raw IOPS evaluation gives the following results (no redundancy):
- Using 15k SAS HDDs (counted at 180 IOPS each): 46 drives
- Using Intel X25-E or OCZ Vertex 2 Pro SSDs (roughly 13x faster than a 15k SAS drive): 4 drives
     (cf http://it.anandtech.com/IT/showdoc.aspx?i=3532&p=11 )
     (cf http://www.anandtech.com/storage/showdoc.aspx?i=3631&p=22 )
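
A minimal sketch of that arithmetic, using the 180 IOPS and 13x figures assumed above:

```python
# Raw drive-count estimate (no redundancy), reproducing the figures above.
import math

TARGET_IOPS = 8200
HDD_IOPS = 180            # assumed per 15k SAS drive
SSD_IOPS = 13 * HDD_IOPS  # SSD treated as ~13x faster, per the linked reviews

print("15k SAS drives needed:", math.ceil(TARGET_IOPS / HDD_IOPS))  # -> 46
print("SSDs needed:          ", math.ceil(TARGET_IOPS / SSD_IOPS))  # -> 4
```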

Adding redundancy means you may lose 50% of the read IOPS if your RAID HBA cannot balance I/O across the two drives of each mirror.
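
To make that 50% figure concrete, a small illustrative sketch (the pair count and per-drive rate are placeholders):

```python
# Mirrored (RAID 1) pairs: writes always hit both copies; reads only scale
# across both copies if the RAID HBA balances them. Placeholder figures.

PAIRS = 23            # e.g. 46 drives arranged as 23 mirrored pairs
PER_DRIVE_IOPS = 180

write_iops = PAIRS * PER_DRIVE_IOPS           # write penalty of 2
reads_balanced = 2 * PAIRS * PER_DRIVE_IOPS   # HBA reads from both mirror sides
reads_one_sided = PAIRS * PER_DRIVE_IOPS      # HBA reads from one side only: 50% loss

print(write_iops, reads_balanced, reads_one_sided)
```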

Now, regarding architecture, I would go with TWO {DAS on a single server} setups in active/passive, because that is simple to build, maintain, and use under the stress of a disaster recovery scenario (and it is usually even cheaper).

That said, I would keep some SAS drives for logs, backups, and archive usage.

My "<80k" reco for a HA capable dual DAS server :
-Go with a dual socket 2U server with up to 24x 2.5" hot swap HDD (Ex: SuperMicro SC216)
-Use Infiniband HBA and switch to gain a large improvement in io indirect latency (faster than fiber)
-Use 2 Raid 24 ports HBA per server and connect only 12 ports to single interfaced drive (way cheaper than double interfaced ones)
-Use 8x Intel X25-E or Ocz Vertex 2 Pro SSD + some 2.5" SAS drives
-A dual socket MB using Intel Nehalems/Westmere or AMD Magny-Cours if you have some CPU challenge
-A dual socket MB using 18x DIMM slots for a 18x4GB=72GB RAM (8GB sticks are still too expensive) server just to ensure lowering down the IOPS level while raising the performance
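
A loose illustration of that last point (every figure below is a hypothetical placeholder): more RAM mainly helps by absorbing reads into the SQL Server buffer pool, so fewer I/Os ever reach the disks.

```python
# Naive illustration: reads that hit the SQL Server buffer pool never reach
# the disks. All numbers are hypothetical placeholders, not measurements,
# and random writes still have to be flushed to disk eventually.

LOGICAL_READ_IOPS = 6000   # hypothetical read rate from the application
WORKING_SET_GB = 120       # hypothetical size of the "hot" data

for ram_gb in (32, 72):
    hit_ratio = min(1.0, ram_gb / WORKING_SET_GB)   # crude cache model
    physical_reads = LOGICAL_READ_IOPS * (1 - hit_ratio)
    print(f"{ram_gb}GB RAM -> ~{physical_reads:.0f} physical read IOPS left for the disks")
```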

autonetimport (Author) commented:
All disks are FC disks.
What about an upgrade of the current EVA 4100 storage (e.g. to an EVA 6100 or EVA 8100)?
BigSchmuh commented:
To my knowledge, no affordable SSD with an FC interface (let alone a dual FC interface) exists.
==> So instead of 8x $725 SSDs for 256GB of ultra-fast IOPS, you have to stay with 92x 36GB 15k dual-port FC-AL drives at $100 each (about $3k more, plus roughly a $2k additional power bill per year).
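
A quick sketch of that comparison, using only the prices quoted above:

```python
# Cost comparison using the prices quoted above.
ssd_option = 8 * 725          # 8x SSDs behind SAS/SATA RAID HBAs
fc_option = 92 * 100          # 92x 36GB 15k dual-port FC-AL drives
extra_power_per_year = 2000   # figure quoted for the spindle-heavy option

print(f"SSD option: ${ssd_option}, FC option: ${fc_option}")
print(f"FC option costs ${fc_option - ssd_option} more up front, "
      f"plus about ${extra_power_per_year} per year in power")
```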

Maybe it is time to tell your boss that staying on old tech costs money...

andyalder commented:
I have to agree with BigSchmuh on this one. SSDs for the EVA4400 are about $10K each for 72GB; you are better off with locally attached SSDs (about $2K for 120GB in a DL380), or use a Fusion-io ioDrive for even more speed. You can replicate easily enough within SQL Server instead of using shared storage for clustering.

But you haven't told us how much data you're talking about, so maybe it won't all fit on SSDs.

You can certainly upgrade the EVA with more enclosures and loop switches if you really have that much data to store and want to add more spindles to get the IOPS the traditional way.
BigSchmuh commented:
Example budget for a 2U server using DAS:
- SuperMicro SC216A chassis with 24x hot-swap 2.5" SAS/SATA bays: $1100
- Dual-socket 1366 motherboard (X8DAH+-F) with 18x DDR3 slots and 5x PCIe x8: $480
- 2x Nehalem CPUs: E5520 2.26GHz 80W $390, OR X5550 2.66GHz 95W $1000, OR W5590 3.33GHz 130W $1690 = $2000 (2x X5550)
- 18x DDR3-1333 ECC Registered: 6x "3x4GB Wintec 3SR34550K-13" kits at $400 = $2400
- 3x LSI 9240-8i RAID HBAs (8 internal ports, no cache) + cables: 3x $270 + some cables = $1000
- 1x interconnect HBA: $1500 (dual 10Gb Ethernet or dual 10Gb InfiniBand)
- 12x Intel X25-M G2 160GB SSDs: 12x $400 = $4800
- 12x 15k 146GB SAS: 12x $540 = $6480, OR 12x 10k 300GB SAS: 12x $300 = $3600

==> Results range from $17k to $22k per 2U server (no shipping, no 24x7 service; totals are summed in the sketch below), giving:
- Dual Nehalem CPUs
- 72GB RAM
- 1x RAID 10 array (8x SSD), 640GB redundant, for the most intensively accessed data
- 1x RAID 5 array (4x SSD), 480GB redundant, for other data and tempdb
- 1x RAID 10 array (8x SAS), 600GB redundant, for logs and swap
- 1x RAID 5 array (4x SAS), 450GB redundant, for backup and WORM usage
- 10Gb interconnect capability
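
A small sketch summing the quoted parts, to show how the per-server total comes together under the different CPU and SAS choices:

```python
# Summing the quoted parts list to sanity-check the per-server total.
base = {
    "SC216A chassis": 1100,
    "X8DAH+-F motherboard": 480,
    "18x 4GB DDR3-1333": 2400,
    "3x LSI 9240-8i + cables": 1000,
    "Interconnect HBA": 1500,
    "12x Intel X25-M G2 160GB": 4800,
}
cpu_options = {"2x E5520": 2 * 390, "2x X5550": 2 * 1000, "2x W5590": 2 * 1690}
sas_options = {"12x 10k 300GB SAS": 3600, "12x 15k 146GB SAS": 6480}

for cpu, cpu_cost in cpu_options.items():
    for sas, sas_cost in sas_options.items():
        total = sum(base.values()) + cpu_cost + sas_cost
        print(f"{cpu} + {sas}: ${total}")
```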
BigSchmuh commented:
I recommend that the question be PAQed:
- accepting BigSchmuh #28981202
- assisted by BigSchmuh #28988943 and andyalder #29011188