Avatar of ged125
ged125

asked on

What is the best way to increase disk throughput?

I recently purchased a Tyan GT20 B5375 barebones server (see link below) to run as an ESXi server.  I have outfitted it with two quad-core Xeons and 16GB of RAM.  At the moment, disk throughput appears to be the biggest bottleneck: I notice a considerable slowdown when doing large backups or clones.  It has two SATA 300 drives.  Is there a RAID controller or other piece of hardware that I could purchase to increase disk performance?  It also needs to be compatible with VMware ESXi.  Thanks in advance for the help.

http://www.tyan.com/product_barebones_detail.aspx?pid=366
Avatar of stany0
stany0

RAID 0 in RAID 1
Well, there are always faster (10K rpm) drives, but I'd recommend a SATA RAID controller running RAID 1 (mirroring). Faster reads plus redundancy. Definitely a dedicated RAID controller, though - it's better set up for multiple simultaneous reads.

And do your backups at night. :)
"RAID 0 in RAID 1"? Huh?
I hate RAID 0. There may be a little speed gain, but if you lose one drive you lose them both. With RAID 1 if you lose one drive you lose nothing. Storage is cheap enough that I definitely would sacrifice space for redundancy.
Avatar of ged125

ASKER

Any specific RAID controller in mind?  The built-in RAID controller on the motherboard is not recognized by VMware ESXi.
@briandunkle
"RAID 0 in RAID 1" refers to RAID 0+1, which provides the redundancy. If a drive dies, you are still OK. In fact, a second drive could die and you still might be alright (but you might not be, depending on which drive it is). Either way, it provides at least the same redundancy as RAID 1.
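To make the "might be alright" part concrete for the striped-mirrors layout (what the next comment calls RAID 10), here is a minimal sketch - the four-drive A/B + C/D pairing is a hypothetical example, not something from this thread:

# Which second-drive failures does a 4-drive RAID 10 survive?
# Mirror pairs: (A,B) and (C,D); the array is lost only if an
# entire mirror pair fails.
from itertools import combinations

MIRROR_PAIRS = [{"A", "B"}, {"C", "D"}]

def survives(failed: set) -> bool:
    """True while at least one drive in every mirror pair is alive."""
    return not any(pair <= failed for pair in MIRROR_PAIRS)

for first, second in combinations("ABCD", 2):
    failed = {first, second}
    print(f"lose {first}+{second}: {'OK' if survives(failed) else 'array lost'}")
# 4 of the 6 two-drive combinations survive, so after one failure,
# 2 of the 3 remaining drives can still fail safely.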
Ah. I just call that RAID 10, maybe incorrectly.
I was going off of the two drives thing, but yeah, if you have room for more, that's a good way to go. But then the popular choice is usually RAID 5. RAID 5 is voodoo, though. :)
Avatar of ged125

ASKER

I need a specific hardware recommendation; I haven't been able to find something that works in this chassis, is compatible with ESXi, and will increase my performance.
Agreed. RAID 5 is usually good enough for most applications. RAID 6 is better if possible, but *usually* it's not needed until you get into the medium-to-large or enterprise level.
Single-channel U320 SCSI host bus adapter.
Also use RAID 1+0.
Avatar of raj27962
Any parity-striping solution will incur an overhead - i.e. a slowdown due to having to compute and write parity - so avoid RAID 5 and especially RAID 6. If you want to solve your disk performance problems, you're thinking the wrong way: use iSCSI instead, with other servers or a NAS as the iSCSI target. I found that instead of getting more SAS/SCSI disks for my servers, it was easier to get a couple of external NAS units with gigabit links, mirror them for redundancy, and present them to the VMware host as iSCSI targets - I saw a 10x increase in disk performance!
Assuming you are not looking for an external storage solution, I think you can only use your 4x 3.5" SATA II hot-swap bays on your embedded ICH9R RAID controller.

Now, with 4x SATA drives as a maximum, your best solutions are (in descending price order):
a) RAID 10 on 4x SSD drives (for 64GB, 128GB, 240GB or 500GB usable)
Please limit your SSD choice to:
- Intel X25-E SLC 32GB $400 (or 64GB $800) if you expect an intense random-write IO pattern
- OCZ Vertex MLC 120GB $400 (or 250GB $800) otherwise

b) RAID 10 on 4x WD Velociraptors (for 300GB or 600GB usable)
WD Velociraptors are 10,000 rpm drives and deliver the fastest random IO in the SATA world.
They store 300GB for $230 (or 150GB for $180).

c) 2 drives only
-2 drives in RAID 1 on SSD
-2 drives in RAID 1 on WD Velociraptor

d) RAID 10 on 4x SATA 7200 rpm drives (for 1TB, 1.5TB or 2TB usable)
Using enterprise editions of SATA drives in 500GB, 750GB or 1TB gives a good reliability/price/performance mix, but may not be sufficient to support your random IO pattern.

e) Combination of the above solutions
-2 drives in RAID 1 on SSD + 2 drives in RAID 1 on WD Velociraptor
-2 drives in RAID 1 on WD Velociraptor + 2 drives in RAID 1 on SATA 7200 rpm


Reco)
Look at:
-your storage space needs
-your available budget
0/ Go with (a) SSD, or (b) if you can afford it; (c) allows a new ESXi installation on the 2 new drives while keeping the 2 current drives for some storage.
1/ Definitely forget about RAID 5 or 6 arrays with so few drives.
2/ Define your array using a 64KB stripe size.
3/ Align your VMFS partitions to a 64KB multiple (see the sketch below): cf http://www.vmware.com/pdf/esx3_partition_align.pdf
4/ In the VM, format your partitions using a 64KB allocation unit size.
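If you want to sanity-check the 64KB alignment math from step 3, here is a minimal sketch (my illustration, not from the VMware paper; the 512-byte sector size and the sample start sectors are assumptions, 128 being the 64KB-aligned start the paper works from):

# Quick check that a partition's starting offset is a multiple of the
# 64KB array stripe size. Assumes traditional 512-byte sectors.

SECTOR_SIZE = 512          # bytes per sector (assumed)
STRIPE_SIZE = 64 * 1024    # 64KB stripe size from step 2

def is_aligned(start_sector: int) -> bool:
    """True if the partition's byte offset lands on a stripe boundary."""
    return (start_sector * SECTOR_SIZE) % STRIPE_SIZE == 0

for start in (63, 128, 2048):  # 63 = old fdisk default, 128 = 64KB, 2048 = 1MB
    offset_kb = start * SECTOR_SIZE / 1024
    print(f"start sector {start:4d} -> {offset_kb:6.1f}KB, aligned: {is_aligned(start)}")

The old fdisk default of sector 63 lands at 31.5KB, so every 64KB guest IO straddles two stripes; starting at sector 128 (or any multiple of it) avoids that.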


What I would do)
If I didn't need more than 200GB usable and had $1600, I'd go with 2x 250GB OCZ Vertex in RAID 1 and use the 2 current drives for disk backup.
ASKER CERTIFIED SOLUTION
Avatar of Callandor
Callandor

This solution is only available to members.
I understand the ICH9R embedded controller is not supported in AHCI/RAID mode on any ESXi release.

For the RAID 10 usage I described above (considering your 4-drive maximum), you may replace the ICH9R with a low-profile PCIe card like the Adaptec 2405, which is officially supported for VMware ESXi and costs about $210.

Don't forget to add:
  • a "SFF-8087 to 4x SATA" cable ($20)
  • an M2083 Tyan riser card ($30)
Avatar of ged125

ASKER

Wow, thanks for all the incredible feedback.  At this point I am going to order an Adaptec 2405, riser card and cable.  I eliminated the Areca 1210 due to comments on Newegg's website about it not being compatible with the Tyan Tank GT20.  I will update the thread in a few days when my equipment arrives and I have had time to install and validate.

Thanks again for all the advice.
I guess someone wasn't happy with the Areca, while another said "It works fine in the Tyan GT20 with the S2865 motherboard although you'll need new longer SATA cables."  Hard to tell why it wouldn't work for one person versus working for another (both were the same motherboard).
Avatar of ged125

ASKER

I'll have to review the two again.  I am placing the order in the morning.  Is there anything that sets the Areca 1210 apart from the Adaptec 2405, besides the extra $100?
The Adaptec supports RAID 0, 1, and 10.  The Areca supports RAID 0/1/1E/3/5.  For RAID-5, you need a parity processor, which probably accounts for the additional cost.
Avatar of ged125

ASKER

Callandor - What do you think of the Adaptec 3405?  It seems to be the equivalent of the Areca.  I feel more comfortable going with Adaptec since I have used them before, but I don't want to sacrifice performance.
You would take a serious performance hit using RAID 5 instead of RAID 10 with only 4 drives.

Apart from that, any ESXi supported card works.

I pointed out the Adaptec 2405 because:
-it is clearly and officially supported for VMware ESXi
-it fits any common riser card in a 1U server
-it is not too expensive
-it can handle 4 SSD drives at full speed (800MHz IO chip on board)
-it does not support the performance-killing and expensive RAID 5

One point: I don't know if the 0.5m SFF-8087 to 4x SATA cable included in the 2405 package will be long enough for your server.
Considering the Adaptec 2405 ($240 with the PCIe riser card) with 4 drives maximum, I updated the "best solutions" to reflect its SAS drive capability (in performance/price ratio order):

a) Random read/write-intensive IO pattern: 32GB SSD + 300GB SATA usable for $800 (option: 32GB to 64GB for another $800), using RAID 1 on 2x Intel X25-E SLC SSDs + RAID 1 on your 2 current SATA drives

b) Random read-intensive IO pattern: 120GB SSD + 300GB SATA usable for $800 (option: 120GB to 250GB for another $800), using RAID 1 on 2x OCZ Vertex MLC SSDs + RAID 1 on your 2 current SATA drives
==> On the SSD array: 10x slower random write than (a)
==> On the SSD array: same random write as (c)

c) High-end 1U server random IO pattern: 4x 15k rpm 147GB SAS drives for 300GB usable at $750 (including an SFF-8087 to 4x SAS drive cable)
==> On the whole array: 10x slower random read than (a) and (b)...but 100% faster than (d)

d) Normal server random IO pattern: RAID 10 on 4x WD 300GB Velociraptors (including your 2 current drives) at $460
==> 50% slower random IO than (c)


Reco)
Look at:
-your random IO usage needs
-your budget

0/ Go with solution (a) or (b) if you can stay below 200GB usable (+ your 2 current drives as backup).
1/ Definitely forget about RAID 5 or 6 arrays with so few drives.
2/ Define your array using a 64KB stripe size.
3/ Align your VMFS partitions to a 64KB multiple (see the alignment sketch earlier in the thread): cf http://www.vmware.com/pdf/esx3_partition_align.pdf
4/ In the VM, format your partitions using a 64KB allocation unit size.
The Adaptec 3405 looks like a good performer with a lot of configuration options: http://www.atomicmpc.com.au/Review/90087,adaptec-3405-sas-raid.aspx.  It would probably go well with your hardware and ESXi, and it also supports SAS drives, which would further increase disk performance (though they do cost more).
OK, yes, a new controller would help, but you need to evaluate other things as well. Number one: backups. Sorry to say, but a lot of backup software drags a system down no matter how fast the disk subsystem is - the backup will load it as much as it can. The same goes for cloning: the computer will try to complete the task as fast as it can and bog the rest of the system down. Sorry, it's just the nature of the beast. To get around the backup problem, you should think seriously about going to a CDP-type solution, where backups are made in small increments throughout the day, reducing the overhead of each backup. For cloning there is no help - just try to do it at a slow time of the day (or night). Another thing that would help is more disks in the RAID array: the more spindles you have, the faster the system will transfer data. RAID 5 or RAID 10: RAID 5 will be the least expensive, as you only lose one disk per set for redundancy and you can start with only three disks; RAID 10 is faster, but you need an even number of disks and you lose half the usable space to mirroring.

Good luck either way.
The Adaptec 3405 is $90 more than the 2405 just to get RAID 5, which will slow down your random IO write performance.

More info about the "read-modify-write" cycle is in the Wikipedia RAID article, under "RAID 5 performance":
The read-modify-write cycle requirement of RAID 5's parity implementation penalizes random writes by as much as an order of magnitude compared to RAID 0
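To put rough numbers on that penalty, here is a back-of-the-envelope sketch (my own illustration, not from the article; the per-disk IOPS figure is an assumption): a small random write costs RAID 5 four physical operations (read old data, read old parity, write new data, write new parity), versus two for RAID 10 (one write per mirror side).

# Steady-state random-write IOPS for a 4-drive array, no write cache.
DISKS = 4
RAW_IOPS_PER_DISK = 120  # ballpark for a 7200 rpm SATA drive (assumed)

raw_iops = DISKS * RAW_IOPS_PER_DISK

# RAID 10: each logical write = 2 physical writes (one per mirror side).
raid10_write_iops = raw_iops / 2

# RAID 5: each small logical write = 4 physical operations
# (read data, read parity, write data, write parity).
raid5_write_iops = raw_iops / 4

print(f"RAID 10 random-write IOPS: {raid10_write_iops:.0f}")  # 240
print(f"RAID 5  random-write IOPS: {raid5_write_iops:.0f}")   # 120

This steady-state math gives 2x rather than 10x; the gap widens when a cheap controller serializes the read-modify-write cycle, and narrows with a battery-backed write-back cache, which is where the disagreement below comes from.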
Yes, RAID 0 is faster but offers zero data protection. RAID 5 is slightly slower than RAID 0, but if you lose one hard drive you still have all your data and keep running. You choose: super performance, or being able to keep running (with reduced performance until the drive is replaced and the set is rebuilt) if a hard drive fails.
@lesterpenguinne: reading my post above would have shown you that I am recommending RAID 10.
The Wikipedia RAID 5 article points out the "read-modify-write" cycle, which renders RAID 5 arrays 10 times slower than RAID 0/1/10 on random write IO.

Most server IO usage patterns rely on random IO performance and are not very concerned with sequential IO (where RAID 5 is good).
>The Wikipedia RAID 5 article points out the "read-modify-write" cycle, which renders RAID 5 arrays 10 times slower than RAID 0/1/10 on random write IO.

That might be an exaggeration - I agree RAID 10 performs better than RAID 5, but not 10 times better, unless you are comparing against a cheap motherboard RAID 5 setup. Here is a test done on an Areca 1220: http://www.xbitlabs.com/articles/storage/display/areca-arc1220.html. I agree that if one can afford the disk space, RAID 10 is the way to go for best performance.
I agree RAID 5 random-write IO can do a bit better than "10x slower than RAID 10" (let's say 2x slower) IF (and only IF) the hardware RAID card has a battery-backed write-back cache...and the battery pack is another $100.
Avatar of ged125

ASKER

After further research I found that the Areca card was consistently rated highest among its competitors.  I purchased the 1210 and got it running on ESXi.  Throughput is much better - I can clone a 10GB VM in about two minutes.

Thanks to everyone who participated!