LICOMPGUY asked:

Intel SATA SSD vs SAS 15k 512n drives, RAID 5. Reliability, life expectancy

Hey there

Speccing out a new T640 for a client: small environment, single host. We always use 15k drives: a mirrored pair of 300GB drives for ESXi and a RAID 5 set of 900GB SAS drives, with a global hotspare.
The Dell engineer was strongly suggesting going with Intel SATA SSDs instead of SAS 15k drives.
Over 5 years ago we had an EMC SAN and were going through SSDs constantly, so I was curious to know what other people's experience has been.

Here is what he provided me with - what do you guys think?
"Here is the datasheet for the S4610 960gb SATA Mix Use SSD. Mean Time Between Failure (MTBF) is listed as 2,000,000 hours which is a little more than 231 years. Performance is listed as 51,000 Write IOPS and 96,000 Read IOPS. Lifetime writes are listed as 6PBW (Petabytes Written) which works out to writing about 3.25TB/day every day for 5 years PER DRIVE.
https://ark.intel.com/content/www/us/en/ark/products/134917/intel-ssd-d3-s4610-series-960gb-2-5in-sata-6gb-s-3d2-tlc.html

 

By comparison a 15k SAS drive can reasonably expect about 200 IOPS (not 200,000 but 200). Failure rate info is hard to find but I found an older Dell document listing it as 1,200,000 hours (which really seems pretty high but it was the only number I found and is still 60% of the SSD number). Since SSDs have no moving parts (the solid part of solid state drive) they are much less likely to fail and in this case are roughly 255x FASTER at writing and 455x faster at reading data."
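For anyone who wants to double-check his arithmetic, here is a quick sanity pass over the quoted figures (pure arithmetic on the datasheet numbers, nothing else assumed). Two of the derived values come out slightly different: 2,000,000 hours is closer to 228 years than 231, and 96,000/200 is 480x, not 455x.

```python
# Sanity check of the quoted S4610 figures (straight arithmetic).
HOURS_PER_YEAR = 24 * 365.25          # 8766

mtbf_hours = 2_000_000
print(mtbf_hours / HOURS_PER_YEAR)    # ~228 years (the quote says 231)

lifetime_writes_pb = 6                # rated endurance, petabytes written
days_in_5_years = 5 * 365.25
print(lifetime_writes_pb * 1000 / days_in_5_years)  # ~3.29 TB/day per drive

ssd_write_iops, ssd_read_iops = 51_000, 96_000
sas15k_iops = 200                     # the engineer's 15k SAS estimate
print(ssd_write_iops / sas15k_iops)   # 255x on writes, as quoted
print(ssd_read_iops / sas15k_iops)    # 480x on reads (the quote says 455x)
```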

Thanks guys!
Member_2_231077:

>always use 15k drives, a mirrored pair of 300GB drives for ESXi

Why not use SD cards?
LICOMPGUY (asker):

Hey Andy

So it sounds like you agree. I like the physical access for hot-swapping, so I won't go to SD cards; it reduces the probability of downtime.
Most people would agree SSDs are the way to go, but you can halve that 3.25TB/day-per-drive figure, inasmuch as two drives are written on each host write with RAID. There is some additional write amplification as well, but at least there is spare capacity, so TRIM is not needed.
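A quick sketch of what that halving looks like; the four-drive set and the write penalty of 2 (both mirror members, or data plus parity on a small RAID 5 write) are illustrative assumptions:

```python
# Illustrative endurance budget under RAID write amplification.
# Assumes each host write lands on two physical drives.
rated_tb_per_drive_per_day = 3.29   # from the 6 PBW / 5 year rating
n_drives = 4                        # hypothetical 4-drive RAID 5 set
write_penalty = 2                   # physical drive writes per host write

aggregate = n_drives * rated_tb_per_drive_per_day
host_budget = aggregate / write_penalty
print(f"Sustainable host writes: ~{host_budget:.1f} TB/day")   # ~6.6
```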

5 years ago SSDs could not take the punishment new ones can take.
Hi Andy

I guess I misunderstood you then. Safe to assume I can still do a global hotspare, and hot-swappable drive trays, I imagine?
What about defragging? Data corruption issues, etc.?

Thanks!
Mal Osborne:
Whatever you decide, RAID5 would be a mistake. When performance is important and the workload is anything other than sustained, large-file access, RAID10 is the way to go. A small write to a RAID5 array means reading the existing data and parity, recalculating the parity, and writing both back. This is slow.
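Rough numbers for that small-write penalty, assuming equal drive counts (the four-drive arrays and 200 IOPS per drive are illustrative, taken from the figures earlier in this thread):

```python
# Back-of-envelope random-write IOPS. A RAID 5 small write costs about
# 4 drive I/Os (read old data, read old parity, write new data, write
# new parity); a RAID 10 write costs 2 (one to each mirror half).
per_drive_iops = 200        # the 15k SAS figure used in this thread
n = 4                       # hypothetical 4-drive array

raid5_write_iops = n * per_drive_iops / 4    # ~200 IOPS
raid10_write_iops = n * per_drive_iops / 2   # ~400 IOPS
print(raid5_write_iops, raid10_write_iops)
```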

Also, with modern drive sizes, recalculating parity and rebuilding an array can be a protracted operation. With the flurry of activity during a rebuild, a second drive can sometimes fail, leaving you screwed.

The only time I use RAID5 is for backup storage in a disk-to-disk-to-X setup, where performance is secondary to providing a lot of capacity cheaply, and usage is predominantly large files.

Also, that is not how MTBF works: a device can have an MTBF of 2,000,000 hours and a service life of only 5 years. In a similar way, a healthy 25-year-old human has an MTBF of something like 800 years.
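To make that concrete: an MTBF figure describes the failure rate across a fleet during the drives' useful life, not how long any one drive lasts. A rough conversion to annualized failure rate (AFR), using the two MTBF numbers quoted in this thread:

```python
# MTBF -> annualized failure rate (AFR), a fleet statistic for the
# drive's useful life, not a lifespan prediction.
HOURS_PER_YEAR = 8766        # 24 * 365.25

for mtbf_hours in (2_000_000, 1_200_000):
    afr = HOURS_PER_YEAR / mtbf_hours
    print(f"MTBF {mtbf_hours:>9,} h -> ~{afr:.2%} failing per year")
# ~0.44%/year for the SSD figure, ~0.73%/year for the 15k SAS one;
# either drive may still only be rated for 5 years of service.
```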

I guess it is up to you to decide if the extra cost of SSDs makes sense in your environment. In recent server builds I have specced, 15K drives are difficult to justify. 10K are much cheaper and can be purchased in higher capacities, while SSDs are a little more expensive and significantly faster. 10K drives and SSDs just seem to have better "bang per buck" in most cases.
Also of course, if disk speed is important, make sure you order the server with a higher-end RAID adapter. From what I see on the Dell site, going from a base-model H330 with no cache to an H740P with 8GB is under $500 USD, so it's pretty much a no-brainer.
Hey there

Thank you for sharing your experience with me. RAID is a requirement. My question was based on smaller environments that might run half a dozen VMs on a single host: no super-intense, sizable databases with a large number of IOPS, perhaps a proprietary application with a small SQL backend, a file server, a DC, etc. RAID 10 would be wonderful but is often not in the budget of smaller companies, so I go with RAID 5 plus a global hotspare when RAID 10 is not affordable. I'm also finding out that, unlike spindle drives, when SSDs die they just die, with no pre-emptive drive errors/notifications. These are environments that may not have an IT person on staff, relying instead on visits perhaps weekly or bi-weekly, the eyes of a key employee, and iDRAC/OpenManage warnings.
So yes, SSDs seem to be far faster, but perhaps a little less forgiving than spindle drives, at least in environments without someone on staff or in-house managing the servers.

We have been going with the H730s with 2GB of cache, but I have to see if there are options to up that a bit. Do you feel bumping that up could increase performance? I guess it may depend on the application.

Thanks again!
>when the SSDs die they just die

Electronics can just die, of course, but OMSA should give you a percentage-wear figure; I think the iDRAC does as well.
Interesting, I'll have to look into that. I did get that info from a few Dell engineers: when SSDs go, they just go, whereas with spindle drives you can do a consistency check or get pre-emptive failure info. They said the SSD failure rate seems low, but there isn't a large percentage of SSDs out there yet, so they can't tell if that is due to a small percentage of servers utilizing them or because they are genuinely more reliable.
I have to look into that Andy - thank you.
ESXi is small enough and its read/write volume is very low, so I put it on an SD card or USB stick inside the server, thus freeing up all of the drive bays.
Were you putting the virtual machines' operating system disks on the 300s and using the faster media for data drives?
Hey David

Yes, ESXi certainly is small enough, but an SD failure also means an outage, and there is never a convenient time for that. That is what drives me to put ESXi on the 300s and use the larger 15k SAS drives, RAIDed, for the guests' OS and data partitions.
All good information - thanks Mal, Andy and David!
Do not partition the 300GB disks in software; make two logical disks on them instead, so that as far as ESXi is concerned they are separate disks. It is much easier to manage that way. That's sometimes referred to as a "sliced" array. You do know that Dell has a mirrored SD card option (not hot-swap, admittedly) and that ESXi keeps running even if you pull the SD card out?
Hey there

To be clear, even though it is an excessive amount of space, the volume can be used for installers, or I will even put the backup VM on it. The 300s are generally dedicated to ESXi and are a mirrored set, with a global hotspare. Is that what you were referring to?

Thanks!
I would only have ESXi on it and the spare space as a separate logical disk, so I would create a 10GB logical disk for ESXi and a 290GB one for local data such as templates. You configure that in the PERC BIOS or via iDRAC or any other PERC utility before installing ESXi.
Hey Andy

I have to check, but I believe that is what I have been doing. ;-)

Thank you
Use of spinning disks over 750GB in RAID-5 is now not recommended due to the risk of a second drive failing while the first is still rebuilding (i.e. total data loss); use RAID-6 or RAID10 instead (but be wary of Dell documentation describing RAID10 as mirrored stripes; it is, as we all know, striped mirrors).
The argument that RAID10 is not affordable means you have not costed how much your data is worth, or looked at how much commercial data recovery will cost!

It can also be argued that, with the increased reliability of SSDs, the above restriction on RAID-5 is probably no longer valid!
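To put that RAID-5 warning into rough numbers: the sketch below estimates the odds of hitting an unrecoverable read error (URE) during a rebuild, assuming the common enterprise-drive rating of one URE per 10^15 bits read. The drive size matches the 900GB spec above; the four-drive count is an illustrative assumption.

```python
# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding a degraded RAID 5: the rebuild must read every bit on
# all surviving drives.
drive_tb = 0.9              # 900GB drives, as in the original spec
n_drives = 4                # hypothetical 4-drive RAID 5 set
ure_per_bit = 1e-15         # typical enterprise-drive rating

bits_read = (n_drives - 1) * drive_tb * 1e12 * 8
p_failure = 1 - (1 - ure_per_bit) ** bits_read
print(f"Chance of at least one URE during rebuild: ~{p_failure:.1%}")  # ~2%
```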