mrmut (Croatia)

asked:

Consumer SATA SSD disks for server RAID?

Hello all!

I am building a server for a company, and I have wondered about putting consumer SSD drives in a server RAID 5 with (say two?) hotspares.

Any word against it?
John (Canada)

I would say that is not a good idea. As it is, SSD drives can be subject to catastrophic failure, and consumer devices even more so. Some commercial SSD drives have been designed to be self-repairing and so are more robust.

I would only use commercial drives.
mrmut (Asker)

Any empirical info?

I intended to build RAID 6 with two or three hotspares.
John (Canada)

The data I have is from reading articles and vendor specification sheets; I have not seen a summary.

Here are some articles of interest on the topic.

http://www.extremetech.com/computing/142096-self-healing-self-heating-flash-memory-survives-more-than-100-million-cycles

http://www.zdnet.com/self-healing-flash-for-infinite-life-7000008182/

As you can imagine, such drives will be more expensive (read: commercial), so I would not use consumer drives (read: cheap at the expense of quality).
mrmut (Asker)

That is fine, but I am interested in what happens to consumer SSDs when they are _used_, not in marketing pitches.

As it is, of all the companies I manage, only one has a SCSI/SAS-based server, and none has had any problems. I am trying to verify the same for consumer SSDs.
Robert Retzer
In a server environment I would never use SSD drives; you are asking for trouble if you do. As John mentioned, they are subject to catastrophic failure: when an SSD dies there is often no way to recover the data, unlike a hard drive with platters, which a data recovery shop can often salvage even when the drive is dead. Even using a RAID system you could still be subject to catastrophic failure.
mrmut (Asker)

Thank you for your reply, web_tracker. Could you please explain what kind of catastrophic failure you mean in the case of a RAID 6 array with two-drive redundancy + 2 hotspares?
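For context, here is a minimal sketch of the redundancy arithmetic behind that question, with hypothetical drive counts and sizes. One thing it makes explicit: hotspares do not raise the number of simultaneous failures an array survives; they only shorten the window until redundancy is rebuilt.

# Minimal sketch of RAID usable-capacity / fault-tolerance arithmetic.
# Illustrative only: real arrays depend on the controller's behavior.

def raid_summary(level, members, drive_gb, hotspares=0):
    """Usable GB and guaranteed simultaneous failures tolerated."""
    if level == "raid5":
        usable, tolerated = (members - 1) * drive_gb, 1
    elif level == "raid6":
        usable, tolerated = (members - 2) * drive_gb, 2
    elif level == "raid10":
        usable, tolerated = members * drive_gb // 2, 1
    else:
        raise ValueError(level)
    # Hotspares do NOT raise the simultaneous-failure tolerance; they only
    # shorten the unprotected window by rebuilding automatically.
    return usable, tolerated, members + hotspares

# Hypothetical example: six 500 GB members in RAID 6, plus two hotspares.
print(raid_summary("raid6", members=6, drive_gb=500, hotspares=2))
# -> (2000, 2, 8): 2 TB usable; any two drives may fail at once.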
ASKER CERTIFIED SOLUTION
John (Canada)
(solution text available to members only)

SOLUTION
(solution text available to members only)
@mrmut - Thank you and good luck setting up your server.
mrmut (Asker)

Thanks to you two :-)

To be honest, the main point has been the firing line: it is not such a good idea to try to save a few bucks for the client and then risk a bigger problem down the road.
That usually does the trick :)

It helps to remind experts that their clients/employers will hold them accountable for the successes and failures of the solutions they recommend.
I have two identical servers running:

Windows 2012 R2
Gigabyte GA-990FXA-UD3 rev. 3.0
FX-8350
32 GB DDR3 2100
2x 500 GB SSD (RAIDed through a RocketRAID SATA 6 Gb/s controller)

Both have run for about two years now without a single instance of downtime. Most recent desktop motherboards have very high-quality metal capacitors (all of them, not just a few like in the past), so motherboards are much more stable than they were 3-5 years ago.

I run six virtual guests on each server, so when something happens to one of them, or when I need to update, I move the guests to the other server. Works well.

The problem is, a consumer SSD loses a huge amount of IOPS and speed within about 10 minutes of the virtual machines running. The advertised IOPS figures are for peak usage, not sustained IOPS. After about 30 minutes, IOPS will drop to 15%-30% if you run server applications and virtual machines on a consumer SSD. But it is still faster than a mechanical disk, in my opinion.
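If you want to see that drop yourself, the rough sketch below times 4 KiB synchronous random writes against a scratch file and prints IOPS per minute. The path, file size, and intervals are hypothetical placeholders; a dedicated benchmarking tool will give more rigorous numbers, but the downward trend should still show up.

# Rough probe of sustained random-write IOPS over time, to watch a
# consumer SSD fall from its peak (cached) rate to its sustained rate.
# Run it against a scratch file, never a disk holding data you care about.
import os, time, random

PATH = "/mnt/scratch/iops_test.bin"   # assumption: a file on the SSD under test
FILE_SIZE = 4 * 1024**3               # 4 GiB test file
BLOCK = 4096                          # 4 KiB writes
RUNTIME_S = 30 * 60                   # watch for 30 minutes
REPORT_EVERY_S = 60

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, FILE_SIZE)

buf = os.urandom(BLOCK)
blocks = FILE_SIZE // BLOCK
start = last = time.time()
ops = 0
while time.time() - start < RUNTIME_S:
    os.pwrite(fd, buf, random.randrange(blocks) * BLOCK)
    os.fsync(fd)                      # force each write through the cache
    ops += 1
    now = time.time()
    if now - last >= REPORT_EVERY_S:
        print(f"{int(now - start)}s: ~{ops / (now - last):.0f} IOPS")
        ops, last = 0, now
os.close(fd)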

Good luck.
mrmut (Asker)

Thanks a lot for updating on an old thread!

In the meantime I have decided not to move to consumer (or server) SSDs on any of my servers. The problem is the resource overhead that I would need to create and maintain to make sure I could recover a failed system - an expense I would not be able to justify: RAID n drives + hotspare + backup drive of the same type + 2 x backup drive + ...

SSDs, at least in my experience up until now, have proved to have the most effect on users' machines, not on servers. Now, I am sure it would be great to have SSDs on servers, but at current prices that is not an option, especially given that normal server HDDs are peanuts in price. For example, a RAID 10 array made from six 600 GB VelociRaptors (WD enterprise SATA) is very cheap, extremely robust, reliable, and works fantastically.
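As a quick sanity check of that example (in RAID 10, mirrored pairs give half the raw capacity):

# Six-drive RAID 10 from 600 GB disks: half the raw capacity is usable.
drives, size_gb = 6, 600
print(f"usable: {drives * size_gb // 2} GB")   # -> usable: 1800 GB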