Bluelude1 asked:
Do I need Enterprise-Class Hard Drives in a Non-Mission-Critical Server?

I have an SBS 2003 Server running in my office for about 3 users. It basically handles my Exchange accounts, QuickBooks, and shared storage of various files. I currently have 2x Seagate 250GB enterprise-class hard drives running in a RAID 1 configuration, with a 400GB WD installed for backup, and I am slowly creeping toward capacity on the Seagates.

I was wondering whether there is any reason to spend twice the cost of desktop drives on enterprise drives, given this amount of redundancy, if I need to upgrade in the near future?
SOLUTION by exx1976
Cliff Galiher:
Quick answer: ****HIGHLY**** recommended. Desktop hard drives are designed with desktop use in mind; they aren't rated to spin 24 hours a day, because most desktops employ power management to spin them down. Wear and tear, thermal stress, and other factors come into play at that point. One of the more common service calls I used to go on, back when I still had to do hardware service calls myself, was for "home built" or whitebox servers where the system builder skimped on hardware to save a buck or, worse, to increase margins while giving the customer the shaft.
Regardless, the point is that desktop drives have a very high failure rate when used as server drives, simply because they are being expected to perform in a harsh environment well outside their design parameters. Even if the system isn't "mission critical," I assume the data is important (QuickBooks, etc.), and the time you'd spend recovering from a failure would pay for the difference in drive costs.
-Cliff
 
I don't think you particularly need enterprise-class drives if you use a good backup strategy. Even enterprise-class drives fail when you least expect it.
IMHO, WD Caviar Black drives in a RAID 1 mirror, plus backup to external storage, are enough for a non-mission-critical server.
These WD drives are good: http://www.wdc.com/en/products/Products.asp?DriveID=503
Even if the server is not mission critical - is it business critical?
Bluelude1 (ASKER):
I am a firm believer in preventative planning, but I was trying to figure out where I hit the point of diminishing returns.
Well, just look at it this way.


If the drives failed and you had to recover from backup, how long would that take? And we're assuming here that you don't actually lose any data in the process and the backup is successful. How long would it take you to restore the system to its previous functionality? I'm betting you'd have to go and buy drives, because you probably don't stock spares. So there's 2 or 3 hours before you can even get started, assuming you can find the replacements locally. Then maybe an hour to put the operating system back on, a whole bunch of configuration (another hour or two), and then begins the task of restoring the data. By the time you're done, it could easily be 20-24 man-hours of work.

Now, on the cheap side, figure out how much you're paid per hour. If that number * 24 is more than the cost difference of the drives, it's a no-brainer. If it's NOT, then consider this: what other, more valuable tasks could you be performing for the company that have the potential to generate revenue during those 24 hours, and what is the $$ value of those tasks? Also consider that even though it's not mission-critical, the data is still useful and needed, and the outage will be an inconvenience for the other employees. Factor that into the equation, and the cost difference begins to look a LOT smaller...


HTH,
exx
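
To make the arithmetic above concrete, here's a minimal sketch of the break-even calculation. Every figure in it (drive prices, hourly rate, rebuild time) is an assumption for illustration, not a number taken from this thread:

```python
# Break-even sketch: downtime cost of one rebuild vs. the enterprise-drive premium.
# Every number below is an assumption for illustration, not a figure from the thread.

hours_to_rebuild = 24            # buy drives, reinstall OS, reconfigure, restore data
hourly_rate = 50.0               # what an hour of your time costs the business
desktop_drive_price = 90.0       # assumed street price per desktop drive
enterprise_drive_price = 180.0   # assumed street price per enterprise drive (~2x)
drives_in_mirror = 2             # RAID 1 pair, as in the asker's setup

premium = (enterprise_drive_price - desktop_drive_price) * drives_in_mirror
downtime_cost = hours_to_rebuild * hourly_rate

print(f"Enterprise premium for the mirror: ${premium:.2f}")
print(f"Cost of one rebuild from backup:   ${downtime_cost:.2f}")
if downtime_cost > premium:
    print("One avoided rebuild more than pays for the enterprise drives.")
else:
    print("Desktop drives stay cheaper even after pricing in one rebuild.")
```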
Nobody can predict the future. Statistically, you don't have a large enough sample size to ensure you make the proper decision. All drives have a 100% certainty of eventual failure. Statistically, the odds favor enterprise drives.

If you are the kind of person who buys auto insurance with a low deductible, then buy the more expensive enterprise drives, 'cuz you are a pessimist at heart. Otherwise, be an optimist, roll the dice, and maybe you will save some money :)
=) Agree with dlethe. As I said before - all drives die. And IMHO the phrase "Enterprise Class" is just a marketing slogan.
In retrospect, I should have answered: "Hey, if I could tell you whether or not your new disks are going to fail before you replace them, then I wouldn't be behind a keyboard, I'd be on my 500' yacht."
SOLUTION
I find the debate over whether "enterprise class" is real terminology or just marketing hype to be hilarious. The OP didn't ask anything about whether enterprise class was stamped on the drive or not.
Anyway, the key to using less expensive equipment is redundancy, thus RAID.  
And, of course, backups - but more importantly, the known ability to restore efficiently. I have been in situations where the client never tested the restore; in that case they should assume their data is not backed up. I am working with another customer *today* whose restores run nearly 10x the time required to perform backups, so it took them the entire working day to restore a database that should have taken 2 hours. If you can't restore it, it isn't backed up.
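
As a rough way to act on that advice, here's a minimal sketch that checks whether a full restore would fit inside an acceptable downtime window. The sizes and throughput figures are assumptions for illustration; in practice you'd plug in numbers measured from an actual test restore:

```python
# Restore-window check: will a full restore finish inside the downtime you can tolerate?
# Sizes and throughputs below are assumptions; measure your own with a test restore.

backup_size_gb = 250              # assumed size of the full backup set
backup_throughput_mb_s = 80.0     # assumed measured backup speed
restore_throughput_mb_s = 8.0     # assumed measured restore speed (often far slower)
acceptable_downtime_hours = 8.0   # assumed: one working day

def hours_to_move(size_gb, mb_per_s):
    """Hours needed to move size_gb at a sustained mb_per_s."""
    return size_gb * 1024 / mb_per_s / 3600

backup_hours = hours_to_move(backup_size_gb, backup_throughput_mb_s)
restore_hours = hours_to_move(backup_size_gb, restore_throughput_mb_s)

print(f"Backup runs in  ~{backup_hours:.1f} h")
print(f"Restore runs in ~{restore_hours:.1f} h")
if restore_hours > acceptable_downtime_hours:
    print("The restore blows past the downtime window -- test it before you need it.")
else:
    print("The restore fits inside the downtime window.")
```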
 
Excuse me, but actually, that's the entire content of the question - Enterprise class drives vs consumer-grade drives.  Perhaps re-reading the original question would help you formulate a more on-point answer?


-exx
Mike Rolfs:

True enterprise-class drives are neither faster nor bigger. Rather, they are generally single-platter, single-sided, and run at slower spindle speeds. This reduces vibration and temperature build-up, as well as reducing the number of moving parts to a minimum.
Well, Mrolfes, I have been in the storage biz for over 20 years, working for manufacturers. Desktop drives are NOT designed for 24x7. I have been a field engineer for companies that ship over a million drives a year, and I know what the numbers are for various makes and models. You are also quite wrong on one of the reasons for drive failures: heat is actually a friend, cold is the enemy (up to a point).


Sorry, didn't mean to flame you, but if you don't think there are real differences in data integrity, reliability, duty cycle, and environmental tolerance (forget performance benefits) between enterprise and consumer class drives, then you need to read up.



Certainly didn't mean to ruffle any feathers, I just read it differently.
An interesting point is that RAID is currently defined to mean a "redundant array of independent disks".  I say currently because the original definition was "redundant array of INEXPENSIVE disks".  The manufacturers didn't like the implications of that terminology so they modified the definition, but the facts still stand.
Good luck....
ASKER CERTIFIED SOLUTION
Woohoo flame war!  Bring it on!  Oh wait this isn't 4chan, nevermind...

I'll admit it's been a few years since I worked directly with storage at a low level, but as far as I know the basics haven't changed appreciably. Also, I'm not sure where I said heat is the enemy; rather, I said temperature fluctuation is the enemy. I don't have as big a sample size to work with, but my observations from experience point to environmental factors as the most common cause of accelerated drive failure (remember, they will all fail at some point).
"Enterprise class" is definitely not marketing hype. If you actually read specifications, you will see, as mentioned in one of the very early posts, that even enterprise SATA drives vs consumer SATA drives have different MTBF statistics.
In short, do all drives fail eventually? Of course. But when and how they fail is consistent enough that manufacturers can actually generate statistical numbers to attach to those failures. And enterprise class drives are engineered to run longer, hotter, and thus have a longer average (mean) time between failures. In most countries, when numbers such as these are published, LAW SUITS can be filed if they are proven to be false. Non-numerical data can be hype, but hard numbers are factual, testable, and in our litigious society, not usually disputed unless fraud is occurring.
With that said, I will reiterate: this has been borne out in real-world experience. Talk to any IT service provider whose technicians do significant hardware support. Look at production hard drive failures (even in desktops, where usage patterns are lighter than a server's) and see which types of drives fail the most, taking into account the ratio of drives in use (obviously desktop drives are more widely deployed, so their raw failure counts will be skewed if you fail to account for that). But if you break it down by product type, desktop drives fail more often.
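
To show what those MTBF figures actually translate to, here's a minimal sketch that converts a datasheet MTBF into an annualized failure rate, assuming the usual constant-failure-rate (exponential) model. The MTBF values are assumed, representative numbers, not quotes from any specific drive:

```python
# Convert datasheet MTBF figures into an annualized failure rate (AFR) so that
# desktop and enterprise drives can be compared directly.
# MTBF values below are assumed, representative figures, not from a specific datasheet.

import math

HOURS_PER_YEAR = 8766  # 24 * 365.25

def annualized_failure_rate(mtbf_hours):
    """Probability a drive fails within one year of 24x7 duty,
    assuming a constant failure rate (exponential model)."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

drives = {
    "desktop SATA (assumed)": 700_000,
    "enterprise SATA (assumed)": 1_200_000,
}

for label, mtbf in drives.items():
    afr = annualized_failure_rate(mtbf)
    print(f"{label:26s} MTBF {mtbf:>9,} h -> AFR {afr * 100:.2f}% per year")
```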
---
On a complete aside, it is clear from some of the posts that not everybody understands hardware and storage. Without naming names, claims have been made here (that desktop drives are designed to run 24/7, that enterprise drives are generally slower than desktop drives, and a multitude of others) that can be patently disproven by browsing drive specs or by a quick wander through any technical forum written by professionals who are hired to know these things. That's not a sleight on EE, but anybody can contribute here and there is no decisive indication of anyone's technical proficiency either way, whereas you can usually rely on the technical specs you get from, say, Ars Technica or Tom's Hardware having been vetted. So take everything you read here with a grain of salt. The waters have been clouded enough that I wouldn't trust anything in this thread at face value (even the stuff I just wrote, classic catch-22); turn to third-party sources to verify each claim.
---
And on a much further aside: while it is true that the definition of RAID did change, at the time RAID was defined, "inexpensive" was not meant to convey desktop vs enterprise, or even ATA/IDE vs SCSI. Back then, mainframes were still prevalent, many companies still had completely proprietary schemes for data protection, and the drives were very proprietary and *very* expensive. So "inexpensive" only meant that you could purchase a replacement disk without jumping through hoops, going through a specific vendor, matching serial numbers, and other such nonsense.
As those other systems died, there was a desire to change RAID to more accurately reflect the market changes that had occurred.
And strangely, we've come full circle. New HP and Dell servers require specific drives on their RAID controllers, so buying a generic add-on or replacement is not an option anymore. While the connectors and instruction sets are no longer proprietary, the lockdown is performed by the RAID firmware, and thus the restriction against generic disks is a throwback to days gone by. What was old is new again.
And now I'm done walking down memory lane.
Hope some of that helps, or at least helps shine a light on what has become a ridiculously complex answer to what should have been a simple question. And I hope the original poster can find some value in the mess of responses s/he got.
Thanks,
-Cliff
 
We're geeks, we have to make simple things complex.  It's what we do ;)

For an extremely good (if slightly dated) reference on storage technology, I would highly recommend reading through Storage Review's Reference Guide located at:

http://www.storagereview.com/storage_reference_guide