sglee asked:

Enterprise Grade Hard Drive Recommendation

Hi,
I have a relatively old server that I am trying to use as a Hyper-V replica server.
It has an Intel® Server Board S5500BC, a Xeon E5645 2.4 GHz 12M (6-core) CPU, 32 GB RAM, and an LSI Logic SAS9260-8I SGL controller.
I am going to set up the Hyper-V OS on two SSD drives in RAID 1, and I am looking for enterprise-grade SATA 7200 RPM hard drives from either Seagate or WD. Preferred capacity is 8, 12, 14, or 16 TB in RAID 1.
What make/model of hard drive has the best track record? What make/model would you recommend?

Thanks.
ASKER CERTIFIED SOLUTION
kevinhsieh
I've had good results with Western Digital enterprise drives, which came with many Dell devices.
You will get better performance with SAS drives than with SATA drives.
sglee (ASKER)

"How many VMs do you plan to replicate?" --> A few VMs: a domain controller, an app server (where user files & folders reside), and a VPN server.

I have not done Hyper-V replication previously, so I don't know how it is going to work. I know 15K SAS or SSD drives are much faster than 7.2K SATA, but I was considering SATA because of the low cost and ample space.
The total disk space needed by these VMs is less than 1.5 TB, so I guess I could get two 2 TB SSDs instead.
I like SAS drives, except they don't provide a lot of space.

"a pair of 7.2K drives in RAID 1 is suitable for running about 1 low usage VM such as a DC" -->
If you are speaking from your experience in replicating VMs, I could go with qty 4 of  2 TB SSD drives on RAID 10 and a couple of hot spares.
My assumption is that you need to be able to actually run the VMs that you replicate.

10K SAS drives are available up to 2.4 TB last I checked, but SSD is actually cheaper per GB. The Micron ION 5200 QLC drives are very cost-effective and should provide enough performance and capacity. Use two in RAID 1, or three or more in RAID 5. In January I am going to buy 24 7.68 TB drives in RAID 6 for my primary backup server. It will cost less than the 48 2.4 TB drives I bought the year before, with better read and write performance and more capacity.
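If it helps with the capacity math, here is a rough sketch (in Python) comparing usable space and cost per usable TB for a few of the layouts discussed in this thread. The drive prices are placeholder assumptions, not quotes, so plug in whatever you are actually seeing.

```python
# Rough usable-capacity and cost-per-usable-TB comparison for a few of the
# layouts discussed above. Prices are placeholder assumptions, not quotes.

def usable_tb(raid, drives, size_tb):
    """Approximate usable capacity for common RAID levels (ignores formatting overhead)."""
    if raid == "RAID1":
        return size_tb * drives / 2        # mirrored pair(s)
    if raid == "RAID5":
        return size_tb * (drives - 1)      # one drive's worth of parity
    if raid == "RAID6":
        return size_tb * (drives - 2)      # two drives' worth of parity
    if raid == "RAID10":
        return size_tb * drives / 2        # striped mirrors
    raise ValueError(raid)

# (layout, drive count, drive size in TB, assumed price per drive in USD)
options = [
    ("RAID1",  2,  7.68, 700),   # 2x 7.68 TB QLC SSD (assumed price)
    ("RAID5",  3,  7.68, 700),   # 3x 7.68 TB QLC SSD
    ("RAID6", 24,  7.68, 700),   # backup-server example from the post above
    ("RAID10", 4,  2.00, 250),   # 4x 2 TB SSD, as the asker suggested
    ("RAID1",  2, 16.00, 400),   # 2x 16 TB Exos HDD (~$400 each per the thread)
]

for raid, n, size, price in options:
    cap = usable_tb(raid, n, size)
    total = n * price
    print(f"{raid:7} {n:2d} x {size:5.2f} TB -> {cap:6.1f} TB usable, "
          f"${total:6d} total, ${total / cap:5.0f} per usable TB")
```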
The cheapest option will be something like an array of 16 TB Exos drives, at around $400 USD each on Amazon.

The 18 TB and 20 TB Exos drives are due out next June, so you might prefer to wait until then if you're building out a large RAID array.
@David do you think OP would actually be able to run at least 3 VMs with acceptable performance on a pair of EXOS 7.2K drives?
With one of the 7E8s, yes:
Exos 7E8 8TB 512e SAS SED        ST8000NM006A
Exos 7E8 8TB 512e SATA           ST8000NM000A
Exos 7E8 8TB 4Kn SATA SED-FIPS   ST8000NM009A

Three drives in RAID 1 with a spare, or RAID 5. What I would do is put the OS on an SSD (512 GB/1 TB).
Change the Hyper-V settings to store the VMs somewhere other than the system drive.

It really depends upon WHAT the VMs actually do. DCs are low load; WSUS is low load and doesn't need much performance.
Session-based RDS with up to 50 office workers should work fine.
- Session-based RDS with UPD: 40 office workers
- Common virtual machine: 20 office workers
- Personal VM: 20 office workers
At roughly 215 MB/s per disk, that's about 215 MB/s write and 430 MB/s read maximum performance from the RAID 1 pair.
Since you have specified SATA drives, I'm assuming that you have a budget that does not include SAS or other high-performance drives.

WD enterprise-grade RE3 and RE4 series drives are widely available off-lease on fleabay at reasonable prices.  I've never had one of these fail.  (But I have been stabbed in the back by Seagate repeatedly.)

https://www.ebay.com/sch/i.html?_from=R40&_trksid=m570.l1313&_nkw=wd+%28re3%2Cre4%29+-250gb+-500gb+-1gb&_sacat=56083&LH_TitleDesc=0&_osacat=56083&_odkw=wd+%28re3%2Cre4%29

When buying used, ask the seller to check the start-stop count and the runtime on several of the drives using SMART. In an enterprise-class drive, the start-stop count is more important than runtime, and a low start-stop count will help confirm whether the drives were actually used in server farms.
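If you want to script that check, here is a minimal sketch assuming smartmontools is installed and the drives are directly visible to the OS; drives behind a RAID controller may need an extra "-d" device-type option, and attribute names vary by vendor.

```python
# Pull the start/stop count and power-on hours out of smartctl's attribute table.
import subprocess
import sys

WANTED = ("Start_Stop_Count", "Power_On_Hours")

def smart_summary(device):
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,
    ).stdout
    values = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows look like: ID# ATTRIBUTE_NAME ... RAW_VALUE (last column)
        if len(fields) >= 10 and fields[1] in WANTED:
            values[fields[1]] = fields[-1]
    return values

if __name__ == "__main__":
    for dev in sys.argv[1:] or ["/dev/sda"]:   # pass the drives to check as arguments
        print(dev, smart_summary(dev))
```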
I realized that there are two Davids on this thread.

I killed a server with a pair of SATA drives running just two instances: the host had the DC, file server, and Hyper-V roles, with an RD Session Host as a VM. With more than 2 or 3 RDP users, logins started to fail. The issue is latency for IOPS, not throughput, so I don't see how a new high-density drive at the same RPM can deliver significantly more IOPS with better latency.

Far better, IMHO, to get "slow" SSDs or 6-12 HDDs in RAID 10.
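To put rough numbers on the latency-versus-throughput point, here is a back-of-the-envelope sketch; the average seek time is a typical-figure assumption rather than a spec, and the sequential rate is the 215 MB/s quoted earlier in the thread.

```python
# Why a 7.2K RAID 1 pair looks fine on sequential MB/s but has few random IOPS.
rpm = 7200
avg_seek_ms = 8.5                          # assumed average seek for a 7.2K nearline drive
rotational_latency_ms = 60_000 / rpm / 2   # half a revolution on average (~4.17 ms)
service_time_ms = avg_seek_ms + rotational_latency_ms

iops_per_disk = 1000 / service_time_ms
seq_mb_s_per_disk = 215                    # figure quoted earlier in the thread

# RAID 1: reads can be split across both mirrors, every write hits both disks.
print(f"random IOPS per disk : ~{iops_per_disk:.0f}")
print(f"RAID1 read IOPS      : ~{2 * iops_per_disk:.0f}")
print(f"RAID1 write IOPS     : ~{iops_per_disk:.0f}")
print(f"RAID1 seq read MB/s  : ~{2 * seq_mb_s_per_disk}")
print(f"RAID1 seq write MB/s : ~{seq_mb_s_per_disk}")
```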
You asked specifically...

1) "I am going to set up the Hyper-V OS on two SSD drives in RAID 1, and I am looking for enterprise-grade SATA 7200 RPM hard drives from either Seagate or WD. Preferred capacity is 8, 12, 14, or 16 TB in RAID 1."

Either brand works well.

2) "What make/model of hard drive has the best track record? What make/model would you recommend?"

I've been running both brands for 20+ years (various sizes and models). Only one disk failure (a WD 6 TB drive) over this 20+ year period.

So pick whatever works best for you.

I've never heard of a 16 TB SSD drive. Getting this capacity from SSDs will be far more expensive than using mechanical drives.

3) You've added a new question about IOPS, which requires context.

For example, IOPS alone has no meaning.

Rarely will you be concerned with pure IOPS, as that skips over the entire OS or SQL buffering system.

Suggestion: to explore IOPS, open a new question about this, with a description of the actual code involved, as IOPS considerations will be different for SQL, web servers, and IMAP mail servers. Every type of application code has specific access patterns involving both IOPS and core memory buffering, so if core memory buffering is of sufficient size and correctly tuned, IOPS becomes meaningless, except for very specific types of disk access patterns that involve continuous heavy writing, like analog-to-digital conversion and recording to disk.
You asked, "do you think OP would actually be able to run at least 3 VMs with acceptable performance on a pair of EXOS 7.2K drives?"

My rule of thumb is never guess, always know.

So I'd set up the RAID system and then do some testing on each VM, running simulated loads at the application level.

In other words, rather than testing raw I/O (a poor test), I'd run something like sysbench to simulate SQL load, or, better, write some code that actually duplicates the types of SQL queries generally produced on each given VM.

To me, testing always provides better guessing than... well... guessing, or using artificial load testing...

As stated above, RAID 1 will provide the best performance, while RAID 10 provides good performance with far more flexibility.
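As a concrete sketch of the "simulate the load at the application level" idea: the snippet below issues a mix of SQL-style reads and writes against a SQLite file placed on the volume under test and reports latency figures. The path, table layout, and 80/20 read/write mix are illustrative assumptions; substitute queries that resemble what the real VMs actually run.

```python
import random
import sqlite3
import statistics
import time

DB_PATH = "D:/test/workload.db"   # hypothetical path on the array being evaluated
ROWS = 100_000                    # assumed working-set size
OPS = 2_000                       # number of simulated operations

conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("DELETE FROM t")
conn.executemany(
    "INSERT INTO t (id, payload) VALUES (?, ?)",
    ((i, "x" * 200) for i in range(ROWS)),
)
conn.commit()

read_lat, write_lat = [], []
for _ in range(OPS):
    key = random.randrange(ROWS)
    start = time.perf_counter()
    if random.random() < 0.8:                      # assumed 80/20 read/write mix
        conn.execute("SELECT payload FROM t WHERE id = ?", (key,)).fetchone()
        read_lat.append(time.perf_counter() - start)
    else:
        conn.execute("UPDATE t SET payload = ? WHERE id = ?", ("y" * 200, key))
        conn.commit()                              # commit per write so it hits the disk
        write_lat.append(time.perf_counter() - start)

for name, lat in (("read", read_lat), ("write", write_lat)):
    if not lat:
        continue
    lat_ms = sorted(x * 1000 for x in lat)
    print(f"{name}: median {statistics.median(lat_ms):.2f} ms, "
          f"p95 {lat_ms[int(len(lat_ms) * 0.95)]:.2f} ms")
```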
RAID 1 on a pair of 7.2K drives should provide the worst performance of any available drive configuration.

It isn't about being able to run each VM individually; what matters is running all the VMs concurrently, and the impact of the IO blender effect on spinning media.
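A quick sketch of that IO blender arithmetic: the per-VM IOPS figures below are assumptions rather than measurements of the asker's workload, but they show how individually modest VMs can oversubscribe a two-drive 7.2K mirror once their requests interleave into random I/O.

```python
# Blended random-I/O demand from several VMs versus what a 7.2K RAID 1 pair supplies.
vm_iops_demand = {           # assumed steady-state random IOPS per VM
    "domain controller": 30,
    "app/file server":   120,
    "VPN server":        20,
}

disk_iops = 80               # ~7.2K drive, from the seek + rotational latency estimate
read_fraction = 0.7          # assumed read/write split of the blended stream

# RAID 1: reads can be served by either mirror, every write lands on both disks.
supply = read_fraction * (2 * disk_iops) + (1 - read_fraction) * disk_iops
demand = sum(vm_iops_demand.values())

print(f"blended demand : ~{demand} IOPS")
print(f"RAID1 supply   : ~{supply:.0f} IOPS")
print("oversubscribed" if demand > supply else "within budget")
```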