sglee

asked on

Type of Hard Drive for Virtual Machines in Hyper-V Server

Hi,
I am trying to determine which type of hard drive would be the best choice for a Hyper-V server with 4-5 VMs (Domain Controller, Terminal Server, File/Print Server), one of which will be running Microsoft SQL Server 2017 with a 30GB database used by about 10 users. I have not decided on the make of the server, but I am considering an HP ProLiant DL380 or a SuperMicro 6029P-TRT.

There are three makes/models (if I choose to go with the SuperMicro server, I will get the equivalent Seagate models):
1. HP Part#: 781518-B21 - Enterprise 1.2TB - hot-swap - 2.5" SAS 12Gb/s - 10000 rpm
2. HP Part#: 759212-B21 - Enterprise 600GB - hot-swap - 2.5" SAS 12Gb/s - 15000 rpm
3. HP Part#: 817051-001 - 1.92TB SAS 12G RI SFF SSD

I was told previously that you don't want to use SSDs if you are running SQL Server, but I don't know if that still holds true.
Tom Cieslik

I built my first Hyper-V host server on Intel SSD drives, and in one year I had to replace 6 disks.
Then I built my second Hyper-V host server on SAS 15K 900GB ST900MP0006 disks, and all has been working OK for a year now without any problems, and it is so much faster than the RAID built on SSD drives.


I don't know anything about the disks you want to use, like:
HP Part#: 759212-B21 - Enterprise 600GB - hot-swap - 2.5" SAS 12Gb/s - 15000 rpm

but I think this would be the best choice, or use any other SAS 15K disks.
sglee

ASKER

Tom,

It is good to know that we can now buy 900GB SAS 15K 12Gbps hard drives. The last time I built a server with 15K 12Gbps SAS drives, the maximum size was 600GB. And I agree with your experience with SSDs. I built several Hyper-V servers with Micron SSDs on a MegaRAID controller, and the speed I am getting in the VMs is truly disappointing. The only advantage is that the SSD-based virtual machines boot up very quickly.
We have a wide variety of all-flash setups in 1U and 2U form, mostly Intel Server Systems running Intel RAID with an NV cache module.

Other than bad firmware on the early S4610 series SSDs, we've had nary an issue.

SATA SSDs will be more than enough to meet the needs of a busy SQL virtual machine. Depending on the RAID controller, you could set up a RAID 1 SSD pair and run a dedicated VHDX set for the database, temp files, and log files.
SQL is something that benefits greatly from SSDs over HDDs; just make sure you pick ones with the right endurance. You will probably get away with mixed-use, but you might need write-intensive.
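To put rough numbers on that endurance question, here is a minimal sketch; the workload figures are assumptions for illustration, not measurements from this environment:

```python
# Back-of-the-envelope SSD endurance check: does a mixed-use drive cover
# the write load? All workload numbers here are illustrative assumptions.

def required_dwpd(daily_writes_gb, capacity_gb, write_amplification=2.0):
    """Drive Writes Per Day the workload demands, including write amplification."""
    return daily_writes_gb * write_amplification / capacity_gb

# Assumption: a 30GB database with ~10 users might rewrite a few multiples
# of its size per day once logs, tempdb and backups are counted.
daily_writes_gb = 150    # assumed ~5x the 30GB database per day
capacity_gb = 1920       # e.g. a 1.92TB enterprise SATA SSD

needed = required_dwpd(daily_writes_gb, capacity_gb)
print(f"workload needs ~{needed:.3f} DWPD")

# Rough endurance classes (vendor-dependent, for orientation only):
# read-intensive ~0.5-1 DWPD, mixed-use ~3 DWPD, write-intensive ~10 DWPD
print("mixed-use is ample" if needed < 3 else "consider write-intensive")
```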
Re storage: how much total space do you need? Commonly, 600GB drives might be too small these days for an eight-drive datastore for a Hyper-V host of 4-5 systems:
1-2 domain controllers: ~140GB max per system
1-2 terminal servers: ~200GB
1-2 file/print servers: depends on how much data they hold in DFS...
1 SQL server

Are you installing Hyper-V Core as the host? Running off an SD card, NVMe, or USB stick?
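One way to sanity-check the sizing above is to total the VM disk budgets against the usable capacity of a candidate array. A minimal sketch; the drive counts and sizes are assumptions, not a recommendation:

```python
# Usable capacity of common RAID levels vs. the VM disk budget.
# Drive counts and sizes below are examples only.

def usable_gb(level, n_drives, drive_gb):
    """Approximate usable capacity; ignores formatting overhead."""
    if level == "raid1":
        return drive_gb                    # a mirror exposes one drive
    if level == "raid10":
        return n_drives * drive_gb / 2     # half the drives hold mirror copies
    if level == "raid5":
        return (n_drives - 1) * drive_gb   # one drive's worth of parity
    if level == "raid6":
        return (n_drives - 2) * drive_gb   # two drives' worth of parity
    raise ValueError(level)

vms = {"DC": 50, "SQL": 300, "TS": 100, "App": 500}  # GB, per the asker
need = sum(vms.values())  # 950GB before growth and snapshots

for level, n, size in (("raid10", 8, 600), ("raid6", 8, 600), ("raid1", 2, 1920)):
    cap = usable_gb(level, n, size)
    print(f"{level} of {n} x {size}GB -> {cap:.0f}GB usable, "
          f"{'enough' if cap >= need else 'too small'} for {need}GB of VMs")
```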

ASKER

"how much total space do you need?"  --->
(1) Domain Controller: 50GB
(2) SQL Server: 300GB
(3) Terminal Server: 100GB
(4) App Server: 500GB

I would like to install three Micron 5200 1.9TB SSDs and put them in RAID 1, with one as a hot spare, on an LSI 3108 12Gb/s RAID controller, and have that handle all the VMs.
However, my experience with these Micron SSDs on the LSI 3108 has not been all that impressive. No failures so far, and yes, they boot up really fast. But when I run applications within each VM, they don't seem much more responsive than SAS 15K 12Gb/s drives.
If I go with SAS, then I have to get a lot of those and put them in RAID 10.
The Micron 5200 ECO and PRO are both read-intensive; you should be using the 5200 MAX at the very least. You should also put them in a PC first and use Flex Capacity to short-stroke them for better write performance (you can't use the Flex Capacity software through a RAID controller, but you can still put them on a RAID controller afterwards).
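For context on the short-stroking suggestion: shrinking the exposed capacity enlarges the spare area the controller can use for garbage collection, which is what helps steady-state writes. A sketch of the arithmetic; the raw NAND size is an assumption for illustration:

```python
# Over-provisioning arithmetic behind short-stroking an SSD. The raw NAND
# size for a "1.92TB-class" drive is an assumption for illustration.

def overprovision_pct(raw_gb, exposed_gb):
    """Spare area as a percentage of the exposed (user-visible) capacity."""
    return (raw_gb - exposed_gb) / exposed_gb * 100

raw_gb = 2048  # assumed raw NAND behind a 1.92TB drive
for exposed_gb in (1920, 1600, 1200):
    print(f"expose {exposed_gb}GB -> {overprovision_pct(raw_gb, exposed_gb):.0f}% over-provisioning")
```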
IMHO, distribute the I/O across multiple SAS HDDs.
With a single 1.9TB RAID 1 volume, all traffic hits the same device.

I would likely use four 1.2TB drives in RAID 6 or RAID 10, with a fifth kept on hand (a hot spare still spins and endures wear).

What, and where, is your backup process?

ASKER

I back up daily using NAKIVO backup software on the Hyper-V server, and also do a Windows Server Backup weekly.
Read-intensive drives will have low write IOPS and throughput. That would hurt big time.

The S4610 series SSDs are mixed-use. They perform adequately for most needs, running under 15K IOPS across eight SSDs. That meets most needs. Period.
Avatar of sglee

ASKER

Are you referring to this model?
D3-S4610 Series SSDSC2KG019T801 1.92TB
The HPE one listed in the question is also read-intensive; that'll end up with things booting fast but running relatively slowly.

ASKER

Thanks for the recommendations:
(1) Seagate Exos 15E900 ST900MP0006 - hard drive - 900 GB - SAS 12Gb/s
(2) Intel 1.92TB D3-S4610 Internal SSD

With above, I have two questions:
(1) I used to hear that SSDs are not good for SQL Server. Is that no longer the case?
(2) Regardless of which type of HD I go with, would you recommend that I set up the Hyper-V OS on RAID 1 and create the VMs on RAID 10?

If the Intel 1.92TB D3-S4610 Internal SSD is reliable and good for SQL Server, then I would like to go with it, because I can just set up RAID 1 (which will give 1.9TB of disk space) and be done with it. With 900GB SAS HDs, I would have to create a RAID 10.
Yes, RAID 1 for the OS.
If you are setting up Hyper-V Core and your hardware supports mirrored SD cards, consider using the SDs as the boot device; you would need larger-capacity SDs for Hyper-V Core.

Concentrating all the VMs' effective I/O on a single mirrored SSD is, IMHO, unwise.
IMHO, use a pair of small SSDs for the OS in RAID 1 and then have multiple HDDs.
Whoever told you SSDs weren't good for SQL likely used consumer-grade ones rather than enterprise ones.
To andyalder's point:
https://www.seagate.com/enterprise-storage/nytro-drives/nytro-sas-ssd/

Though HP might prefer/require their own...

If cost is a factor... go with a mixture of HDDs and SSDs... but definitely not with a single pair of anything, where you would run into an I/O backlog.
Our rule of thumb is same make/model and size, set up in a RAID 6 array on a high-performance RAID controller with 2GB of non-volatile cache RAM. We deploy all SFF 2.5" bays in an eight-bay setting.

Two logical disks set up in the array:
 * LD0: Host OS @ 95GB (bootable)
 * LD1: Data (the balance of the GB/TB)

SATA SSDs will yield about 150K IOPS depending on how the storage stack is set up. Throughput can easily hit GiB/second, again depending on how the storage stack is set up.

There is really no reason to use SAS unless the IOPS need to push upwards of 400K or more.
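Those IOPS claims can be rough-checked with the usual RAID write-penalty arithmetic. A sketch under assumed per-drive figures (ballpark only; real controllers and caches shift these numbers):

```python
# Effective-IOPS estimate for an array using the classic RAID write-penalty
# model. Per-drive IOPS figures are ballpark assumptions.

WRITE_PENALTY = {"raid1": 2, "raid10": 2, "raid5": 4, "raid6": 6}

def effective_iops(n_drives, per_drive_iops, level, read_ratio=0.7):
    """Blend reads (no penalty) with writes (penalized) across the array."""
    raw = n_drives * per_drive_iops
    penalty = WRITE_PENALTY[level]
    return raw / (read_ratio + (1 - read_ratio) * penalty)

# Assumptions: ~75K steady-state IOPS for an enterprise SATA SSD,
# ~200 IOPS for a 15K SAS spindle.
print(f"8 x SATA SSD, RAID 6 : {effective_iops(8, 75_000, 'raid6'):>10,.0f} IOPS")
print(f"8 x 15K SAS,  RAID 10: {effective_iops(8, 200, 'raid10'):>10,.0f} IOPS")
```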
Use nfc drives if possible

Note that HP ProLiant hardware RAID cards are crappy at best and should not be used with the number of drives they can handle.

SSDs can be used if they are SLC. MLC drives will definitely fail early.

I would concur that a DB that actually benefits from using SSDs is probably dying anyway, and more RAM is probably a much better use of the expense.
I think you may be out of date in suggesting SLC; look at the HPE SSD QuickSpecs, for example: not an SLC to be seen, all MLC or TLC. Admittedly, they do normally have SLC buffers.
SLC cannot be used for actual data storage at huge volumes because of the cost.

I believe an SLC cache in front of regular SATA drives to be more performant, less costly, and more robust than a bunch of SSDs, but the performance may or may not be better depending on the use case.

For a DB, it probably should not matter much unless your indexes do not fit in RAM or you regularly issue queries that do full table scans on huge tables... in which case your DB is probably dying anyway, and your SSDs will most likely also die young.
Any SLC cache is inside the MLC or TLC drive, not separate, AFAIK nobody makes SLC except for industrial use any more.
Not sure about the point suggesting SLC SSDs, since they have not been in production for at least half a decade or more. The only place to get them would be an online auction.

Our current standalone Hyper-V tiers are:
 * Entry: 8x or 16x 10K SAS Spindles RAID 6 with NV Cache
 * Entry All-Flash: 8x+ Intel S4610 RAID 6 (cost difference is getting reasonable ... we're deploying less and less spindles)
 * Mid-Performance: 2x NVMe RAID 1 + 8x+ Intel S4610 RAID 6
 * High Performance: All NVMe (requires some hoops to jump through, though)

The last option is tough to build in standalone settings. Our go-to for all-flash with NVMe is Azure Stack HCI settings.
Try searching for "SSD disk" on Google and you will find about three pages of vendors selling SLC.

What HP sells is not the market. Actually, they sell crappy hardware at unreasonable prices, and have been doing so since the Pentium 3 days.
You won't find any SLC for PC/server; you will find industrial SSDs such as https://www.atpinc.com/products/industrial-ssds-2.5ssd (who wants 77 DWPD?)

You'll also find people listing WD Green as SLC whereas they are really TLC with an SLC buffer.
Dashcams probably have SLC in them, as they need a high DWPD, being low-capacity and recording in a loop.
Here is our SATA SSD benchmark. It is the Intel Ark site with a set of filters for enterprise SSDs, thus with Power Loss Protection, which is absolutely critical; this is where we start for all of our flash-based solutions:

Intel Ark Site: SATA Power Loss Protection SSDs
As an FYI:
 * Micron 5210 ION SATA: Low I/O and Throughput SATA SSDs are extremely inexpensive
 * Micron 9300 PRO NVMe U.2 (MTFDHAL15T3TDP-1AT1ZABYY): 15.36TB of NVMe goodness at an amazing price

The new Micron 7300 series 7mm NVMe U.2 are of interest to us in certain settings where drive height is a consideration.

The 5300 PRO SATA is something to watch for.

Micron is currently undercutting all flash storage vendors by huge margins. So much so that our Intel benchmarks above are becoming just that: benchmarks, with Micron product on the way in the new year to bench before bringing it into our solution sets.
No matter what kind of hard drives or SSDs you use to build your server (you have a lot of suggestions now, so you can decide), remember to always use a RAID configuration.
If you can't afford to buy a lot of disks for RAID 10, which would be recommended for a server but will require at least 6 drives, build RAID 5 or RAID 1.
Never depend on a single drive running a server!!!

This is very important
RAID 10 requires 4 drives (actually 2, but that would produce a RAID 1), not 6.

RAID 5 with SSDs is hardly ever an option unless you have a very huge read-intensive workload with few writes, and even then there would be better uses for your bucks. If you can afford SSDs, you can afford RAID 10; if you cannot, stick to platters.
RAID 10, depending on the implementation, can be a stripe of mirrored drives.
Depending on the controller in use, it could be 4, 6, 8, etc. drives.
The capacity will be the number of disks divided by 2, i.e. you are sacrificing half of your disks.

Fault tolerance in a RAID 10 is one member in every mirrored set.
>raid5 with SSDs is hardly ever an option...

Then why do manufacturers of all-flash arrays use RAID 5 and RAID 6? See https://www.computerweekly.com/feature/SSD-Raid-101-The-essentials-of-flash-storage-and-Raid for a list of all-flash vendors and the RAID levels they offer (or even force you to use). For those that have only one option, it's always a variant of RAID 6.
Because when that was written, SSDs were still small in capacity, and it reflected a cost-effective way to promote them.

The issue with RAID 5 concerns larger capacities and the risk related to how long a rebuild might take, given that the rebuild puts a higher load on devices of a similar age to the one that failed.

RAID 6, somewhat like RAID 5, is being identified as a risk based on capacity:
i.e., in a RAID 6 of 10TB disks, if the first disk failure goes unnoticed and one only tries to replace it when a second disk fails... the duration of the rebuild could lead to a third similarly-aged device failing...
Yes, obviously when the first fails, it should be replaced... to avoid this scenario...

With cheaper equipment, RAID 10 provides quicker rebuilds with no impact on the performance of the system... but at a higher cost...
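The rebuild-window risk described above is easy to put numbers on. A minimal sketch; the sustained rebuild rates are assumptions, and real controllers throttle rebuilds under production load:

```python
# How long a rebuild window stays open: the longer it is, the greater the
# chance a second (or third) similarly-aged drive fails mid-rebuild.
# Rebuild rates are assumptions; controllers throttle them under load.

def rebuild_hours(drive_tb, rebuild_mb_per_s):
    """Time to reconstruct one failed member: capacity / sustained rate."""
    return drive_tb * 1_000_000 / rebuild_mb_per_s / 3600

for drive_tb, rate in ((10, 50), (10, 150), (1.2, 150)):
    print(f"{drive_tb}TB member at {rate}MB/s -> {rebuild_hours(drive_tb, rate):.1f} hours exposed")
```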
The RAID 10 versus RAID 5 or RAID 6 question was relevant for spindles. Not so much for SATA/SAS/NVMe solid-state drives.
But they still use the RAID levels in that article. They don't use RAID 10; it's too expensive for SSDs.

Of course, if RAID 1 offers enough space it's the cheapest, although it's probably cheaper to use three smaller SSDs.
I understand the discussion and distinction; much depends on use, risk, and need.
The last few comments deal with the RAID-type discussion, or at least that is what I read into them.

There are several considerations here:
to use SSDs only,
to use spindles,
or to use a mix.

SSDs perform faster, depending on the controller and the I/O demand of the use...

The asker has yet to return.
IMHO, a single 2TB SSD in RAID 1, with the asker having experienced SSD failures, is a risky... pattern.
You can have RAID 10 SSDs. Actually, that is what most of my current prod servers use, and we are switching to NVMe drives... still RAID 10. Neither would be my recommendation for this workload, but they have money.

Actual experience shows that:
- a RAID 10 beats the write performance of a RAID 5 with the same number of drives by orders of magnitude.
- a bunch of SSDs with SAS connectors and adequate RAID cards is expensive. If you are going to do RAID 5, the same money is better spent on many more platter disks and possibly a tiny SSD for the log.
- RAID 5 often ends with data loss.

Articles are one thing. Experience is another. Articles are very often written by either technical writers working for a vendor, or self-promoted folks with zero experience.
So you're saying the article is wrong and some of the vendors don't force RAID 6 on you?
The article does not really say that. But some vendors do, and they are wrong.

A 6-SSD RAID 5 array is less performant, provides much less space, requires additional costs such as SAS connectors, and does not outperform 10-12 properly configured SATA drives in a RAID 10 array in most situations. And if you throw in a couple of small SSDs for the write cache, it will beat the RAID 5 array by far.

After lots of experimenting, benchmarking, testing, and real production usage, I have not used or recommended a RAID 5 in years, except for home usage and occasionally very small businesses.

Given the use case, I'd recommend running the SQL server on bare metal using RAID 1 or 10, SSDs or platters (depending on existing hardware, available RAID cards, space to stick drives in, and future storage requirements). The domain controller, print server, etc. could easily run off a cheap desktop machine. I would prefer a decent backup policy over an expensive RAID setup.

Steer away from RAID 5; prefer RAID 10; prefer the pseudo-RAID 10 implementation in ZFS; ... and maybe watch for newer SSD-specific implementations of RAID 4 that are likely to actually do better at some point.

Note that the ZFS implementation is the only one that provides data integrity, allows you to use disks of different sizes, allows transparent resilvering without killing your performance for hours/days... and quite a few other unique features.

ASKER

Is there a 4TB version of the Intel D3-S4610 Series (SSDSC2KG019T801 is 1.92TB)?
There is a 3.84TB version; enterprise SSDs often have less space than exact TB sizes because they need it for background garbage collection, since TRIM cannot be used with parity RAID. They are double the price, since it is the NAND rather than the controller chip that costs the most, so if you are using parity RAID it is cheaper to use several smaller ones.

Two SSDSC2KG038T801 drives would cost $2,230 for 3.84TB, but you could buy 7 x 960GB ones for about the same price, which would give the same capacity using RAID 6 plus a hot spare as you would get with a pair of 3.84TB ones. You would of course need a decent RAID controller, as opposed to a cheap one (or the Intel chipset) that just does mirroring to support RAID 1.
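Working that comparison through explicitly (the price is the figure quoted above; street prices vary):

```python
# The cost/capacity comparison above, worked through. The $2,230 figure is
# the price quoted in the thread; street prices vary.

# Option A: two 3.84TB SSDs in RAID 1 -> usable space is one drive.
usable_a_gb = 3840

# Option B: seven 960GB SSDs -> six in RAID 6 (two drives of parity)
# plus one hot spare.
usable_b_gb = (6 - 2) * 960

print(f"2 x 3.84TB RAID 1          : {usable_a_gb}GB usable, survives 1 failure")
print(f"6 x 960GB RAID 6 + 1 spare : {usable_b_gb}GB usable, survives 2 failures, spare on hand")
# Both come to roughly the same money, but option B has better fault tolerance.
```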

ASKER

@andyalder
Thanks for your insight!
The Intel SSD D3-S4610 (Intel Ark product page) comes in sizes up to 7.68TB.