NVMe drives in a low-end server

My clients are all one-server companies with fewer than 30 users. I'm looking at suggesting a new server for one of them and wanted to gather opinions on my choice of drives.

The server will have one 8-core CPU and will run Windows Server 2016 as a host.  A Dell PowerEdge T430 is an example of what I may recommend.  There will be two VMs, one as the DC and the other for all the rest (file and print sharing, applications, etc.).

My question has to do with the storage arrangement. In the past I'd have used a hardware RAID controller with some RAM on it and four SAS drives configured as two RAID 1 arrays. The first would be for the Host and the DC VM, while the second would be entirely for the Application VM.

My present thoughts are quite different.  I'm leaning toward two 512G NVMe SSDs (Samsung 960 Pro) for the OS and DC VM and a pair of 960G NVMe SSDs for the Application VM.  Each pair would be mirrored using Windows mirroring, not a RAID controller.  The SSDs would be physically connected to the system by four PCIe cards.

I recognize that I lose the battery backup capabilities of a good hardware RAID controller.  I'm not so concerned about that as the server itself will have a good battery backup.  The minimal data loss in case of a power and battery backup failure would be tolerable.

The key here is finding a good balance between cost and performance.  This arrangement seems to hit that mark.

I'm also wondering about whether the additional cost for the first pair of NVMe drives is worth it.  As an alternative, I could use two SAS drives for the Host (using Windows to mirror) and then decide whether to put the DC on them or on the NVMe pair.  I recognize that this would significantly increase boot time, but that shouldn't be a big issue.

I'd appreciate any comments regarding whether or not you feel this is a good approach.
Cliff Galiher commented:
Honestly, while every situation is different, since you seem to be generalizing your clients, I'll generalize my response. NVMe is overkill for that environment. You won't see significant improvements on small servers like this, but you will be paying significantly more for it.

I'm also a bit concerned about your use of "Windows mirroring." That's a bit vague. Windows has dynamic disks, which can technically be mirrored, but dynamic disks are considered a dead technology and wouldn't really fit your needs anyway.

Then there is Storage Spaces, which is actually a great solution. But it cannot be implemented on the system drive (and thus would also not be protecting the DC VM).

If I were architecting this, I'd make the host drives a RAID 1 disk set. Maybe SSDs if boot speed is a real concern during servicing/maintenance windows; otherwise SAS. Keep them small and use them only for the host. Then make your storage space with the rest of the drives you want: *maybe* NVMe, but probably more traditional SSDs, and maybe even spinning disks. And carve them up with partitions for the VMs instead of separate full arrays. Even if you forego Storage Spaces for a traditional RAID setup, I'd do similar: RAID 1 for the host OS and RAID 10 with the rest, partitioned, for the guests.
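If you go the Storage Spaces route described above, the pool/mirror setup is only a few cmdlets. Here is a minimal PowerShell sketch; the friendly names ("VMPool", "VMStore") and volume label are placeholders, and you should verify which disks `-CanPool` actually returns before committing anything:

```shell
# PowerShell, run elevated on the Hyper-V host.
# List every disk eligible for pooling -- verify this list first!
$disks = Get-PhysicalDisk -CanPool $true

# Create a pool from those disks.
New-StoragePool -FriendlyName "VMPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $disks

# Carve a two-way mirror virtual disk out of the pool.
New-VirtualDisk -StoragePoolFriendlyName "VMPool" -FriendlyName "VMStore" `
    -ResiliencySettingName Mirror -UseMaximumSize

# Initialize, partition, and format it for the VM files.
Get-VirtualDisk -FriendlyName "VMStore" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "VMs"
```

A two-way mirror needs at least two disks in the pool; with four, Storage Spaces will stripe across mirrored pairs, which roughly matches the RAID 10 layout suggested above.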


Robert (System Admin) commented:
Depending on which hypervisor you're planning to use, the answer may be different.
If you're using VMware, I would recommend a different path.
For example, most servers now come with an internal SD card slot. With VMware I usually load the host onto that and then use the disk storage for my datastore.
Based on my understanding, VMware runs from memory, so the only time it really uses the SD card is during boot-up.
That said, I'm sure some will disagree and say that SD is not reliable enough, but I have never had issues with it.

Now if you're running Hyper-V, that is another story, as it does actually page, etc., to the disk, so I don't believe booting from SD is even a supported method from Microsoft.

As for the datastore, I would recommend RAID 5 at minimum if it is a production server.
CompProbSolv (Author) commented:
Sorry.... I should have been more complete in my post.

I will be using Hyper-V on the host.

My comment about "Windows mirroring" meant that I was going to use dynamic disks and have Windows mirror them. I'll look further into Storage Spaces instead for the VM drive pair. That may fit well with using two less-expensive drives for the Host only and then two NVMe drives to hold the VMs.
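If the VMs do end up on a dedicated mirrored volume, it's also worth pointing Hyper-V's default paths there so new guests land on the fast disks automatically. A small sketch; the `V:` drive letter is a placeholder for whatever letter the mirrored volume is assigned:

```shell
# PowerShell on the Hyper-V host. "V:" is a placeholder drive letter
# for the mirrored NVMe volume holding the guests.
Set-VMHost -VirtualHardDiskPath "V:\Hyper-V\Virtual Hard Disks" `
           -VirtualMachinePath  "V:\Hyper-V"

# Confirm the change took effect.
Get-VMHost | Select-Object VirtualHardDiskPath, VirtualMachinePath
```

This only changes the defaults for newly created VMs; existing guests keep their current storage paths unless you move them.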

As far as RAID 10 goes, it seems to me that two drives in RAID 1 will be more than enough capacity.  Is there a good reason to use 10 instead?

As far as RAID 5 goes, I've not been a fan of that for quite a while.  To do it well requires an expensive RAID controller, which will likely result in a higher cost overall.

Cliff Galiher commented:
With only two drives, RAID 1 is fine. But you were initially talking about historically doing four drives and having workloads on both of two RAID 1 sets. For that, I'd rather do RAID 10 than two RAID 1 sets. And I still don't like running workloads off the system drive.
Philip Elder (Technical Architect - HA/Compute/Storage) commented:
Intel DC S3520 SATA SSDs are more than enough in both performance and endurance for most workloads, and they are quite inexpensive. Connect them to a hardware RAID controller with 1GB-4GB of flash-backed cache and configure RAID 6. Done.

For entry level server setups that's where we'd start when all-flash is required.

NVMe is appropriate as a cache for hyper-converged Storage Spaces Direct (S2D), as a local cache for SQL Temp, as local high-performance storage for rendering farms, and the like. It is not appropriate, IMNSHO, for customer-facing production virtualization environments.

My EE article: Some Hyper-V Hardware and Software Best Practices.
CompProbSolv (Author) commented:
Thanks to all for the input.

@Philip Elder: what is it about NVMe drives that causes you to consider them inappropriate in this case?
Philip Elder (Technical Architect - HA/Compute/Storage) commented:
Two things:
1: Cost versus benefit: A simple setup using SATA SSDs in RAID 6 would be more than enough and a lot less expensive.
2: Redundancy: Where is it in NVMe? There are new RAID controllers that can do two or four NVMe drives. What then?
CompProbSolv (Author) commented:
Thanks for the input; I will consider it carefully when configuring new systems.