CompProbSolv asked:
NVMe drives in low-end server

My clients are all one-server companies with fewer than 30 users.  I'm looking at suggesting a new server for one of them and wanted to gather opinions on my choice of drives.

The server will have one 8-core CPU and will run Windows Server 2016 as a host.  A Dell PowerEdge T430 is an example of what I may recommend.  There will be two VMs, one as the DC and the other for all the rest (file and print sharing, applications, etc.).

My question has to do with the storage arrangement.  In the past I'd have used a hardware RAID controller with onboard cache RAM and four SAS drives configured as two RAID 1 arrays.  The first array would be for the Host and the DC VM, while the second would be entirely for the Application VM.

My present thoughts are quite different.  I'm leaning toward two 512 GB NVMe SSDs (Samsung 960 Pro) for the OS and DC VM and a pair of 960 GB NVMe SSDs for the Application VM.  Each pair would be mirrored using Windows mirroring, not a RAID controller.  The SSDs would be physically connected to the system by four PCIe cards.

I recognize that I lose the battery-backed cache of a good hardware RAID controller.  I'm not so concerned about that, as the server itself will be on a good battery backup (UPS).  The minimal data loss in case of a simultaneous power and UPS failure would be tolerable.

The key here is finding a good balance between cost and performance.  This arrangement seems to hit that mark.

I'm also wondering whether the additional cost of the first pair of NVMe drives is worth it.  As an alternative, I could use two SAS drives for the Host (mirrored in Windows) and then decide whether to put the DC on them or on the NVMe pair.  I recognize that this would significantly increase boot time, but that shouldn't be a big issue.

I'd appreciate any comments regarding whether or not you feel this is a good approach.
ASKER CERTIFIED SOLUTION
Cliff Galiher
Depending on which hypervisor you're planning to use, the answer may be different.
If you're using VMware, I would recommend a different path.
For example, most servers now come with an internal SD card slot. With VMware I usually load the Host onto that, then use the disk storage for my datastore.
Based on my understanding, VMware runs from memory, so the only time it really uses the SD card is during boot-up.
That said, I am sure some will disagree and say that SD is not reliable enough, but I have never had issues with it.

Now if you're running Hyper-V, that is another story, as it does actually page to disk, so I don't believe booting from SD is even a supported method from Microsoft.

As for the data store, I would recommend RAID 5 at minimum if it is a production server.
CompProbSolv (ASKER)

Sorry.... I should have been more complete in my post.

I will be using Hyper-V on the host.

My comment about "Windows mirroring" meant that I was going to use Dynamic Disks and have Windows mirror them.  I'll look further into Storage Spaces instead for the VM drive pair.  That may fit well with using two less-expensive drives for the Host only and two NVMe drives to hold the VMs.
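For the Storage Spaces route, something along these lines is what I have in mind for the two-way mirror. This is just a rough sketch: the pool name, volume label, drive letter, and ReFS choice are my own placeholders and assumptions, and it assumes the two NVMe drives show up as poolable.

# Grab the two NVMe SSDs that aren't in any pool yet
$nvme = Get-PhysicalDisk -CanPool $true | Where-Object BusType -eq 'NVMe'

# Build a pool from them on the local Windows Storage subsystem
New-StoragePool -FriendlyName 'VMPool' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $nvme

# Carve out a two-way mirrored virtual disk using all available space
New-VirtualDisk -StoragePoolFriendlyName 'VMPool' -FriendlyName 'VMStore' `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 -UseMaximumSize

# Initialize, partition, and format it for the VM files (V: is just a placeholder)
Get-VirtualDisk -FriendlyName 'VMStore' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter V -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel 'VMs'

NTFS would work just as well here if ReFS turns out to be a concern.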

As far as RAID 10 goes, it seems to me that two drives in RAID 1 will be more than enough capacity.  Is there a good reason to use 10 instead?

As far as RAID 5 goes, I've not been a fan of that for quite a while.  To do it well requires an expensive RAID controller, which will likely result in a higher cost overall.
With only two drives, RAID 1 is fine. But you were initially talking about how you've historically done four drives with workloads on both RAID 1 sets; for that I'd rather do RAID 10 than two separate RAID 1 sets.  And I still don't like running workloads off the system drive.
Intel DC S3520 SATA SSDs are more than enough in both performance and endurance for most workloads, and they are quite inexpensive. Connect them to a hardware RAID controller with 1 GB to 4 GB of flash-backed cache and run RAID 6. Done.

For entry-level server setups, that's where we'd start when all-flash is required.

NVMe is appropriate as a cache tier for hyper-converged Storage Spaces Direct (S2D), as a local cache for SQL TempDB, as local high-performance storage for rendering farms, and the like. It is not appropriate, IMNSHO, for customer-facing production virtualization environments.
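To illustrate the S2D cache point, here is a rough sketch, assuming a cluster whose nodes mix NVMe with SATA SSD/HDD and run from one cluster node; nothing here is specific to the hardware being discussed.

# Enabling S2D claims the fastest bus type (NVMe here) as the cache tier automatically;
# the cmdlet prompts for confirmation before it changes anything.
Enable-ClusterStorageSpacesDirect

# Cache devices show up with Usage = 'Journal'
Get-PhysicalDisk | Where-Object Usage -eq 'Journal' |
    Select-Object FriendlyName, BusType, MediaType, Usage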

My EE article: Some Hyper-V Hardware and Software Best Practices.
Thanks to all for the input.

@Phillip Elder: what is it about NVMe drives that causes you to consider them inappropriate in this case?
Two things:
1. Cost versus benefit: a simple setup using SATA SSDs in RAID 6 would be more than enough and a lot less expensive.
2. Redundancy: where is it with NVMe? There are new RAID controllers that can handle two or four NVMe drives, but beyond that, what then?
Thanks for the input; I will consider it carefully when configuring new systems.