Advice on RAID Setup


I'm rebuilding a network and have to use existing hardware due to budget limitations, until we can replace it next year.

The server itself is an old Dell PowerEdge R900, specced out with multiple CPUs and ~64GB of RAM. It has a PERC 6/i RAID controller with 5 x 2TB hard disks.

My initial plan was 1 x RAID 10 array with a hot spare. This worked and Windows loaded; however, due to its age and non-UEFI BIOS, it can't see anything larger than 2TB. So I went back to the beginning and tried to create 2 partitions within the Windows Server setup screen, but again it creates one partition and then throws errors when trying to create additional partitions on the 1.6TB of remaining disk space (regardless of what size I choose).

From reading online, it appears you can't create multiple virtual drives on a single RAID 10 array like I've done in the past with HP servers.

This leaves me with two options that I can think of:

1) Create 2 x RAID 1 mirrors and use 1 drive as a hot spare. This will give me 2 x ~2TB drives.

2) Create 1 x RAID 1 mirror and 1 x RAID 5 with no hot spares (I actually have a physical spare drive on hand, so the hot spare isn't critical). This will give me maximum storage.
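For reference, the rough usable capacity of the two layouts works out like this (a quick PowerShell sketch assuming five 2TB disks; the numbers ignore filesystem overhead):

```powershell
# Usable-capacity comparison for the two proposed layouts, assuming 2TB disks.
$diskTB = 2

# Option 1: two 2-disk RAID 1 mirrors + 1 hot spare.
# Each mirror yields one disk's worth of capacity.
$option1 = 2 * $diskTB                    # 4 TB usable

# Option 2: one 2-disk RAID 1 mirror + one 3-disk RAID 5, no hot spare.
# RAID 5 loses one disk's worth of capacity to parity.
$option2 = $diskTB + (3 - 1) * $diskTB    # 6 TB usable

"Option 1: $option1 TB usable; Option 2: $option2 TB usable"
```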

The server will run 1 x 2016 domain controller for about 20 users, 1 x 2008 RDS server for 3-4 users, and I was planning to run 1 x application/file server too.

Open to suggestions and ideas.

Panagiotis Toumpaniaris (System Architect) commented:
If you're on site, meaning physically close to the server, all the time, and have a decent backup configuration, go with option 2.
A hot spare is really for those times when nobody is around to swap disks quickly, or for the really unlucky times when we have multiple drive failures.
Otherwise the first choice is safer, even if you lose 2TB worth of space.
But it surprises me how limited the PERC 6 is for a system that has only been around for some 5-6 years...

EDIT: although the R900 is around 6 years old, the PERC 6 is older, it seems.
Panagiotis Toumpaniaris (System Architect) commented:
By the way,

have you tried partitioning the RAID volume with Dell's virtual disk management in the RAID BIOS?
I mean creating 2 VDs in the same group of PDs? Does it have the same limitations?
Since you're 100% sure that the server doesn't support UEFI (supposedly a subset of R900s do), I would go with option 1. That way you always have a spare at the ready, given your storage limitations.


elemist (Author) commented:
I've had a solid hunt around in the BIOS - it's actually very basic. It's also running the latest BIOS revision (1.2.0) according to the Dell support site, so it looks like no joy with going to UEFI.

I'm leaning towards the two RAID 1s myself, as I think it will perform better than a RAID 5. Plus the two important servers (DC & file/print) can go onto an array with a hot spare, and the less important but probably more intensive RDS server can go onto the second array.

I also had a hunt around in the RAID controller configuration that you can get into on boot-up. I can't see any way to create multiple logical drives on a single RAID 10 volume. I'm not that familiar with Dells - I'm mostly an HP guy. I know on the ML350s you can create a RAID volume, then multiple logical disks on top of it that the OS can see. Ideally that would be the solution here, but alas it doesn't appear to be possible from what I can see.
If performance is a major factor, then go with option 2. I've done option 1 with RDS servers without issue. However, the reason I named option 1 to begin with is that you would still have had the same storage challenge. So performance might be a bit better for RDS using RAID 5, but your storage capacity per volume would be exactly the same because of the lack of UEFI.
Panagiotis Toumpaniaris (System Architect) commented:
RAID 5 is faster than RAID 1, especially the more drives you put in it.

OK, so I would suggest trying the following:

Create a disk group.
Create 2 virtual disks in the disk group (the disk group should contain all physical disks). Use either RAID 5 or, better, RAID 6 for those VDs; the first one should be <2TB.

If it doesn't work, try putting all drives on the first channel of the controller.

Theoretically it should work!
Philip Elder (Technical Architect - HA/Compute/Storage) commented:
In this situation, we would set up two logical disks on the RAID controller:
95GB Bootable for host OS
Balance GB/TB for data

Boot to the Windows installer routine and install to the 95GB partition.

Once the OS is in, use either PowerShell or Disk Management to set up that second partition and go.
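The PowerShell side of that might look something like the following sketch. The disk number (1) is an assumption for the second virtual disk; check the output of Get-Disk on the actual server before running it, and note NTFS is assumed here as the filesystem:

```powershell
# Sketch only: bring the second virtual disk online after the OS install.
# Assumes the data VD appears as disk number 1 -- verify with Get-Disk first.
Get-Disk                                          # list disks; note the raw data VD's number

Initialize-Disk -Number 1 -PartitionStyle GPT     # GPT allows partitions beyond 2TB
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```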
PowerEdgeTech (IT Consultant) commented:
Go with option 1 - two RAID 1s.

- The R900 doesn't do UEFI.
- You cannot do multiple VDs across the same set of disks with nested RAID levels (10/50/60).
- You should not be using RAID 5 with large disks.

RAID 6 would be OK - a little slower than other RAID types, but you could carve out a smaller VD for the OS and a larger one for the data.

Two RAID 1s is the option I'd recommend.
PowerEdgeTech (IT Consultant) commented:
In this situation, we would set up two logical disks on the RAID controller:
95GB Bootable for host OS
Balance GB/TB for data
It must be split in the RAID controller as separate VDs, which will be presented to the OS as separate disks. You can't get around the 2TB limit with partitioning in the OS alone.
Option 2 doesn't really make sense.

A 3-disk RAID 5 in option 2 is really worthless and will be much slower than a 2-disk RAID 10. Also, the disks are 2TB. In RAID 5, you're really reaching the limit in RAID rebuild times. With 1TB drives, you could probably survive a single disk failure and rebuild in sufficient time before a 2nd disk fails. At 2TB, you may not rebuild in time.
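To put rough numbers on that rebuild-window concern, a back-of-the-envelope estimate (the 30 MB/s sustained rebuild rate is purely an illustrative assumption; real rates vary with controller load):

```powershell
# Rough RAID rebuild-time estimate for a 2TB member disk.
# The sustained rebuild rate below is an assumed figure, not a PERC 6/i spec.
$diskMB      = 2 * 1000 * 1000                           # 2 TB in MB (decimal)
$rebuildMBps = 30                                        # assumed MB/s under production load
$hours       = [math]::Round($diskMB / $rebuildMBps / 3600, 1)
"Estimated rebuild time: $hours hours"                   # roughly 18.5 hours of exposure
```

During that entire window a second disk failure (or an unrecoverable read error on RAID 5) loses the array, which is the argument against parity RAID on large disks.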
So, is this a Hyper-V server?
I would set up a RAID 1 boot drive using the built-in controller, set the rest of the drives up as JBOD, and use ReFS for storing the VMs.
Since you don't boot from the ReFS volume, it should work.
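If that JBOD route were taken, formatting a data disk as ReFS is a one-liner variation on the earlier partition sketch (disk number 2 and the volume label are assumptions):

```powershell
# Sketch: format one JBOD data disk as ReFS for VM storage.
# Disk number 2 is an assumption -- confirm with Get-Disk on the real box.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "VMStore"
```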
elemist (Author) commented:
I ended up going with 2 x RAID 1 sets. I was also able to set the spare disk as a global hot spare across both arrays, so best of both worlds.