elemist
asked on
Advice on RAID Setup
Howdy,
We're rebuilding a network and have to use existing hardware due to budget limitations, until we can replace it next year.
The server itself is an old Dell PowerEdge R900 - specced out with multiple CPUs and ~64GB of RAM. It has the PERC 6/i RAID controller with 5 x 2TB hard disks.
My initial plan was 1 x RAID 10 array with a hot spare. This worked and Windows loaded; however, due to its age and non-UEFI BIOS, it can't see anything larger than 2TB. So I went back to the beginning and tried to create 2 partitions within the Windows Server setup screen - but again it creates one partition and then throws errors when trying to create additional partitions on the remaining 1.6TB of disk (no matter what size I choose).
From reading online it appears you can't create multiple virtual drives on a single RAID 10 array like I've done in the past with HP servers.
This leaves me with the two options that I can think of:
1) Create 2 x RAID 1 mirrors and use 1 drive as a hot spare. This will give me 2 x ~2TB drives.
2) Create 1 x RAID 1 mirror and 1 x RAID 5 with no hot spare (I actually have a physical spare drive, so the hot spare isn't critical). This will give me maximum storage.
The server will run 1 x Server 2016 domain controller for about 20 users, 1 x 2008 RDS server for 3-4 users, and I was planning to run 1 x application/file server too.
Open to suggestions and ideas.
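For anyone weighing the two layouts, the usable capacity can be sanity-checked with some quick arithmetic (a rough sketch; sizes in nominal TB, ignoring formatting overhead):

```python
# Usable capacity of the two proposed layouts with 5 x 2TB disks.
DISK_TB = 2

# Option 1: two RAID 1 mirrors + 1 hot spare -> two ~2TB volumes
option1 = [DISK_TB, DISK_TB]      # each mirror yields one disk's capacity

# Option 2: one RAID 1 mirror + one 3-disk RAID 5, no hot spare
raid1 = DISK_TB                   # 2 disks, capacity of one
raid5 = (3 - 1) * DISK_TB         # n-disk RAID 5 yields (n-1) disks' capacity
option2 = [raid1, raid5]

print("Option 1:", sum(option1), "TB usable")   # 4 TB
print("Option 2:", sum(option2), "TB usable")   # 6 TB
```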
ASKER
I've had a solid hunt around in the BIOS - it's actually very basic. It's also running the latest BIOS revision (1.2.0) according to the Dell support site. So it looks like no joy with going to UEFI.
I'm leaning towards the two RAID 1s myself, as I think they will perform better than a RAID 5. Plus the two important servers (DC & file/print) can go onto an array with a hot spare, and the less important but probably more intensive RDS server can go onto the second array.
I did also have a hunt around in the RAID controller configuration that you can get into on boot-up. I can't see any way to create multiple logical drives on a single RAID 10 volume. I'm not that familiar with Dells - mostly an HP guy. I know on the ML350s you can create a RAID volume, then multiple logical disks on top of it that the OS can see. Ideally that would be the solution here, but alas it doesn't appear to be possible from what I can see.
If performance is a major factor, then go with option 2. I've done option 1 with RDS servers without issue. However, the reason I named option 1 to begin with is that you still would've had the same storage challenge. So performance might be a bit better for RDS using RAID 5, but your storage capacity would be exactly the same because of the lack of UEFI.
RAID 5 is faster than RAID 1, especially the more drives you put in it.
OK, so I would suggest trying the following:
Create a disk group.
Create 2 virtual disks in the disk group (the disk group should contain all physical disks). Use either RAID 5 or, better, RAID 6 for those VDs; the first one should be <2TB.
If it doesn't work, try putting all drives on the first channel of the controller.
Theoretically it should work!
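Worth noting why the first VD needs to stay under 2TB: without UEFI the boot disk must use MBR partitioning, which stores sector addresses as 32-bit values, so with 512-byte sectors the largest addressable volume is 2 TiB. A quick check of the arithmetic:

```python
# MBR stores LBA sector addresses in 32 bits; with 512-byte sectors,
# the largest volume it can address is 2^32 * 512 bytes.
SECTOR_BYTES = 512
max_bytes = 2**32 * SECTOR_BYTES
print(max_bytes)                            # 2199023255552 bytes
print(max_bytes / 1024**4, "TiB")           # exactly 2.0 TiB
print(round(max_bytes / 1000**4, 2), "TB")  # ~2.2 decimal TB
```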
In this situation, we would set up two logical disks on the RAID controller:
95GB Bootable for host OS
Balance GB/TB for data
Boot to the Windows installer routine and install to the 95GB partition.
Once the OS is in, use either PowerShell or Disk Management to set up that second partition and go.
Go with option 1 - two RAID 1s.
- The R900 doesn't do UEFI.
- You cannot do multiple VDs across the same set of disks with nested RAID levels (10/50/60).
- You should not be using RAID 5 with large disks.
RAID 6 would be OK - a little slower than other RAID types, but you could carve out a smaller VD for the OS and a larger one for the data.
Two RAID 1s is the option I'd recommend.
In this situation, we would set up two logical disks on the RAID controller:
95GB Bootable for host OS
Balance GB/TB for data
These must be split in the RAID controller as separate VDs, which will be presented to the OS as separate disks. You can't get around the 2TB limit with partitioning in the OS alone.
Option 2 doesn't really make sense.
A 3-disk RAID 5 in option 2 is really worthless and will be much slower than a 2-disk mirror. Also, the disks are 2TB. In RAID 5, you're really reaching the limit in RAID rebuild times. With 1TB drives, you could probably survive a single disk failure and rebuild in sufficient time before a second disk fails. At 2TB, you may not rebuild in time.
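To put rough numbers on the rebuild-window concern, here's a back-of-the-envelope sketch (the 50 MB/s sustained rebuild rate is an assumption; real rates depend on the controller and concurrent load):

```python
# Rough rebuild time: the full disk capacity has to be rewritten,
# so time scales linearly with disk size over sustained rate.
def rebuild_hours(disk_tb, mb_per_s):
    total_bytes = disk_tb * 10**12
    return total_bytes / (mb_per_s * 10**6) / 3600

print(round(rebuild_hours(1, 50), 1), "hours for a 1TB disk")  # ~5.6
print(round(rebuild_hours(2, 50), 1), "hours for a 2TB disk")  # ~11.1
```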
So, this is a Hyper-V server?
I would set up a RAID 1 boot drive using the built-in controller, set the rest of the drives up as JBOD, and use ReFS for storing the VMs.
Since you don't boot from the ReFS volume, it should work.
ASKER
Ended up going with 2 x RAID 1 sets. I was also able to set the spare disk as a global hot spare across both arrays, so best of both worlds.
Have you tried partitioning the RAID volume with Dell's virtual disk management from the RAID BIOS?
I mean creating 2 VDs in the same group of PDs? Does it have the same limitation?