• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 341

Understanding why Dell pre-configured RAID 10 has one large disk group but 5 logical drives?

Hello :)

I have a Dell PowerEdge R530 with a PERC H730 RAID controller containing 8 physical 1.88 TB 7200 RPM SATA drives. I asked Dell to preload it with RAID 10, which they did.

My expectation for RAID 10 with my physical capacity was 14.4 TB of raw storage with 7.2 TB usable, presented as one equally large logical volume using that space; i.e., RAID 10 as two nested RAID 1 sub-arrays of 4 physical drives each, inside a larger RAID 0 spanning the two RAID 1 sets together.
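
To spell out the arithmetic behind that expectation, here is a minimal sketch in Python (the per-drive size is rounded to 1.8 TB for illustration):

```python
# Illustrative RAID 10 capacity arithmetic for the 8 drives described above.
DRIVES = 8
DRIVE_TB = 1.8                  # approximate per-drive capacity in TB

raw_tb = DRIVES * DRIVE_TB      # 14.4 TB of raw (physical) space
usable_tb = raw_tb / 2          # mirroring halves it: 7.2 TB usable

print(f"raw: {raw_tb:.1f} TB, usable after mirroring: {usable_tb:.1f} TB")
```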

So what I actually see in the pre-configured Dell RAID 10 is 5 virtual disks: 4 virtual disks are 1.8 TB each and the 5th is 77 GB. So, two questions...

1. Are Dell "virtual" drives the same as RAID logical drives, i.e. the 4 nested RAID 1 drives A, B, C and D spanned by the RAID 0? If so, I am guessing there is no way to present these 4 drives as one larger 8 TB logical drive, like you would be able to with RAID 5?

2. Also, what is the purpose of the 5th 77GB virtual drive?

Thanks for the info.
Asked by: CnicNV
1 Solution
 
max_the_king commented:
Hi,
doing some maths, it seems it is exactly what you requested:

4 virtual drives * 1.8 TB = 7.2 TB

which is just what you expected to have.

The fifth drive (77 GB) must just be spare space left over from the RAID.

You may want to contact Dell to confirm this though, just to be sure.

Meanwhile here is some info: http://www.dell.com/support/article/it/it/itbsdt1/SLN111362/en

max
 
Qlemo (C++ Developer) commented:
I cannot tell about 5 versus 4 logical drives, but maybe the 5th is a config partition.

The way you expect the RAID 10 to be built is the bad way round: you mirror a stripe set. A stripe set fails if one disk fails, so your RAID would be dead if one arbitrary disk in each stripe set failed.
It is better to stripe the mirror pairs. That way an entire pair (both disks) has to fail to render the RAID unusable, and the probability of that is near zero.
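
To put rough numbers on that reasoning for an 8-disk array, here is a small Python sketch; it assumes one disk has already failed and asks how likely the next disk failure is to take the whole array down (a simplification that ignores rebuild times):

```python
# Toy comparison: after one disk has failed, what fraction of the remaining
# disks would kill the whole 8-disk array if they failed next?
DISKS = 8
REMAINING = DISKS - 1          # 7 candidates for the second failure

# RAID 1+0 (stripe of mirror pairs): only the failed disk's mirror partner
# is fatal -> 1 of the 7 remaining disks.
fatal_raid10 = 1 / REMAINING

# RAID 0+1 (mirror of two 4-disk stripe sets): one stripe set is already
# broken, so any disk in the other stripe set is fatal -> 4 of the 7.
fatal_raid01 = (DISKS // 2) / REMAINING

print(f"RAID 1+0: {fatal_raid10:.0%} of second failures are fatal")
print(f"RAID 0+1: {fatal_raid01:.0%} of second failures are fatal")
```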
 
andyalder (Saggar makers bottom knocker) commented:
They have kept the disk sizes below 2 TiB in case your chosen OS does not support disks bigger than that. It'll only take a couple of minutes to delete what they have created and create one huge logical disk if that's what you really want; you can do that in the Ctrl+R BIOS utility.

Remember you'll need UEFI enabled to boot a disk that size and it'll have to be GPT formatted. Personally I'd have a small logical disk for the OS and a bigger one for the data as it makes managing it easier (and you wouldn't need UEFI as the boot disk would be less than 2TiB).
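
For context, a quick sketch of where that 2 TiB boundary comes from: an MBR partition table stores sector counts as 32-bit values, and sectors are traditionally 512 bytes, which caps an MBR disk at 2 TiB. GPT uses 64-bit sector counts, so it has no such limit.

```python
# Why MBR tops out at 2 TiB: 32-bit sector counts with 512-byte sectors.
SECTOR_BYTES = 512
MAX_SECTORS = 2 ** 32

max_bytes = MAX_SECTORS * SECTOR_BYTES
print(f"MBR-addressable size: {max_bytes / 2**40:.0f} TiB")   # -> 2 TiB
```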
 
CnicNV (Author) commented:
Thanks everyone for the information so far.

Hi andyalder, yeah, your explanation looks the most correct in the context of my situation. I am seeing something called "disk group 0", of which there is only one, and all 5 of these "virtual disks" appear to be sub-components of it.

So I am guessing that this disk group is the aggregate usable space of the entire RAID 10 array, and these virtual disks are just logically carved-up sub-portions residing on it to get around the greater-than-2 TB limitation of certain OSes such as VMware 5.5?

Just wanted to get a clearer understanding before I start installing ESXi on one of the virtual disks, and the subsequent VMs.

I would like to put ESXi 5.5 onto that 74 GB virtual disk and leave the 4 larger virtual disks for VMs.
 
David Johnson, CD, MVP (Owner) commented:
Better to just put ESXi onto flash media or an SD card.
 
PowerEdgeTech (IT Consultant) commented:
1. No, RAID 10 is presented the same as RAID 5. All Virtual Disks are presented as separate "disks" in the OS.

I suspect you did not select GPT/UEFI during configuration of the ordered server. Without GPT, the largest "disk" Windows can use is 2 TB, and without being installed on UEFI, Windows can't boot from a GPT disk either. So, to meet your requirements (MBR/BIOS), they would have had to "slice" the array into chunks of 2 TB or smaller. The 77 GB is probably just a VD holding what is left over.

If you want a single 8TB "disk", you will need to enable UEFI and reinstall the OS.
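
A back-of-the-envelope sketch of that slicing, in Python. The nominal 2 TB (decimal) drive size and the ~1.8 TiB slice size below are assumptions chosen only to show how a small leftover VD falls out; the thread does not confirm the exact figures Dell used.

```python
# Hypothetical reconstruction of the factory slicing; the figures are assumptions.
TIB = 2 ** 40
drive_bytes = 2 * 10 ** 12            # assume a nominal "2 TB" (decimal) drive
usable_bytes = 8 * drive_bytes // 2   # RAID 10 mirroring halves the raw space

slice_bytes = int(1.8 * TIB)          # assume each large VD is ~1.8 TiB
full_slices, leftover = divmod(usable_bytes, slice_bytes)

print(f"usable: {usable_bytes / TIB:.2f} TiB")                          # ~7.28 TiB
print(f"{full_slices} large VDs, leftover ~{leftover / 2**30:.0f} GiB") # 4 VDs, ~78 GiB
```

Under those assumed numbers the leftover comes out close to the size of the small fifth VD, which is consistent with it simply holding the remainder.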
 
PowerEdgeTech (IT Consultant) commented:
Wow, sorry ... apparently I didn't refresh this page ;)
 
CnicNV (Author) commented:
Thanks and sorry for delayed response :)