J.R. Sitman (United States) asked:
What size RAID5 do I choose for new Dell T410 server

I'm building our new Dell T410 and this is the first time I've configured RAID5. The current size is 408.38 GB. Does this size have anything to do with the physical drive size, which I want to be 100 GB? Can the RAID5 size be changed later?
I'm installing 2008 R2 64-bit.
ASKER CERTIFIED SOLUTION from Alan Hardisty (United Kingdom)
What size are the drives in the server? How many drives do you have? One of the best practices in the enterprise is to install the O/S on a RAID 1 and then put all the data on another physical RAID 5 or 6. What you can do will depend on the drives you have. The main reason for this setup is that if the O/S gets corrupted, it is easy to repair or reinstall without having to worry about losing the application data. If you are limited in the number of drives you have, a 100 GB partition as mentioned above would suffice, as long as when you install the applications you point their data storage towards the second, larger partition that you will have.
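To illustrate the drive math behind that split (an editorial sketch in Python, not from the thread; the 146 GB drive size is taken from the asker's reply below, and sizes are nominal rather than formatted capacity):

# Sketch of the drive counts behind the RAID1-for-OS / RAID5-for-data layout.
def raid1_usable(drive_gb):
    # RAID 1 mirrors a pair of drives; usable space equals one drive.
    return drive_gb

def raid5_usable(n_drives, drive_gb):
    # RAID 5 spreads one drive's worth of parity across the set.
    if n_drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (n_drives - 1) * drive_gb

# The split layout needs at least 2 + 3 = 5 drives:
drive_gb = 146
print("OS on RAID 1 (2 drives):  ", raid1_usable(drive_gb), "GB")     # 146 GB
print("Data on RAID 5 (3 drives):", raid5_usable(3, drive_gb), "GB")  # 292 GB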
J.R. Sitman (ASKER):
There are four 146 GB drives. I've got a company that is going to be installing Hyper-V later next week, and they told me to just use RAID5 and mentioned nothing about installing the OS on RAID1, so I think I should stick to their configuration.
Can a hot spare be added later?
Generally speaking, yes - it all depends on your RAID controller, but I would be very surprised if a Dell one couldn't.
@sifuedition:   I very much appreciate your explanation.  Very helpful
OK, since you have four 146 GB drives, you can configure all of them in a RAID 5, or use three of them and keep one as a hot spare. With that said, you will lose drive space at the rate of one drive for parity and one drive for the hot spare (the math is sketched below). You can do nothing about the parity drive; that is just how a RAID 5 works. With the hot spare, however, the array will rebuild automatically in case of a hard drive failure. If you cannot afford the overhead, use all of the drives to make your RAID 5, but have a cold spare sitting nearby in case of failure.

Also, I would find out from the powers that be in your company what type of redundancy or data protection they are looking for. That information alone will let you know the best direction to take this configuration.
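As a quick sketch of that overhead (an editorial illustration in Python; drive sizes are nominal, which is why the controller's formatted figure of 408.38 GB comes out a little lower than the 438 GB computed here):

def raid5_capacity(n_drives, drive_gb, hot_spare=False):
    # One drive's worth of space goes to parity; a hot spare sits idle entirely.
    active = n_drives - (1 if hot_spare else 0)
    if active < 3:
        raise ValueError("RAID 5 needs at least 3 active drives")
    return (active - 1) * drive_gb

print(raid5_capacity(4, 146))                  # 438 GB - all four drives in the array
print(raid5_capacity(4, 146, hot_spare=True))  # 292 GB - three in the array, one spare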
My suggestion is to create a RAID5 set of 100 GB for the OS, and then a second RAID5 set with the remaining space for the data. This way Windows sees two drives and you can avoid partitions. The bad news is that it requires deleting your existing RAID configuration and all data on the drives.
I just looked on Dell's website - did you get this server with hot-swappable drives or cabled drives? If you have the hot-swappable bays, then you can add the hot spare on the fly using the storage manager software that comes with a Dell. If not, you will have to power down the server to add the hot spare, if you are using cabled drives.
If any of you can continue to help: I'm at the Virtual Disk properties. I've set it to Array Disk "ALL", RAID level 5. Should I leave "Assign an additional array disk as dedicated hotspare" unchecked?
Having a hot spare is good news - but your available space will drop down to about 292 GB, as you will lose one drive to RAID overhead and one to the hot spare, so technically you will only have the data capacity of two drives.

You can extend the array later by adding more drives and then adding the new disk space to the array, then use that space - so if you want redundancy built in, add a hot spare.

If you need more than a 100 GB OS partition plus about 192 GB of data space, you can't add a hot spare at the moment.
If you have a T410, chances are your controller supports RAID 6 - I'd much prefer to see you run a RAID 6 over a RAID 5 + hot spare, as you won't have to wait for your hot spare to rebuild - it will already be a participating member of the array. Downside? Just like with RAID 5 + HS, you have only two-disk capacity. You could also go with a RAID 10, possibly giving you increased disk performance, with up to two-disk fault tolerance, and two-disk capacity. If you need the extra space, go all four in a RAID 5. You can always add drives to a RAID 5 and RAID 6 later, as any PERC in a T410 will support reconfiguring to larger arrays by adding disks.
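For a rough side-by-side of the options named above (an editorial sketch in Python; nominal sizes, four 146 GB drives assumed):

# Capacity and fault tolerance for four 146 GB drives, nominal sizes.
drive_gb, n = 146, 4
layouts = {
    "RAID 5, all 4 drives":     ((n - 1) * drive_gb, "1 failure"),
    "RAID 5, 3 drives + spare": ((n - 2) * drive_gb, "1 failure, auto rebuild"),
    "RAID 6, all 4 drives":     ((n - 2) * drive_gb, "2 failures"),
    "RAID 10, all 4 drives":    ((n // 2) * drive_gb, "up to 2 (one per mirror)"),
}
for name, (gb, tolerance) in layouts.items():
    print(f"{name:26} {gb} GB usable, survives {tolerance}")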
@PowerEdgeTech:
I went with RAID5 and will add the hotspare later.  The OS is installing.  Since this post has gone beyond my original question, I'm going to increase the points to 500.

Thanks to all for helping during this installation.  I'll post soon.
Well, I did something wrong. Drive "C" is the whole 408 GB. What step did I miss? I have to start over, correct?
Correction: I made "C" too large, so I can use "Shrink Volume", correct?
In 2008, you can "shrink" the C: drive. Go to Disk Management (right-click Computer > Manage, then right-click on the C: drive). If you can't shrink it enough, then you'll need to start over: at the screen where it asks what disk/partition you want to install to, click Disk Options, delete everything, then create a new partition of 100 GB (or whatever size you want C: to be), and install to it.
I opened Shrink Volume and it doesn't look like I can shrink it to 100 GB. The most it will let me shrink is 208974 MB. HELP
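That ceiling is normal: Shrink Volume reports sizes in MB and cannot move past Windows' unmovable files (pagefile, MFT, hibernation file), so the maximum shrink often falls well short of what the free space suggests. Checking the numbers (an editorial sketch in Python):

# Why the 208974 MB maximum shrink can't produce a 100 GB C: drive.
total_mb = 408.38 * 1024   # whole-array volume, ~418181 MB
max_shrink_mb = 208974     # the most Windows offered to shrink
print(f"Smallest C: possible: ~{(total_mb - max_shrink_mb) / 1024:.0f} GB")  # ~204 GB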
You can shrink the volume, but then you are dealing with partitioning your drive, which I don't recommend if you can avoid it. I suggest you start over and create a single RAID volume of 100 GB, install the OS, and then create the RAID volume for your data.
Crap. looks like I'm starting over
Shouldn't take too long.
And it's good experience. Better now than to have everything done and then find out you need to start over.
To clarify: when it asks for the RAID5 size, I should select 100 GB and not 408, which was the default?
Are you creating multiple VD's across the disks for your "partitions"?  I would create a single VD, then partition in Windows.
Correct. Select 100 GB. It will create the largest one possible by default.
The company installing Hyper-V will be creating at least 4 virtual machines. I'm back at the screen for RAID configuration and it states 408.38 is the size. Do I change or accept?
@PowerEdgeTech, my suggestion is to create multiple RAID virtual disks. The reason is that it protects data from bad OS installs and partition table corruption, and it makes it possible to grow the virtual disks. If you have one large virtual disk that has been partitioned 3 ways, have fun growing that first or second partition!
While I can appreciate the value of isolation, I believe the potential headaches of "slicing" the array make that type of isolation meaningless. You say "have fun growing that first or second partition", but at least it is possible if it's ever needed. If you create multiple VD's across the same set of disks as your "partitions", it is physically impossible to grow any of the partitions. Then there is the question of array instability - have you ever seen multiple VD's across the disks not all rebuild after a disk fails? (I'm guessing not, since you suggest this method :)) Unless you are running frequent Consistency Checks on all arrays (which you should be doing anyway), there is a good chance that if a disk fails and all three VD's are showing degraded, not all three will rebuild when a new disk is inserted. That headache alone is enough for me to recommend otherwise. I don't mean to hammer on you ... just my two cents based on work with hundreds of Dell servers over the years.
@kevinsieh There are a lot of potential issues when you have multiple RAIDs spanning the same physical disks. Rebuild behavior becomes unpredictable, and you lose a lot of the stability that the RAID was intended to provide. Frequently, rebuilds, offline drives, etc. work just fine in that config, but who wants to be a statistic? They are more prone to error than if the RAID consumes the entire disk.
I wholeheartedly agree with PowerEdgeTech and would never create multiple virtual RAID arrays.
I'm sticking with one VD.  I've started over.
For what my two cents is worth: like PowerEdgeTech, I have worked on hundreds of Dells, and I fully agree with what he said.
I've never had the RAID rebuild problem described, but I will be changing my practices and recommendations going forward. Fortunately, most of my systems are VMs on SAN storage, so I have little need for multiple volumes on physical RAID sets.
I got to the point of where to install the OS, selected "New", and created a 100 GB partition. This is the step I missed last time. DUH!
I've had to restart many installs for that reason ... it is easy to click right on through :)
Slicing arrays in the PERC controller is supported on anything later than a PERC 5. Prior to the PERC 5 (basically with the SCSI controllers), there could be an issue where one virtual array would not rebuild on a disk failure/replacement. There have been no issues that I have seen with the PERC 5 and newer controllers.
Not my day. After activating Windows and setting a static IP, I rebooted and got "BOOTMGR is missing". Called Dell, and I'm starting over.
Granted, they were more prevalent on PERC 4's and earlier, but I've seen it on the PERC 5/6 as well. It is "supported", as in the controller can do it, but Dell RAID engineers do not advise it even on the PERC 5/6.
You can usually fix this by booting to the Windows DVD and choosing "Repair your computer" (on the Install page). May not hurt to have a voice on the line, though.
To be safe I'm starting over.
@PowerEdgeTech
Where/when have you seen issues with the PERC 5/6 and sliced arrays? I have been working with those controllers for years, have monitored tens of thousands of cases, and have not seen an issue to this point with sliced arrays (on those controllers).

I'm not saying that you haven't seen it - just curious for my own information.
Several people logged issues with sliced arrays in EE this month, for what it is worth, brent.

First hand ... when I took over where I am now, that's how the old IT guy had all of the 2950's set up - "partitioned" by slices instead of by Windows. Of the many 2950's we have (all configured identically), we have had to rebuild 3 of them because of drive failures that, when replaced, would not rebuild both VD's. Granted, the maintenance was in complete disarray (for example: 7 servers had predicted-failure drives, 4 had drives that were failed, and 4 more had amber LCDs with various errors going back months, unaddressed). I'll grant you that, with such poor maintenance, this is not a good indicator of the PERC's performance/capabilities as a whole, only that it does happen.

Also, when I worked for Dell, I did see a couple of issues on a PERC 5; but more to the point, we received a "best practices" communication on the PERC 5 (and again later on the 6) from the Dell RAID engineering team which advised against it. I have only seen it myself on the 5, but have operated according to Dell's advice on both the PERC 5 and 6.
Sliced arrays have been problematic for years. Even though most RAID controller manufacturers say it is supported, many in the business would never think about slicing the array because of potential rebuild problems. Given that, it comes down to whether or not you are comfortable with the fact that a sliced array could extend the time it takes to get back fully online, because in the event of a failure, the latency in retrieving data from the degraded array goes up. Extending that time can make for a very long day of phone calls from users crying about a slow network. I, for one, do not believe in sliced arrays. I have just had too many bad experiences with one of the slices not rebuilding after a failure.
3rd time's the charm. I'll spread the points out later.

Thanks for all the help.  Everyone have a Happy New Year
No problem ... Happy New Year!
The only time that I have had an issue with rebuilding a sliced array was when there was a "double/multiple fault / punctured stripe" on the RAID array (which is many times caused by predictive-failure drives being left in the system). This would cause an issue with a drive rebuild whether it was sliced or not. So, I understand that many may be having issues with a particular virtual disk not rebuilding, but when you deep-dive into it (controller/firmware logs, etc.), it often turns out that if it had been created as one virtual disk, the whole disk wouldn't have rebuilt anyway.

Although... I still agree that this is only something you do as a "workaround".

I was just looking for some feedback to see if there was something going on that I was not aware of (something I needed to look into). Thank you to all who responded.
Thanks to all for your help on the original question and the additional information.