cansib

asked on

Hard drive configuration for ESXi installation

I have a Dell PE R710 server with 8 x 300GB hard drives.  I am going to install ESXi onto the server and run several VM's.  How should I configure my hard drives?  I am hearing that RAID5 can really affect performance, so RAID1+0 is recommended.  I am also hearing you should have a separate array for your OS, but wouldn't that be a huge waste of space?  I also heard that if you have everything on the same array, you are limited to a 1MB block size with a 256GB maximum hard drive capacity; is that true?  Are they saying each VM will be limited to 256GB, or each hard drive partition within the VM's, or that you will only have 256GB of space to allocate to all of your VM's?  Please help!

Thanks.

Mark
Neil Russell

Go with RAID 1+0.
See discussion here about limits
http://communities.vmware.com/message/1001312 
RAID5 doesn't hurt performance much; I have several high-I/O VMs on RAID5 and it's fine. That said, RAID10 is indeed better, but 1) you probably won't notice a difference, and 2) you lose extra disk capacity to mirroring.

You may have seen advice to install the hypervisor on separate disks because you typically want to separate it from your storage, in case you need to mess with your ESXi install; that way no harm comes to the VMs. It's just best practice, but it isn't required. Since you're using ESXi, you don't need to use your hard disks at all; you can install it on a USB stick instead, which frees up your HDs. You have the option to change the block size of your datastore, but off-hand I can't recall whether you can do so during the install if you put the hypervisor on the same disks as your datastore. Even if not, you can remove (delete) your datastore and re-add it with the block size you require. The block size determines the maximum virtual disk (volume) size that can be allocated to a VM:
1MB => 256GB max
2MB => 512GB max
4MB => 1TB max
8MB => 2TB max
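The table above is a straight multiple; a minimal sketch of the arithmetic (illustrative only, not VMware's code):

```python
# Sketch: each 1MB of VMFS3 block size allows 256GB of maximum
# virtual disk size, matching the table above.

def max_vmdk_gb(block_size_mb):
    """Largest single virtual disk (GB) on a VMFS3 datastore
    formatted with the given block size."""
    if block_size_mb not in (1, 2, 4, 8):
        raise ValueError("VMFS3 block sizes are 1, 2, 4, or 8 MB")
    return block_size_mb * 256

for bs in (1, 2, 4, 8):
    print(f"{bs}MB block size -> {max_vmdk_gb(bs)}GB max virtual disk")
```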

Regards,
~coolsport00
RAID 5 works fine for me in all my VMware servers. As coolsport00 indicated, the block size the datastore is formatted with will dictate the maximum size for a single virtual hard drive (file) on the datastore.

RAID 5 suffers a small penalty on write performance as compared to RAID 10.
I always recommend configuring a hot spare in any RAID setup - that would leave you with an odd number of drives, which is not conducive to RAID 10.

I would recommend 7x300 in RAID 5 plus one hot spare. That would give you 1.8 TB usable.

If you went 8x300 in RAID 5 you would be at 2.1 TB, slightly over the 2TB limit for an ESX datastore.

If you went 8x300 in RAID 10 you would have 1.2 TB usable.
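As a sanity check on the capacity figures above, here is a small sketch of the usable-space arithmetic (assuming identical drives; real controllers reserve some metadata, so actual numbers come out slightly lower):

```python
# Sketch of RAID usable-capacity math, assuming n identical drives
# of drive_gb each and an optional hot spare that holds no data.

def usable_gb(raid_level, n_drives, drive_gb, hot_spares=0):
    n = n_drives - hot_spares          # spares don't contribute capacity
    if raid_level == "RAID5":
        return (n - 1) * drive_gb      # one drive's worth goes to parity
    if raid_level == "RAID10":
        return (n // 2) * drive_gb     # half the drives are mirror copies
    raise ValueError(f"unsupported level: {raid_level}")

print(usable_gb("RAID5", 8, 300, hot_spares=1))  # 7x300 RAID5 -> 1800 GB (1.8 TB)
print(usable_gb("RAID5", 8, 300))                # 8x300 RAID5 -> 2100 GB (2.1 TB)
print(usable_gb("RAID10", 8, 300))               # 8x300 RAID10 -> 1200 GB (1.2 TB)
```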

In any event be sure your RAID controller supports "write back" caching for writes.

Good Luck
cansib

ASKER

and where does the ESXi install go?

I don't really want to rely on a memory stick just because I'm worried it would die sooner than a hard drive would.

bgoering, you said you'd recommend 7x300 in a RAID 5 config with a hot spare, so in that example you're saying to put ESXi on that same array, as I would have no other array to put it on.  So with that, I would have a 1MB block size, with 256GB of maximum allocatable storage for each VM, correct?  So, if I did it that way and created a Windows 2003 server VM, the maximum hard drive space I could give that server is 256GB, which could be done all in one drive or in 2 drives, or whatever, but added up it won't be more than 256GB total space for that VM.  I'm just trying to make sure I understand this.  Sorry.

Mark
If you were to use a USB stick, the install would go on that. The footprint of the install is fairly small; a 2GB stick would be all you need. You can make a simple copy of the USB for failover. Now, it's not like a RAID1 where, if one disk fails, you're still up and can replace the failed drive. There is slight downtime - the time it takes to unplug the failed USB and plug in the copy. If it fails after hours, the recovery time would obviously be longer.

~coolsport00
cansib

ASKER

I see what you're saying coolsport00.

But how about this, if I create 2 arrays, one that's 2 x 300GB mirrored for the OS, then one that's 6 x 300GB in a RAID5 (or 5 x 300GB with hot spare), couldn't I still allocate the leftover space to a VM on the OS array?  So that one VM will have the 256GB limit, but I could have the other array with a 2MB or 4MB block size so I could have higher capacity VM's if I needed it.  What do you think of that?
Yep...you sure can; keep in mind the *only* reasons behind my suggesting installing on USB are 1) to save a drive or two and 2) to separate your ESXi install from datastore storage.

Your suggestion would certainly work fine.

Regards,
~coolsport00
One reason I would recommend the USB stick (which is the way I am running my ESXi) is that the install process to disk will sometimes wipe out the datastore when you go to install the next version. That is why I converted to USB. My ESXi is also on a Dell R710, and there is an internal USB port in the front left corner (as you face the server) that is easily accessible.

Just get good-quality USB media (or an SD card) for the install. In any event, if you lose the USB it is pretty easy to plug in another, re-run the install, and restore your configuration from backup. Wasting 2x300 drives on a fault-tolerant ESXi install disk is overkill (in my not so humble opinion) - get a USB stick and add those drives to your 7x300 RAID 5.

Good Luck
cansib

ASKER

So, when or where during the ESXi install (or is it during the VM install) do you specify the block size?
ASKER CERTIFIED SOLUTION
bgoering
[Solution content available to Experts Exchange members only.]

SOLUTION
[Solution content available to Experts Exchange members only.]
cansib

ASKER

Well, I decided to go with 2 drives configured with RAID1 for the OS, and the rest in a RAID10.  I think I could have been just fine with the RAID5, but my bosses are 22 doctors and performance is a big deal to them.  For what we're doing, I think I'll be fine.  Thanks for the help!