Exchange Server 2007 disk configuration on ESXi 4.0 with local storage

Just wondering what would be the recommended setup for Windows 2003 with Exchange 2007 on an ESXi server with local storage?

ESXi Server
RAID 1 - for ESXi
RAID 5 - for VMs

Now the question I have is: should I configure the VMs (Windows 2003 and Exchange) with a single partition on the RAID 5 (system, data, and Exchange all on the one partition), or should I still partition as if it were a physical server (i.e., on the RAID 5, configure multiple partitions: C for system, D for data, and E for Exchange)?

Really just after the recommended setup for Windows servers on ESXi using local storage.
pancho15 Asked:
 
coolsport00 Commented:
Sure, just to be consistent. And if you later migrate to a different storage infrastructure, maybe with a SAN, etc., conversion tools allow you to 'convert' (move) the VM's individual disks instead of all of them, so you will have more flexibility. Now, for simplicity, it's easier with just one disk (VMDK), but I don't recommend creating it that way.

~coolsport00
 
Mike Thomas (Consultant) Commented:
Partition it as you would a physical server, but with all volumes on the RAID 5, as you suggested.
 
coolsport00 Commented:
Well, in all honesty, it doesn't matter. ESX/i storage is different from Windows partitioning. ESXi will see your RAID 5 as one piece of storage when you add it to your ESXi host. Yes, you can configure several volumes/partitions on a VM from that datastore, but you won't gain performance by creating separate volumes. A good way to think about it: a datastore is just one chunk of space. Exchange best practices do say to keep storage groups (SGs) and logs on separate volumes, though. That being said, I would make separate 'disks' (VMDKs), like they were on a physical box.

Regards,
~coolsport00
 
pancho15 (Author) Commented:
coolsport00 - so you suggest creating separate disks (VMDKs) even though they'll be on the same RAID array?
 
David (President) Commented:
Actually, you can do a great deal to optimize storage performance under ESXi, but some important variables are missing here. Basically, you want to optimize for efficient and minimal I/O.

Regardless of the RAID level, or whether you use hardware or software RAID, ESXi is going to aggregate I/Os to the disk. These I/Os will have a fixed size, which may be larger or smaller than the native I/O size that eventually goes to the disk. Remember that writes will always require the full I/O size, and may or may not be cached, depending on settings at every layer.

If the RAID system is configured for a 256KB stripe size, then that obviously means 256KB will be written to disk, regardless of what NTFS needs. If NTFS needs 4KB of data, then ESXi will aggregate I/Os up to a point, and may ask for a different block size. The RAID controller may be required to read 256KB at a time; that is a function of the RAID controller architecture.
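
To put a rough number on that rounding effect, here is a minimal Python sketch (the 4KB request and 256KB stripe are the figures from this example; the helper name is just for illustration):

import math

def bytes_touched(request_bytes, stripe_bytes):
    # The controller works in whole stripes: round the request up,
    # no matter how little the filesystem actually asked for.
    return math.ceil(request_bytes / stripe_bytes) * stripe_bytes

print(bytes_touched(4 * 1024, 256 * 1024))  # 262144 -> a 4KB request still moves 256KB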

Now, in a typical default config, NTFS will use 4KB I/O requests, SQL Server will send out 64KB requests, and in this scenario your RAID controller will do 256KB. Let's say ESXi is 1024KB (it varies). The RAID type, whether 1 or 5, affects the back-end disks; let's take RAID level out of the equation, because it makes things too complicated with so many other variables.

If you do no tuning, and do nothing more than create a 1-byte file, then you are going to read/write 8-16MB worth of data on a typical configuration (which then results in a much larger number by the time it gets to the physical disks, as a function of RAID level, buffer cache, and the number of disks in the RAID set).

1) At minimum, NTFS has to do 2 reads to figure out where to put the data. NTFS works at a 4KB block size, and there is no way the information it needs will be within a single 4KB section, or 2 adjacent 4KB sections, so NTFS asks for 2 x 4KB reads. ESXi will then generate 2 x 1024KB reads (or 1 x 1024KB if the data is in the same area, but this is unlikely). Now the RAID controller, which works in 256KB units, has 2 x 1024KB requests, so it has to do 8 I/Os (two sets of sequential I/Os). At all levels, reads can be cached, and unfortunately the disk drives, the RAID controller, ESXi, and the OS will all do caching.

2) Now that it has read a few MB worth of data to figure out where to put things, it has to write the data. There are no shortcuts with writes: the filesystem needs the new directory entry written, the NTFS journal needs to be written, and then the data itself has to be written. Writes always have to go to disk; they can be cached to delay the inevitable and hope for some aggregation, but they have to get done. So now you have at minimum 3 x 4KB ==> 3 x 1024KB ==> 3 x 4 x 256KB ==> (then twice that if RAID 1).
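
To make that chain concrete, here is the same arithmetic in a short Python sketch, using the example sizes from this thread (4KB NTFS, 1024KB ESXi, 256KB stripe, RAID 1 doubling the writes); the model is deliberately simplified, as above:

NTFS_KB, ESXI_KB, STRIPE_KB = 4, 1024, 256

# Read path: 2 NTFS metadata reads become 2 ESXi-sized reads, and the
# controller services each one as 1024/256 = 4 stripe-sized I/Os.
read_ios = 2 * (ESXI_KB // STRIPE_KB)       # 8 I/Os
read_kb  = 2 * ESXI_KB                      # 2048KB

# Write path: directory entry + NTFS journal + data = 3 writes, again
# inflated to ESXi size, split into stripes, then doubled by RAID 1
# because every write lands on both mirrors.
write_ios = 3 * (ESXI_KB // STRIPE_KB) * 2  # 24 I/Os
write_kb  = 3 * ESXI_KB * 2                 # 6144KB

print(read_ios, write_ios, (read_kb + write_kb) / 1024, "MB")  # 8 24 8.0 MB

That 8MB total for a 1-byte file is the low end of the 8-16MB range quoted above; caching and RAID geometry push the real number around.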

So if you get a spreadsheet and start playing with the native I/O sizes of NTFS, SQL, ESXi, and RAID, you can see that these things matter. You need to get everything on the same page in terms of I/O sizes.
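
A minimal Python version of that spreadsheet, under the same simplified round-up-at-every-layer model (the function name, and the assumption that ESXi inflates every request to its own block size, are mine, for illustration):

import math

def amplification(app_kb, esxi_kb, stripe_kb):
    # KB moved at the controller per KB the application asked for,
    # assuming each layer rounds a request up to its own unit size.
    esxi_io = math.ceil(app_kb / esxi_kb) * esxi_kb       # ESXi inflates the request
    raid_io = math.ceil(esxi_io / stripe_kb) * stripe_kb  # controller rounds to stripes
    return raid_io / app_kb

print(amplification(4, 1024, 256))    # 256.0 -- default 4KB NTFS clusters, mismatched sizes
print(amplification(64, 1024, 256))   # 16.0  -- 64KB (SQL-style) requests already help
print(amplification(256, 256, 256))   # 1.0   -- everything on the same page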
 
pancho15 (Author) Commented:
Thanks.