• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 3070
  • Last Modified:

What RAID stripe size should our Dell MD3200i have in a VMware environment?

We are trying to decide what RAID stripe size we should use on our Dell MD3200i. The box will only be used for VMs, and it will be a bit of a mixed bag: some high-I/O VMs and some low-I/O VMs.

The box itself has 12x 600GB SAS disks and will be configured as RAID 10.

The default on the box is a 128KB stripe (segment) size, but I am wondering whether the VMs would benefit from a larger one, like 512KB?

What are your expert thoughts?

We do have 4x Dell MD3200i boxes which are all identical, so we are planning to set each one up with a different stripe size for testing purposes. I will let you know what we find.
Asked by Thomas_Wray

1 Solution
David commented:
The most efficient stripe size is going to match the I/O size that VMware requests. So if VMware reads/writes 1MB at a time, you want to configure the disks so that the RAID controller reads/writes 1MB at a time.

As such, determine how VMware is configured, and go from there.
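To make the relationship concrete, here is a minimal sketch of the arithmetic behind matching segment size to host I/O size. The numbers (12 disks in RAID 10, 128KB default segment) come from the question above; the helper names are made up for illustration, and it assumes I/Os start on segment boundaries.

```python
# Sketch of the segment-size vs. host-I/O-size relationship discussed above.
# Assumes: RAID 10 on 12 disks = 6 mirrored pairs, i.e. 6 data-bearing
# spindles; I/Os are aligned to segment boundaries. Helper names are
# hypothetical, for illustration only.

def segments_touched(io_size_kb: int, segment_kb: int) -> int:
    """How many segments (and thus, at most, spindles) one host I/O spans."""
    return -(-io_size_kb // segment_kb)  # ceiling division

def full_stripe_kb(segment_kb: int, data_disks: int) -> int:
    """Full stripe width = segment size x data-bearing spindles."""
    return segment_kb * data_disks

print(full_stripe_kb(128, 6))       # full stripe at the array's default segment
print(segments_touched(1024, 128))  # segments spanned by a 1MB host I/O
print(segments_touched(8, 128))     # an 8KB I/O stays within one segment
```

The point of the sketch: a 1MB request at a 128KB segment spreads across several spindles, while an 8KB request lands on a single one, which is why the host's typical I/O size drives the choice.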
 
Thomas_Wray (Author) commented:
Do you mean the block size that I chose on the VMFS datastore?

Is there any easy way to find out what it is requesting otherwise?

thanks
 
andyalder (Saggar makers bottom knocker) commented:
VMware's I/O block size just follows the I/O block size of the guest OSes running on it, so it is the guests that matter, and that in turn depends on the applications running on top of those OSes. In general it is impossible to determine, because many apps and OSes are all hitting the same storage, so I'd take the defaults.
 
Thomas_Wray (Author) commented:
That's a good point about the defaults. I have been running tests with Iometer, and changing the data size it works with to 8KB seems to produce the highest IOPS; this is still with the RAID stripe set to its default segment size of 128KB. So the workload inside the VM does appear to have a direct effect on the SAN's performance.

We are also wondering whether VMs themselves are a sequential workload. Someone who works for a storage vendor told us that VMDKs are sequential, but how can they be if changing the workload type in Iometer has such a dramatic effect?

We have quite a lot of SQL Servers on Windows platforms, so I assume a segment size of 64KB is a safe bet for us. I will continue testing and get back to you.

However, we do have some file server VMs, so I assume these would be best kept on a larger segment size? Perhaps these should stay at the 128KB segment default.


thanks for your input so far.
 
andyalder (Saggar makers bottom knocker) commented:
Not sure what they mean about VMDKs being sequential; a VMDK is a file, and its contents are laid out sequentially, but that doesn't mean they are accessed sequentially. If you defrag an Exchange database the file is contiguous, but reads and writes to it are all over the shop.
 
David commented:
Actually, VMware aggregates I/Os unless you are using VMDirectPath pass-through I/O, which nails controllers and disks to specific machines. It has to. File systems also block I/Os together for speed and efficiency; this is done within the operating system and sometimes within the device drivers themselves. Andy is correct that determining the blocking is quite difficult. The only correct way to do it is to run software, which you may or may not have, that measures it at the physical drive level.

Iometer does NOT measure what goes to the disk drives. It measures what is sent to the device driver in the local machine. What ends up on the disk drive involves VMware settings, device driver settings (within VMware), and settings within the local machines; then it is further blocked, reordered, and resized by the RAID controller.

VMware generates highly RANDOM I/O to/from the storage subsystem, so tuning for Iometer, as Andy mentions, will often result in much slower overall performance. Unless you spend some money buying decent tools to measure what you need, the best thing you can do is use perfmon on the individual machines and monitor QUEUE depth over time. This tells you how long, on average, that particular machine waits per I/O of any size. The smaller the average queue depth (1 or 2), the better overall performance is.

Just sample over NORMAL operating conditions over time, per machine, and then look at changing file system settings and RAID settings. It is a universal truth that the most efficient I/O is the one you DON'T do (meaning it is cached inside the host machine). So the more things that match, the more efficient.
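As a minimal sketch of the kind of check being suggested here: sample the perfmon "Avg. Disk Queue Length" counter on each machine over normal operation, then look at the average. The readings below are invented for illustration, and the 1-2 threshold is the rule of thumb from the comment above.

```python
# Sketch of averaging sampled queue-depth readings, as suggested above.
# The sample values are hypothetical; in practice they would come from
# perfmon's "Avg. Disk Queue Length" counter collected over normal load.

samples = [0.8, 1.1, 2.3, 1.7, 0.9, 1.4, 3.0, 1.2]  # hypothetical readings

avg_queue_depth = sum(samples) / len(samples)
print(round(avg_queue_depth, 2))

# Rule of thumb from the discussion: an average around 1-2 means the
# disks are keeping up; sustained higher values mean I/Os are waiting.
if avg_queue_depth <= 2:
    print("disks are keeping up on average")
else:
    print("sustained queueing: consider retuning or adding spindles")
```

Occasional spikes (like the 3.0 sample) are fine; it is the average over a long window of normal operation that matters.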

E.g., if NTFS reads/writes 64KB I/Os and you are doing RAID 1, then in a perfect world you want to read/write 64KB per disk at a time. You will be dealing with I/Os of multiple sizes and at different levels, so benchmarking can never nail it for you. You can spend money hiring a pro to measure I/O at the physical drive and tune the RAID for lowest latency (time to complete an I/O, basically) on a weighted average of your I/Os, but even if you do that, a minor change in file system settings or adding some cache memory changes everything.

So just take it slow: look at the NTFS I/O sizes and at perfmon to see what the OS is doing, look at latency, and in parallel contact your RAID vendor and get some performance metrics based on various I/O sizes and mixes of read vs. write and random vs. sequential, then try to get everything optimized for a weighted average.

There is no simple way to do it right, but the simple way to do it wrong is artificial benchmarking and making assumptions about I/O performance without taking the time to MEASURE it and learn the specifications.
 
Thomas_Wray (Author) commented:
OK, thanks for that reply. Just out of interest, on the Iometer bit, what about this forum post?

http://communities.vmware.com/docs/DOC-3961

I found it very useful, and they are saying Iometer is a great way of testing VM systems.

thanks
 
David commented:
Iometer is best when run on a non-VMware system against raw disk I/O to measure the performance of a RAID controller, or when run on a single non-VMware machine against non-RAID disk drives to measure the performance of a file system.

But the moment you start adding layers, i.e. file-based Iometer to NTFS in a VM -> VMware -> RAID, you really just see numbers that can't be used for tuning purposes unless you absolutely nail how your applications send I/O. That is a near impossibility unless you are benchmarking something like SQL Server or Exchange and understand your needs perfectly.

Don't forget, you are doing mostly RANDOM I/O if you have more than 2 or 3 VMs and aren't doing things like streaming video. You need to tune for lowest latency and highest I/Os per second, not throughput. If you have users running standard desktop apps, they want snappy performance when they move the mouse or change to a new screen, not delays while big chunks of data they'll never need are copied.
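The IOPS-vs-throughput distinction drawn above can be sketched with simple arithmetic: MB/s = IOPS x I/O size, so a large-block sequential test can post impressive MB/s numbers even when small-block random IOPS is the figure that matters for mixed VMs. The numbers below are hypothetical, for illustration only.

```python
# Sketch of the throughput-vs-IOPS relationship: MB/s = IOPS x I/O size.
# Both scenarios below use invented numbers purely to illustrate why a
# big MB/s figure says little about small-block random performance.

def throughput_mb_s(iops: float, io_size_kb: float) -> float:
    """Sequential throughput implied by a given IOPS rate and I/O size."""
    return iops * io_size_kb / 1024

print(throughput_mb_s(5000, 8))    # 5000 IOPS of 8KB random I/O: modest MB/s
print(throughput_mb_s(400, 1024))  # 400 IOPS of 1MB sequential I/O: big MB/s
```

The second workload looks ten times "faster" in MB/s while completing far fewer operations per second, which is why tuning for throughput alone can hurt a VM farm doing mostly small random I/O.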

 
andyalder (Saggar makers bottom knocker) commented:
It's useful to note that VMware themselves also use Iometer - http://www.vmware.com/files/pdf/performance_char_vmfs_rdm.pdf
 
Thomas_Wray (Author) commented:
It's a bit of an open-ended question, so it's difficult to answer, but the feedback I received has helped me get closer to the answer I need.
