Recommendation for RAID setup in a virtual server

We're looking into our first virtual server setup. I have always configured servers with the OS on one array (usually RAID 1) and data on another array (either RAID 1 or RAID 5). I am not quite sure how this works in a virtual setup, though.

1. Would the Hyper-V management be on a separate array or virtual disk?
2. Do virtual servers take up the same amount of disk space for the OS with each virtual server, or are some things shared?
3. Is it still a good idea to try to put the OS on one array and data on another, or does this change in a virtual environment?

In our case, we would be looking to put 4-5 virtual servers on the machine: Exchange 2010 (approx. 100 users), SharePoint, and 2-3 very low-usage virtual servers, all on the same box. What would you recommend?

Option #1 – one RAID 10 array for Exchange and Hyper-V management, and one RAID 10 array for SharePoint and everything else.

Option #2 – a RAID 1 array for Hyper-V management, one RAID 10 array for Exchange, one RAID 10 array for SharePoint, and a RAID 1 array for everything else.

Any other suggestions? Yes, cost is an issue, but we don't want a slow server either. Using two RAID 10 arrays would also require a DAS box, which adds a lot to the price. We need to allow room for growth, though. We are trying to find a good balance between performance and price.
zefon asked:
 
kevinhsieh commented:
To answer question #2: good practice is to make each VM independent of the others, so they don't share anything, and therefore they don't save disk space by sharing.

In Hyper-V R2, dynamic and fixed disk performance is "at almost parity", so I stick with dynamic disks. There is no reason to tie up 40 GB of physical disk (not to mention trying to back up a 40 GB file) when a fixed VHD would only have 10 GB of data on it.
http://blogs.msdn.com/b/tvoellm/archive/2009/08/05/what-s-new-in-windows-server-2008-r2-hyper-v-performance-and-scale.aspx

I am not exactly sure what question #3 means compared to question #1, but for your VMs, DO NOT partition your virtual drive into multiple logical partitions. For your Exchange 2010 server, say you want the OS on C:, the Exchange binaries and databases on D:, and the Exchange logs on E:. On a physical server you might just partition the disk; that makes no sense for a VM. In a VM, if you want three drive letters, add three virtual hard drives. This allows you to grow each VHD and its partition if needed, without being blocked by other partitions on the same drive.
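As a rough illustration of the "one VHD per drive letter" idea, here is a minimal Python sketch that turns a per-VM drive-letter plan into a list of separate virtual disks to create. The VM names and sizes are only example assumptions, not recommendations for your environment.

```python
# Sketch: plan one separate virtual hard disk per drive letter,
# instead of carving one large disk into multiple partitions.
# VM names and sizes below are hypothetical examples.

vm_plans = {
    "EXCH01": {"C": 40, "D": 300, "E": 50},   # OS, Exchange databases, Exchange logs
    "SP01":   {"C": 40, "D": 100},            # OS, SharePoint data
}

def vhds_to_create(plans):
    """Return one (vm, drive letter, size in GB) entry per VHD.

    Because each drive letter gets its own VHD, any one of them can
    later be expanded without touching the others.
    """
    return [
        (vm, letter, size_gb)
        for vm, letters in plans.items()
        for letter, size_gb in letters.items()
    ]

for vm, letter, size_gb in vhds_to_create(vm_plans):
    print(f"{vm}: create {size_gb} GB VHD for drive {letter}:")
```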

In general, it's a good idea to separate the storage for your VMs from the host OS, just like we do for SQL, file, and Exchange servers. You can use separate physical disks for the host OS, or a separate partition of, say, 30-40 GB.

As for performance, you have a couple of options. Any idea how much disk space you need? As a "wild idea", put the host on a RAID 1 or RAID 10 along with the low-usage VMs, and put SharePoint and Exchange on a RAID 10 made of SSDs. That would be really fast, and could actually be cheaper than buying a whole bunch of fast SAS drives, depending on the capacity you need.

The good news is that the Exchange team has put a lot of effort into reducing the I/O requirements for Exchange 2010, such that a single 7.2K SATA drive can deliver all of the IOPS you need for your Exchange environment. Exchange is no longer the IOPS hog it used to be, which just leaves you with SharePoint.

The old-school way of managing storage performance is to put everything on dedicated spindles. That leads to a lot of waste: you keep adding spindles to reach the performance you need, but you only use a fraction of the capacity, and you can't use any of the free IOPS on the other spindles because they are dedicated to something else. The more modern way to manage performance is wide striping, which spreads your data across all available drives. This gives every workload more spindles to draw on and improves overall performance. 3PAR does this to the extreme, and they just got bought by HP for 2.4 billion USD.
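To make the dedicated-spindles versus wide-striping comparison concrete, here is a hedged Python sketch. The 150 IOPS-per-drive figure and the drive counts per workload are assumed numbers for illustration, not measurements from any real array.

```python
# Sketch: compare the IOPS available to each workload when spindles are
# dedicated per workload versus wide-striped across all workloads.
# The 150 IOPS/drive figure and the drive counts are assumptions.

IOPS_PER_DRIVE = 150  # rough figure for a 10K/15K SAS spindle

dedicated = {"Exchange": 4, "SharePoint": 4, "other VMs": 2}  # drives per workload

# Dedicated: each workload is capped at the IOPS of its own spindles.
for workload, drives in dedicated.items():
    print(f"dedicated  {workload:10s}: up to {drives * IOPS_PER_DRIVE} IOPS")

# Wide striping: every workload can draw on the whole pool when it bursts
# (sustained load is still shared across workloads, of course).
total_drives = sum(dedicated.values())
pool_iops = total_drives * IOPS_PER_DRIVE
for workload in dedicated:
    print(f"striped    {workload:10s}: can burst toward {pool_iops} IOPS")
```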

If you don't want to go the SSD route, I would make a single RAID 1 or RAID 10 array with as many disks and as much capacity as you feel you need. For example, two 450 GB 15K SAS drives might be enough capacity for you, or you could look at four 10K SAS drives in RAID 10, or even six drives.
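Here is a quick back-of-the-envelope Python sketch for the array options mentioned above. The 600 GB drive size for the 10K options and the per-drive IOPS numbers are ballpark assumptions, not vendor specifications.

```python
# Sketch: usable capacity and rough IOPS for the array options mentioned above.
# Drive sizes and per-drive IOPS are ballpark assumptions, not vendor specs.

def raid_usable_gb(drive_count, drive_gb, level):
    """Usable capacity for RAID 1 or RAID 10 (both mirror half the drives)."""
    if level not in ("1", "10"):
        raise ValueError("this sketch only handles RAID 1 and RAID 10")
    return (drive_count // 2) * drive_gb

options = [
    # (description, drives, GB per drive, RAID level, assumed IOPS per drive)
    ("2 x 450 GB 15K SAS, RAID 1",  2, 450, "1",  180),
    ("4 x 600 GB 10K SAS, RAID 10", 4, 600, "10", 140),
    ("6 x 600 GB 10K SAS, RAID 10", 6, 600, "10", 140),
]

for desc, n, gb, level, iops in options:
    usable = raid_usable_gb(n, gb, level)
    # Reads scale with all spindles; write IOPS are roughly halved by mirroring.
    print(f"{desc}: ~{usable} GB usable, "
          f"~{n * iops} read IOPS, ~{n * iops // 2} write IOPS")
```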

A single quad-core processor in the host should be more than enough, but don't forget the RAM. Exchange 2010 with all roles needs 8 GB, maybe 4 GB for SharePoint, plus your other VMs; 24 GB might be a nice starting point. Once you start virtualizing, it's really easy to keep adding more VMs. Just remember that you need Windows licenses for all of the VMs. A single Enterprise license allows you to run 4 Windows VMs on the hardware (and it's cheaper than 4 Windows Standard copies), while Datacenter allows unlimited Windows Server VMs for when you really start to virtualize. It pays off at around 8 VMs, as I recall.
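The Enterprise versus Datacenter break-even is simple arithmetic, sketched below in Python. The prices and the two-socket assumption are placeholders only, not actual Microsoft list prices; plug in your own quotes to find your real break-even point.

```python
# Sketch: rough break-even between Windows Server Enterprise (4 VM rights per
# license) and Datacenter (unlimited VMs, priced per CPU socket). The prices
# and socket count below are placeholder assumptions, not Microsoft pricing.

ENTERPRISE_PRICE = 2500          # per license, covers 4 Windows VMs (assumed)
DATACENTER_PRICE_PER_CPU = 2600  # per physical CPU socket (assumed)
CPU_SOCKETS = 2

def enterprise_cost(vm_count):
    licenses_needed = -(-vm_count // 4)   # ceiling division: 4 VMs per license
    return licenses_needed * ENTERPRISE_PRICE

datacenter_cost = DATACENTER_PRICE_PER_CPU * CPU_SOCKETS

for vms in range(1, 13):
    cheaper = "Datacenter" if datacenter_cost <= enterprise_cost(vms) else "Enterprise"
    print(f"{vms:2d} VMs: Enterprise ${enterprise_cost(vms)}, "
          f"Datacenter ${datacenter_cost} -> {cheaper}")
```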
 
Schnell Solutions (Systems Infrastructure Engineer) commented:

Hello,

As we know, a big consideration for virtualization is the I/O demand on the storage. It is possible to put many VMs on the same LUN, but then they share the available I/O, and one machine can hurt the performance of the other VMs. The problem is that we usually don't have the budget to use one or more LUNs per VM.

There are also ways to share disk space between more than one VM, but they are not recommended for a production environment; they are really only used for very simple labs. In those cases you create, for example, a common parent disk with Windows Server 2008 installed, and from it create a separate differencing (child) disk for a DC, another for SQL, another for Exchange, and so on. Don't consider this an option here, because of the many limitations it has and the side effects it would have on your environment.
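For a rough sense of why the lab-only approach saves space, here is a small Python sketch comparing fully independent VHDs with a shared parent plus differencing children. The sizes are made-up lab numbers, purely for illustration.

```python
# Sketch: rough disk-space comparison between fully independent VHDs and a
# shared parent with differencing (child) disks, as described above.
# All sizes are hypothetical lab numbers, not measurements.

vm_count = 4
base_os_gb = 12          # space a plain Windows Server 2008 install occupies
unique_data_gb = 5       # extra data each lab VM writes on top of the base

independent = vm_count * (base_os_gb + unique_data_gb)
differencing = base_os_gb + vm_count * unique_data_gb   # one parent + small children

print(f"independent VHDs : ~{independent} GB")
print(f"parent + children: ~{differencing} GB")
# The saving is real, but the shared parent becomes a single point of failure
# and a performance bottleneck, which is why it only suits simple labs.
```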

I recommend organizing your storage LUNs / RAID arrays around one of two possible approaches:
1. Organization by categories: distribute the different kinds of files/paths/locations by category, for example collocating all the VM files on the same array/LUN so you have a kind of standard. The problem with this configuration is that it usually doesn't match the actual performance distribution, but it is easier to administer because it is simpler and more organized.

2. Organization by performance distribution: create as many LUNs/arrays as you can (in your case it looks like two) and distribute the different elements across them in an organized way. Then use the disk-related performance counters to measure the load differences between your LUNs/arrays, and move elements from the busier LUN/array to the one that is less stressed (see the sketch after this list). In this way you distribute the load by performance and get more out of your hardware without spending additional money and resources. The drawback of this configuration is that you end up with folders/containers on different arrays holding the same kind of files (for example, the VM hard disk files).
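As a small illustration of the second approach, here is a Python sketch that picks the least stressed array as the migration target from counter samples you have already collected (for example, "Avg. Disk Queue Length" sampled over a busy period). The array names and sample values are made-up examples.

```python
# Sketch: pick the least stressed array as the migration target, based on
# disk performance counter samples collected earlier (e.g. average disk
# queue length during busy hours). Names and values are made-up examples.

samples = {
    "Array1 (VM store A)": [2.4, 3.1, 2.8, 3.5],
    "Array2 (VM store B)": [0.6, 0.9, 0.7, 1.1],
}

def average(values):
    return sum(values) / len(values)

least_stressed = min(samples, key=lambda name: average(samples[name]))

for name, values in samples.items():
    print(f"{name}: avg queue length {average(values):.2f}")
print(f"Move the next busy VM (or its disks) to: {least_stressed}")
```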


Another general recommendation: use FIXED-size disks for your VMs instead of dynamic disks if possible. Dynamic disks have lower performance than fixed disks because they need to expand as they grow, which consumes resources and tends to fragment the files on the host. The advantage of dynamic disks is that they are easier to transport because they are usually small and use less space. But always try to use fixed-size disks or pass-through disks.



 
zefon (author) commented:
Awarding points to both as both of you gave me some great info. One has more points than the other because the info was more useful for me in my specific situation. Thanks.