Paul Cahoon

asked:

Server HD Read Write Times

I am truly at a loss and hope that someone can explain this problem to me. I have a SuperMicro server with four AMD Opteron 12-core processors and 64 GB of DDR3 ECC memory, running Windows Server 2012 R2.

I am running Hyper-V with three Windows Server 2012 VMs and one Linux proxy gateway. I also typically have a Windows 7 VM and a Windows XP VM running on this host, and have had two or three other Windows VMs running at various times with no problem.

Just this morning I had an issue with the Linux gateway not running correctly. Eventually (with some help) I found out that the HD access speeds were dropping from the 500-600 MB/s range to the 10-30 MB/s range. I discovered this using the dd command from the Linux command line. I then began testing the HD speeds on the host using CrystalDiskMark and found I was only getting somewhere in the range of 40-120 MB/s on the physical hard drives from the host OS. Keep in mind that I am using standard 7200 RPM SATA drives.
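As an aside for anyone reproducing these numbers: what dd reports depends heavily on whether the Linux page cache is involved. A minimal sketch, assuming GNU coreutils dd and a hypothetical test-file path:

```shell
# Sequential-write test with dd (the file path here is hypothetical;
# point it at the disk you actually want to measure).
# conv=fdatasync forces a flush before dd prints its rate, so the figure
# reflects the disk rather than the page cache; without it (or oflag=direct),
# dd can report cached speeds far above what a 7200 RPM spindle can sustain.
TESTFILE=/tmp/dd_speed_test.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

A guest that appears to benchmark faster than its host is often measuring a cache at one layer or the other, so comparing a cached run against a flushed run can be revealing.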

What I don't understand is how I could be achieving better speeds on the VM than the host OS can get.

Also, I don't understand what could be causing the rate drop in the VM.  The one thing I discovered is that the Linux VM seemed to hold its higher speeds until one of the Windows Server VMs was turned on.

I also purchased a new SATA drive, installed it, and found I was getting about 50-75% better transfer rates on it using CrystalDiskMark from the host. I honestly expected a much bigger increase than that.

After the single Windows Server VM was completely started up, I reran the tests on the host drive that contains both VMs and got a range of 90-120 MB/s. I then ran a test on the Linux VHD using the dd command and it was now back up in the 400-500 MB/s range. I also ran the same CrystalDiskMark test from inside the Windows Server VM and was getting the 130-150 MB/s range.

I'm just looking for some kind of explanation for what might be going on here.  I have been running VMs on these types of HDs for years and have never had these kinds of issues.  Could there be something else going on here?
pgm554

What's the RAID controller?
If you don't have one that does caching, performance in the VM world will suck.
Paul Cahoon (ASKER)

Here is my server: http://www.supermicro.com/Aplus/system/1U/1042/AS-1042G-TF.cfm
It has the AMD SP5100 SATA controller in it.

What I don't understand is if this is the problem, why is it waiting until now to surface?
A lot of factors could be causing the issues, including VM configuration. Are you adding SCSI devices or SATA devices for the Hyper-V virtual machines? Also look at the processor configuration of the virtuals; Dell wrote an article a few years ago about CPU configuration and the impact of adding more vCPUs, and then memory config, total number of VMs, etc. Make sure your RAID drivers are up to date, and check whether you are using thin- or thick-provisioned drives. When troubleshooting this kind of issue on a hypervisor, always go back to the basics.

Not a huge fan of Hyper-V due to the heaviness of the Windows operating system; VMware just seems to perform better overall.
Let me guess: software-based RAID 5, though the page says RAID 1 for that chipset's RAID.

A RAID 1 pair of SATA disks is limited by single-disk throughput, so probably ~60 MB/s at best, and then maybe around 200-250 IOPS.
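For context, a rough back-of-the-envelope sketch of where figures in that range come from. The service-time numbers below are assumed typical values for a 7200 RPM SATA drive, not measurements from this server:

```shell
# Rough random-IOPS estimate for a 7200 RPM SATA mirror.
# Assumptions: ~8.5 ms average seek, ~4.2 ms average rotational latency
# (half of one revolution at 7200 RPM), so ~12.7 ms per random I/O.
awk 'BEGIN {
  svc_ms   = 8.5 + 4.2                    # assumed per-I/O service time, ms
  per_disk = 1000 / svc_ms                # random IOPS per spindle
  printf "per-disk:    ~%.0f IOPS\n", per_disk
  printf "RAID1 read:  ~%.0f IOPS (reads can be split across both disks)\n", 2 * per_disk
  printf "RAID1 write: ~%.0f IOPS (every write goes to both disks)\n", per_disk
}'
```

Exact numbers vary by drive; the point is that a spindle pair tops out in the low hundreds of IOPS, far below what several VMs can demand at once.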

That's not a good setup for a virtualization host.
Get a real hardware RAID controller with battery backed or SSD cache.

Given the choice of a nice caching HW RAID controller or the two extra CPUs, I would go RAID controller in a minute.

https://www.adaptec.com/en-us/products/all/
My problem is that I don't have a spare expansion slot to put a RAID controller card in. If I were to find a workaround for that and installed a good RAID controller, would I need SAS drives, or would standard SATA be sufficient to give me the performance I need?

I am not currently running any RAID, so wouldn't setting up RAID on my existing controller give me some improvement?

I'm also considering using the Storage Spaces feature within Server 2012 to store my VMs on.  I know this may not be ideal but on a limited budget it should give me some improvement over what I am currently running which is just straight SATA drives with no redundancy or I/O sharing.

Any thoughts?
Storage Spaces is not such a good idea in a standalone host setting. A hardware RAID controller with 1GB of NV/Flash backed cache is best.

You could then run a RAID 5 setup on the three drives and eke out a bit of performance.

The process would be to back up the VMs and down the server. Then, after installing the RAID controller, set up the host OS on logical disk 0 (75 GB), leaving the balance for the guest VHDX and configuration files.

See my EE article, Some Hyper-V Hardware and Software Best Practices.

The Intel Server Systems we work with have an I/O module that we can plug directly into the mainboard for RAID. Does this one support something like that? It would not take up a slot.
RAID 10 is your best performer for writes; RAID 5/6 will give you your best performance for reads. I like RAID 5 setups for cost, but these days everyone seems to use RAID 10 as the density of the new disks is so high. You can also purchase a Synology or QNAP (NAS device) and use iSCSI connections for another datastore, increasing throughput by distribution, though you are limited by bandwidth. With no RAID controller I might go that route; I use iSCSI for all my datastore connections and they perform fine with 50 users in a large-format printing environment.
We only run with RAID 6 with hardware RAID and 1GB flash backed cache. Write-Back is enabled by default.

In our experience, with 8 or more 10K SAS spindles the RAID 6 array performs virtually on par with RAID 10, without the 50% storage cost.

Plus, in a RAID 6 setting we can stand to lose any two disks. That is not guaranteed with RAID 10, depending on how things go. We lost a server a while back that had a RAID 10 setup when the drive that died had its mirror partner die about 3-5 minutes into the rebuild. :S We don't do RAID 10 anymore.
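To put the storage-cost point in numbers, a quick sketch with an assumed eight-drive array (not this server's actual configuration):

```shell
# Usable-capacity comparison for an assumed 8 x 1 TB array.
awk 'BEGIN {
  n = 8; size_tb = 1
  printf "RAID 10 usable: %d TB (half the raw capacity goes to mirrors)\n", n * size_tb / 2
  printf "RAID 6 usable:  %d TB (two drives worth of parity)\n", (n - 2) * size_tb
}'
```

The gap widens as the array grows: RAID 10 always gives up half the raw capacity, while RAID 6's two-drive parity overhead is fixed regardless of spindle count.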
It just seems like Storage Spaces is a good alternative to RAID, especially in a low-demand environment such as mine. I know that goes against what most administrators have done for so long. On the other hand, doesn't the drastically lower cost and simplicity of recovery have some merit? I have personally tested pulling a single drive from a mirrored storage pool, connecting it to a laptop, and being able to access everything on it.
Having spent thousands on clean-room recovery from failed RAID scenarios, this seems very appealing. Am I missing something?
ASKER CERTIFIED SOLUTION
Philip Elder