
8 drive enclosure RAID 5 ESXi 5.5 datastore poor performance

JoeBarbone asked
Last Modified: 2016-01-25
Hello Experts,
This kind of took me by surprise. I am running ESXi 5.5 on the following:

ASUS CM6870
i7 - 3770
32GB RAM
2TB Hitachi 3.5" drive
1TB Seagate 3.5" drive
500GB Seagate 3.5" drive

ESXi 5.5 runs pretty well considering my lack of spindles; I only run into issues when I have a handful or more VMs running from the same drive. I thought more spindles would make things run better overall. I looked into a two-drive Synology or another NAS-type device, but figured local storage would perform much better than storage connected over the network (even at gigabit).

I got a good deal on eight 2.5" Seagate Thin 500GB 7,200 RPM SATA III drives and purchased an ICY Dock 8-bay SATA enclosure. I had an IBM ServeRAID MR10i card hanging around and connected the 8 drives to it. I was hoping to use a RAID 10 config, but the card only offered RAID 0, 1, 5, and 6, so I chose RAID 5, and ESXi saw the datastore immediately.

I used all 3.x TB available for the datastore and proceeded to create a Server 2012 R2 VM on the new 8-drive datastore, using an .ISO file from the Hitachi drive loaded into the "CD drive" of the new VM. Much to my dismay, it performed TERRIBLY! It was much faster using the same configuration on individual drives with no RAID. I thought more spindles would make a HUGE difference in performance, but unfortunately it's terrible. I'm sure there is some type of configuration I'm missing. The RAID card is only SATA II (3 Gbps), but in reality most setups don't realize a true 6 Gbps unless they are using SSDs anyway.

The RAID card does have a battery backup, but the battery appears to be dead, so I am running the controller in write-through mode (no write-back caching). I know this will have an effect on performance, but good grief! I can't imagine it being that severe.

On a single drive, when I load a new OS it quickly runs through the steps ("copying files", "setting up Windows", etc.), but this time it took what seemed like forever. Normally I can set up a server, log in for the first time, install VMware Tools, change the IP and name, and reboot in the time it took this install to get halfway through copying files (the second step).

Does anyone have an idea? I specifically got 7,200 RPM, 6 Gbps drives thinking this would be the best scenario short of SAS, but man, is it awful. I didn't expect them to be as fast as SAS drives, but I absolutely expected better performance than a single drive. Any ideas would be greatly appreciated.

I'm still looking into the card to figure out if there is a firmware update, but for those keeping score at home, the RAID card is an IBM ServeRAID MR10i running firmware v1.40.262.1180.

Thanks in advance!

One more thing: this is just a machine I play with in my home lab. There are no "production" devices or VMs on it. I use it to learn VMware and test different server scenarios, etc.

Thanks again!

Joe

Andrew Hancock (VMware vExpert PRO / EE Fellow), VMware and Virtualization Consultant
CERTIFIED EXPERT
Fellow
Expert of the Year 2017

Commented:
Cache makes a huge difference!

and once you've enabled it, configure the cache as 75% write, 25% read.
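To show why (rough, assumed numbers, not measurements from your controller or drives): with write-through, a small random write to RAID 5 has to wait for the read-old-data/read-old-parity pass and then the write-data/write-parity pass before it is acknowledged, so each write costs roughly two disk round-trips no matter how many spindles are in the array. With write-back, the controller acknowledges from cache and destages later. A quick Python sketch of the queue-depth-1 case:

```python
# Back-of-the-envelope, queue-depth-1 small-write latency.
# Assumed figures (not measured on the MR10i or these drives):
# ~8 ms per small random I/O on a 7,200 RPM SATA drive,
# ~0.1 ms to acknowledge a write from controller cache.

DISK_MS = 8.0    # assumed service time for one small random disk I/O
CACHE_MS = 0.1   # assumed write-back cache acknowledgement time

# Single drive: one disk I/O per write.
single_drive_ms = DISK_MS

# RAID 5 write-through: read old data + old parity (in parallel),
# then write new data + new parity (in parallel) -> two serialized
# disk passes before the guest sees the write complete.
raid5_write_through_ms = 2 * DISK_MS

# RAID 5 write-back: acknowledged from cache, destaged in the background.
raid5_write_back_ms = CACHE_MS

for label, ms in [("single drive", single_drive_ms),
                  ("RAID 5 write-through", raid5_write_through_ms),
                  ("RAID 5 write-back", raid5_write_back_ms)]:
    print(f"{label:>22}: ~{ms:.1f} ms per small write")
```

That is why an OS install, which is basically a single stream of small writes, can feel slower on the 8-drive RAID 5 set than on one plain SATA disk.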

Author

Commented:
Thank you all for the comments.

I understand that it will be slower with no battery, but this is downright unusable. I don't think the lack of a battery is causing this much of an issue.

Wouldn't RAID 6 be slower than RAID 5 due to the additional parity?

I'll try blowing it all away and using RAID 0 to see if performance is better. If not, I'll look into flashing the card for RAID 10 (which is what I wanted originally) or just bite the bullet and get an M1015; from what I've read, they are quite a bit more flexible.

Thanks again!
Andrew Hancock (VMware vExpert PRO / EE Fellow), VMware and Virtualization Consultant
CERTIFIED EXPERT
Fellow
Expert of the Year 2017

Commented:
"I understand that it will be slower with no battery, but this is downright unusable. I don't think the lack of a battery is causing this much of an issue."

YES!

RAID 5 versus RAID 6 is marginal performance-wise, but RAID 5 only gives you one disk of redundancy, and that's all!
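Rough numbers to illustrate (the per-spindle figure is an assumption, not a measurement from your drives): assuming ~80 random-write IOPS per 7,200 RPM SATA spindle and the usual write penalties (RAID 10 = 2, RAID 5 = 4, RAID 6 = 6), a quick sketch:

```python
# Rough aggregate random-write IOPS for the 8-drive set.
# Assumptions: ~80 IOPS per 7,200 RPM SATA spindle, standard write
# penalties of 2 (RAID 10), 4 (RAID 5) and 6 (RAID 6). Illustrative only.

SPINDLE_IOPS = 80   # assumed random IOPS per drive
DRIVES = 8          # drives in the enclosure

WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

for level, penalty in WRITE_PENALTY.items():
    iops = DRIVES * SPINDLE_IOPS / penalty
    print(f"{level:>7}: ~{iops:.0f} random-write IOPS")

print(f" single: ~{SPINDLE_IOPS} random-write IOPS")
```

Either way, without a working battery-backed write cache every one of those writes also pays the full parity penalty in latency, which matters far more here than RAID 5 versus RAID 6.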

Here you go, especially for you. I remember this question on EE from a few years ago...

https://www.experts-exchange.com/questions/27269019/Why-slow-disk-to-disk-copy-in-VMWare-ESXi-server-4-1.html#a36417018

Oh My God

What a difference!

Both copies of 12 GB now down to 3 mins (previously 18 and 25 mins).
And that's from NFS to local datastore.

Local to Local - just 2.5 minutes. Amazing.

Thanks Hancoccka

Read the original question, because he also did not have a battery-backed write cache fitted on his controller...

Author

Commented:
Thank you everyone for your input. I cannot test the write cache without a battery, and if I'm going to spend money on something, I will go ahead and buy an M1015 and go from there.

Thanks again!