• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1548

VMware Degraded I/O with degraded virtual disk

We have ESXi 5.0 running on a Dell PowerEdge R510 host with a PERC 6/i RAID controller.  Eight physical disks are configured as two virtual disks, one RAID 10 and one RAID 5; incidentally, all of the drives are 500 GB 7.2K SAS.  ESXi itself is installed on a flash drive.
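For reference, the controller and virtual disk layout can be confirmed from the OMSA command line; this is a rough sketch, with the syntax from memory and controller=0 as a placeholder for whatever ID omreport actually lists:

    omreport storage controller
        (lists the PERC controllers and their IDs)
    omreport storage vdisk controller=0
        (lists the RAID 10 and RAID 5 virtual disks and their current state)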

We had a drive go into predicted failure a couple of weeks ago.  Initially there was not much impact at all, but the drive seems to have deteriorated further, and although it still has not failed outright, disk I/O for the entire server has slowed to a crawl.  We have a web server VM hosting a small website with an instance of SQL Express, and the website was timing out on most database connection attempts.  This VM is on the healthy RAID 10 VD.

The question is: why would VMs on the healthy RAID 10 virtual disk be impacted by the degraded state of the other (RAID 5) virtual disk?

In doing a little research, I read that if a "predicted failure" drive has a significant number of bad blocks, I/O performance can degrade while those blocks are marked bad.  So, we "offlined" the disk in question (reluctantly, knowing the risks) thinking that the drive was pretty much failed anyway.
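For what it's worth, the offline can be done from the OMSA CLI; a rough sketch, syntax from memory, with the controller and pdisk IDs as placeholders:

    omreport storage pdisk controller=0
        (shows each physical disk's state and whether the Failure Predicted flag is set)
    omconfig storage pdisk action=offline controller=0 pdisk=0:0:3
        (forces the suspect disk offline; 0:0:3 is a placeholder disk ID)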

That was more than 12 hours ago, and I/O on the entire server is still abysmal.

We have a replacement drive scheduled for delivery today, but I'm wondering if we need to be prepared for further corrective action.  Is this expected behavior or does it indicate further issues?  

We have Dell OMSA installed within ESXi and no other trouble is reported by the system.
Asked by: gatorIT

1 Solution

Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
We've seen this when RAID controllers constantly try to read and write bad blocks.

It seems to take away overall performance from all of the good storage as well.

Get the disk replaced ASAP.  When we see a disk go into predicted fail, we escalate it to Dell/HP the very same day for a swap-out within 4 hours.
 
Nick Rhode, IT Director, commented:
Although they are different RAID configs, they are run from the same controller.  Regardless of the defective disk (get it replaced right away anyway), you will probably still see the I/O problem, because I believe the PERC 6/i controller does not have a write cache, so it reads and writes at the same time.  That causes a performance hit which shows up inside the VMs as what seems like a short delay.  I recommend the H710 or higher PERC, which has a write cache, to resolve that issue.  After replacing the drive, give Dell a call; they will most likely recommend a better RAID controller to resolve those I/O issues.
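If you want to confirm what cache policy the virtual disks are actually running with, OMSA's CLI can report and change it; a rough sketch, with the syntax from memory (it may differ by OMSA version) and controller=0 / vdisk=1 as placeholder IDs:

    omreport storage vdisk controller=0 vdisk=1
        (reports the vdisk state plus its read, write, and cache policies)
    omconfig storage vdisk action=changepolicy controller=0 vdisk=1 writepolicy=wb
        (switches the vdisk to write-back, assuming the controller battery is healthy)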
 
gatorIT (Author) commented:
The PERC 6/i has a write-back cache and battery.  It's the SAS 6/iR that does not.

The drive has been replaced.  I think this will be the last RAID 5 array we ever use: 72 hours in, the rebuild is still only at 75%.  The extra bit of storage from RAID 5 just isn't worth the performance hit and the rebuild time on a (relatively) small 1.5 TB array.
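For anyone else watching a rebuild crawl along, OMSA can report the progress and let you trade foreground I/O for rebuild speed; again a rough sketch, with syntax from memory and placeholder IDs:

    omreport storage pdisk controller=0
        (the rebuilding disk should show a state of Rebuilding with a progress percentage)
    omconfig storage controller action=setrebuildrate controller=0 rate=60
        (raises the rebuild rate; higher values finish sooner but take more I/O away from the VMs)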