Dell PowerEdge T310 + ESXi v4: Lost access to volume due to connectivity issues

We have a Dell PowerEdge T310 running ESXi v4.0 with two production VMs (one Windows Server 2003 and one Ubuntu Linux).  All of the storage for the server is local, on two 1 TB SATA drives.  It had been running flawlessly for approximately 200 days (since it was installed) but, beginning yesterday, it started randomly going offline.  In the event log for the server, I see a series of messages: "Lost access to volume <long number> (datastore1) due to connectivity issues.  Recovery attempt is in progress and outcome will be reported shortly."  All of a sudden, this error is showing up at random intervals, every few minutes.
cybertechcafeAsked:
 
cybertechcafeAuthor Commented:
Ok, the initial problem of not having connectivity to the hard drives seems to be behind us.  In the end, nothing was really done to *fix* the problem; it just started working again.  We did find that something (still trying to find out what) caused the RAID array (mirror) to degrade and, I suspect, that degraded array was a big part of the problem (the box was understandably very slow while it was attempting to rebuild the array).
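For anyone who lands here later: if Dell OpenManage Server Administrator (OMSA) is available, the array state can be checked from the command line instead of waiting for symptoms.  A sketch, assuming OMSA is installed and the PERC is controller 0 (neither of which I've verified on this box):

    # Virtual disk state on the PERC (should read Optimal; Degraded means a rebuild is due).
    omreport storage vdisk controller=0

    # Physical disk status on the same controller, to spot a failed or rebuilding member.
    omreport storage pdisk controller=0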
 
cybertechcafeAuthor Commented:
Googling now, but is there any way to get to the service console remotely (i.e., without having hands on the physical console)?
 
MrN1c3Commented:
You can't hit the service console remotely if it's running ESXi.  Do you have a DRAC card in your T310?
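That said, if you can get hands on the physical console once, ESXi 4.0 has an unsupported Tech Support Mode that can be opened up for remote SSH afterwards.  Roughly, from memory of the 4.0 procedure (unsupported by VMware, and details may vary by build):

    # At the physical console: press Alt+F1, type "unsupported" (nothing echoes),
    # then enter the root password to land in Tech Support Mode.

    # Uncomment the ssh line so dropbear is started for remote connections:
    vi /etc/inetd.conf        # delete the leading '#' on the line starting with "#ssh"

    # Restart inetd so it re-reads the config (busybox-style PID lookup):
    kill -HUP $(ps | grep '[i]netd' | awk '{print $1}')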
 
cybertechcafeAuthor Commented:
To be honest, I'm not sure.  I'm not terribly familiar with the environment (yet) and am still feeling my way around.  Looking at everything else, though, I suspect the answer is no.  If that's my only option, it looks like it's time for a site visit.
 
cybertechcafeAuthor Commented:
I believe a site visit is going to be my best option here (there are obviously a few things I need to discover about the site).  My plan at this point is the following:

- Check that the box has the latest BIOS
- Check that the firmware on the box is up to date
- Start it and see if we still see the errors (a lot of what I'm reading suggests this is either a hardware or a firmware issue; since it has been working well for so long and, to my knowledge, there have been no changes, I fear it's more likely hardware than firmware, but I'm hoping)
- If the errors are still there, head down the road below (a few quick console checks are sketched after this list)
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009557
- My only concern with the link above is that it seems very specific to shared storage and Fibre Channel, whereas this is an onboard RAID controller (Dell PERC), not a SAN or NAS device.
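Once on site and in Tech Support Mode, I'm planning a few quick storage checks.  A rough sketch (command names and log paths from memory of ESXi 4.x, so verify against the host):

    # Does the VMFS volume still answer basic queries?
    vmkfstools -P /vmfs/volumes/datastore1

    # List the SCSI devices the host currently sees (is the PERC volume present?):
    esxcfg-scsidevs -l

    # Look for the connectivity errors in the host logs:
    grep -i "lost access" /var/log/messages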
 
cybertechcafeAuthor Commented:
The drives in the server are mirrored.  We have another ESXi server available that we can use as a stand-in while this one is down.  I would like to copy the VMs from the semi-dead ESXi server to the stand-in server, but am unable to do so from the datastore browser (I keep getting I/O errors).  Is it possible for me to remove one of the drives and, using a USB drive cage or something, mount it in something like Linux and just copy the files to the other server?  Will Linux be able to see the VMFS?
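From what I've read, Linux can't mount VMFS natively, but the open-source vmfs-tools package provides a read-only FUSE driver (vmfs-fuse).  A sketch, assuming the pulled drive shows up as /dev/sdb with the VMFS partition on sdb1 (the real device name will differ, and the PERC's mirror metadata could complicate things):

    # Install the read-only VMFS driver (Debian/Ubuntu packaging).
    apt-get install vmfs-tools

    # Mount the VMFS partition read-only via FUSE.
    mkdir -p /mnt/vmfs
    vmfs-fuse /dev/sdb1 /mnt/vmfs

    # The VM folders should now be visible; copy one off, e.g.:
    cp -a /mnt/vmfs/<vm-folder> /path/to/destination/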

 
cybertechcafeAuthor Commented:
One other thing I just noticed: under Host -> Configuration -> Health and Status, there is a warning, and the status of the drive controller seems to be flapping between 'unknown' and 'normal'.
 
cybertechcafeAuthor Commented:
Also, the box has dual power supplies, and the status of both is 'unknown'.  I don't know whether 'unknown' is typical there or whether it should read 'normal'.
 
cybertechcafeAuthor Commented:
Ok, just an update.  We arrived on site to begin the [long, arduous] process of recovery and rebooted the server a couple of times in the process.  On one of these reboots, we noted that the array was in a 'resyncing' state.  We let ESXi boot and went to the Health Status and, this time, noted that the storage controller had a warning and one of the drives was in a 'rebuilding' status.  What's more, both of the VMs on the server had started and there were *no* errors.  We have shut down the VMs and are using the datastore browser to download them to another workstation (something that wasn't possible before; we kept getting I/O errors), and we are getting good throughput and no errors.  At this point, I have *no idea* what has changed on the box, but it's running very well at the moment and we are moving bits across the drive controller with no problems.
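In case the datastore browser acts up again, vmkfstools can also clone the disks host-side from Tech Support Mode.  A sketch with placeholder VM and datastore names (only datastore1 is real here):

    # Clone a virtual disk to another datastore; -i copies the descriptor
    # plus the flat extent as a consistent pair.
    # (Create the target folder on the destination datastore first.)
    vmkfstools -i /vmfs/volumes/datastore1/MyVM/MyVM.vmdk \
               /vmfs/volumes/otherdatastore/MyVM/MyVM.vmdk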