  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2487

Dell PowerEdge T310 + ESXi v4: Lost access to volume due to connectivity issues

We have a Dell PowerEdge T310 running ESXi v4.0 with two production VMs (one Windows Server 2003 and one Ubuntu Linux).  All of the storage for the server is local, on two 1 TB SATA drives.  It has been running flawlessly for approximately 200 days (since it was installed) but, beginning yesterday, it has started randomly going offline.  In the event log for the server, I see a series of messages: "Lost access to volume <long number> (datastore1) due to connectivity issues.  Recovery attempt is in progress and outcome will be reported shortly."  The error suddenly started showing up at random intervals, every few minutes.
Asked by: cybertechcafe
1 Solution
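Editor's aside: before anything else, it can help to quantify how often that message fires and for which volume. Below is a minimal Python sketch, assuming a copy of the host log has been pulled off the box (on ESXi 4.x the vmkernel messages typically land in /var/log/messages; the file name messages.txt here is an assumption).

    # Minimal sketch (Python): count "Lost access to volume" events per datastore
    # in a saved copy of the host log. The file name is an assumption.
    import re
    from collections import Counter

    pattern = re.compile(r"Lost access to volume (\S+) \(([^)]+)\)")
    hits = Counter()

    with open("messages.txt") as log:   # copy of the host log pulled off the box
        for line in log:
            match = pattern.search(line)
            if match:
                hits[match.groups()] += 1   # key is (volume UUID, datastore name)

    for (uuid, datastore), count in hits.most_common():
        print(f"{datastore} ({uuid}): {count} drops")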
 
cybertechcafeAuthor Commented:
Googling now, but is there any way to get to the service console remotely (i.e., without having to have hands on the physical console)?
 
MrN1c3Commented:
You can't hit the service console remotely if it's running ESXi.  Do you have a DRAC card on your T310?
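Editor's aside: ESXi has no full service console, but ESXi 4.x does offer a Tech Support Mode shell that can be reached over SSH once it has been enabled from the physical console (or via a DRAC). A minimal sketch of a remote check, assuming SSH is already enabled and the paramiko library is installed; the host name, credentials, and log path are placeholders:

    # Minimal sketch (Python + paramiko): pull recent "Lost access" lines from the
    # host over SSH. Host name, credentials, and log path are placeholders and
    # assume Tech Support Mode / SSH has already been enabled on the host.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("esxi-host.example.com", username="root", password="********")

    _, stdout, _ = client.exec_command(
        'grep "Lost access to volume" /var/log/messages | tail -n 20'
    )
    print(stdout.read().decode())
    client.close()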
 
cybertechcafeAuthor Commented:
To be honest, I'm not sure.  I'm not terribly familiar with the environment (yet) and am still feeling my way around.  Looking at everything else, though, I suspect that the answer is no.  If that's my only option, it looks like it's time for a site visit.
 
cybertechcafeAuthor Commented:
I believe that a site visit is going to be my best option here (there are obviously a few things that I need to discover about the site).  My plan at this point is the following:

- Check to make certain that the box has the latest BIOS
- Check to make certain that the firmware is up-to-date on the box
- Start it and see if we still see the errors (a lot of what I'm reading suggests this is either a hardware issue or a firmware issue; since it has been working well for so long and, to my knowledge, nothing has changed, I fear it's more likely hardware than firmware, but I'm hoping)
- If the errors are still there, head down the road below
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009557
- My only concern with the link above is that it seems very specific to shared storage and Fibre Channel, whereas this is an on-board RAID controller (Dell PERC), not a SAN or NAS device. (A quick way to see whether the drops are continuous or bursty is sketched below.)
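Editor's aside: one cheap check before the site visit is to pull the timestamps of the error lines and look at the gaps between them; continuous drops point more toward the controller or a dying disk, while bursts clustered around a particular window point toward load or a rebuild. A minimal sketch, with the timestamp format and log file name as assumptions:

    # Minimal sketch (Python): print the gap between successive "Lost access to
    # volume" events to see whether they are continuous or clustered.
    # The ISO-style timestamp at line start and the file name are assumptions.
    import re
    from datetime import datetime

    stamp = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})")
    events = []

    with open("messages.txt") as log:
        for line in log:
            if "Lost access to volume" in line and (m := stamp.match(line)):
                events.append(datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S"))

    for earlier, later in zip(events, events[1:]):
        gap = (later - earlier).total_seconds()
        print(f"{later}  (+{gap:.0f}s since previous drop)")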
 
cybertechcafeAuthor Commented:
The drives on the server are mirrored.  We have another ESXi server available that we can use as a stand-in while this one is down.  I would like to be able to copy the VM from the semi-dead ESXi server to the stand-in server, but am unable to do so from the datastore browser (I keep getting I/O errors).  Is it possible for me to remove one of the drives and, using a USB drive cage or something, mount it in something like Linux and just copy the files to the other server?  Will Linux be able to see the VMFS?

[Moderator note: Removed reference to illegal CD. (rindi, EE ZA Storage)]
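Editor's aside: on the "will Linux see the VMFS" question, the open-source vmfs-tools package (vmfs-fuse) can usually mount a VMFS3 volume read-only under Linux, which is enough to copy the VM folders off. A rough sketch, with the device name, mount point, and VM folder name as placeholders:

    # Minimal sketch (Python): mount the VMFS partition read-only with vmfs-fuse
    # (from the open-source vmfs-tools package) and copy a VM folder off it.
    # Device, mount point, and folder names are placeholders; run as root.
    import shutil
    import subprocess

    device = "/dev/sdb1"          # VMFS partition on the drive pulled from the T310
    mountpoint = "/mnt/vmfs"      # empty directory to mount onto
    vm_folder = "winsrv2003"      # VM directory as named in the datastore

    subprocess.run(["mkdir", "-p", mountpoint], check=True)
    subprocess.run(["vmfs-fuse", device, mountpoint], check=True)   # read-only mount
    try:
        shutil.copytree(f"{mountpoint}/{vm_folder}", f"/srv/recovered/{vm_folder}")
    finally:
        subprocess.run(["fusermount", "-u", mountpoint], check=True)

The read-only mount is the point here: it avoids writing to the surviving half of the mirror while the copy runs.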
 
cybertechcafeAuthor Commented:
One other thing I just noticed: under Host -> Configuration -> Health Status, there is a warning, and the status of the drive controller seems to be flapping (unknown / normal).
 
cybertechcafeAuthor Commented:
Also, the box has dual power supplies, and the status of both is 'unknown'.  I don't know whether 'unknown' is typical here or whether they should normally report 'normal'.
 
cybertechcafeAuthor Commented:
Ok, just an update.  We arrived on site to begin the [long, arduous] process of recovery and rebooted the server a couple of times in the process.  On one of these reboots, we noted that the array was in a 'resyncing' state.  We let ESXi boot and went to the Health Status page and, this time, noted that the storage controller had a warning and one of the drives was in a 'rebuilding' status.  What was more, both of the VMs on the server had started and there were *no* errors.  We have shut down the VMs and are using the datastore browser to download them to another workstation (something that wasn't possible before; we kept getting I/O errors), and we are getting good throughput and no errors.  At this point, I have *no idea* what has changed on the box, but it's running very well at the moment and we are moving bits across the drive controller with no problems.
 
cybertechcafeAuthor Commented:
Ok, the initial problem of losing connectivity to the hard drives seems to be behind us.  At the end of the day, nothing was really done to *fix* the problem; it just started working again.  We did find that something (still trying to find out what) caused the RAID array (mirror) to degrade, and I suspect that the degraded array was a big part of the problem (understandably, the box was very slow while it was attempting to rebuild the array).
Question has a verified solution.
