LeftHand P4300 and VMware vSphere 4 Issue

Maybe some of you guys can weigh in on this:

Environment:
LeftHand P4300 starter array with two storage nodes (lefthand1 and lefthand2) set up with two-way replication.  In the LeftHand management console, I can see lefthand1, but lefthand2 shows only its private IP and is unreachable.  Last night, around 10 CST, it appears a double drive failure occurred on lefthand2.  This is a RAID 5 array.

Three Dell R710's serve as hosts (vmhost1, vmhost2, vmhost3) and a fourth R710 serves as the vcenter server.

vSphere 4 is running on all servers in a cluster, connected to the iSCSI array via a private network (192.168.1.x).

The issue is that one of the hosts (vmhost2) can't see the array.  I'm guessing it's because of the issue with lefthand2.  I would have thought that since lefthand2 is not reachable, the array would use the replicated data on lefthand1 to keep serving storage to the hosts.  However, this does not appear to be the case.

Doing anything in vCenter takes a long time.  I've tried to rescan all storage adapters, but it is either locked or taking forever.  I don't know if I should break the 2-node storage management group and see if I can access the datastores on the good node, or what.  I've got a call into HP support and a ticket opened, but have yet to hear anything.

Anyone seen or heard of something similar before?  Any insight would be greatly appreciated.

Thanks,

Ben
bwhortonAsked:
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
what version are you running?

And get it escalated with HP, for the private fixes.
bwhortonAuthor Commented:
The Lefthand is running software version 8.1

Working to get it escalated with HP...
Glen KnightCommented:
Can you post a screenshot of the HP Console?

When you look at the physical console of both LeftHand nodes, what do you see on the screens?
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
8.1 is ancient; 9.5 is the latest, and 9.5/9.6 have private fixes from HP because of issues with VMware vSphere 4.x.
Glen KnightCommented:
Also, how have you got the nodes bonded? And presumably both NICs are connected to the network?

Are you able to ping the bond IP using the network tools in the console from the working node?
bwhortonAuthor Commented:
See attached.  If you need specific information from one or more areas, let me know.
Screen Capture 1
bwhortonAuthor Commented:
Can ping 192.168.1.101 (lefthand1) from vCenter.

Cannot ping 192.168.1.102 (lefthand2) from vCenter.


All private iSCSI traffic is connected through a Brocade FES switch.
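In case it helps anyone reproducing this later: the same reachability check can be scripted from the vCenter box so it can be re-run while the nodes are worked on.  A minimal Python sketch, assuming the node IPs above and the standard iSCSI portal port 3260 (the `portal_reachable` function is my own illustration, not anything from the CMC):

```python
import socket

def portal_reachable(ip, port=3260, timeout=2.0):
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Assumed node addresses from the private iSCSI network in this thread.
for node, ip in [("lefthand1", "192.168.1.101"), ("lefthand2", "192.168.1.102")]:
    state = "up" if portal_reachable(ip, timeout=1.0) else "DOWN"
    print("%s (%s): %s" % (node, ip, state))
```

Note this only proves TCP reachability of the portal, not that the manager on the node is healthy.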
Glen KnightCommented:
I will be in front of my LeftHand console in about half an hour.

What do you see on the physical consoles of the LeftHand boxes?
bwhortonAuthor Commented:
They are in a different location.  I'm remoted into the vCenter server where the LeftHand CMC is running.  Do I need to hook up a KVM to the LeftHand boxes and let you know what I see, or are you referring to the drives/lights on the front of the devices?  Sorry for my ignorance.
Glen KnightCommented:
Well, it's possible that if you had 2 drives "fail", the RAID controller may be paused waiting for a response.

I had this recently when it thought 3 drives had failed.  The false drive-failure issue is resolved in version 9.x.  That's not to say yours haven't actually failed, but falsely reporting drive failures is a known issue with LeftHand.

On 2 of my nodes I had around 12 disks "fail" in the space of 2 months.  As I replace 1 drive from each RAID array every 4 months, it's unlikely to be down to bad batches.  After an upgrade the issue seems to have slowed down.
bwhortonAuthor Commented:
So are you suggesting an upgrade to the software version as a first step, to see if that resolves it?
Glen KnightCommented:
Not yet.  Let's get the other node back online first.

You are going to need to get a screen attached to see what's going on.
bwhortonAuthor Commented:
There's a screen shot above, but I'm guessing you need a specific area for me to capture.  Just let me know which and I'll do that.  Thanks
Glen KnightCommented:
Sorry, I mean the physical box.  Monitor/keyboard/mouse or KVM, depending on how you are set up.
bwhortonAuthor Commented:
Working with an HP engineer on the phone now.  First obvious issue is that it appears lh2's IP settings are gone.  Will post more when I learn more.  Thanks
bwhortonAuthor Commented:
I am closing this post.  HP support determined that a drive failure, combined with a software failure (possibly firmware-related, but they couldn't be sure until the logs were reviewed), created system instability in one node.  That node also happened to run the FOM, so the group couldn't form a quorum as intended.  We removed the FOM from the management group and created a virtual manager, which restored quorum.  We also had to reseat the controller card in the failed node to get it to re-initialize, which cleared the software "hang".
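For anyone hitting this later, the quorum arithmetic explains why the group hung: managers vote, and the group stays online only while a strict majority of managers is reachable.  A toy sketch of that majority rule (the `has_quorum` function is purely illustrative):

```python
def has_quorum(reachable_managers, total_managers):
    """A management group stays online only with a strict majority of managers."""
    return reachable_managers > total_managers // 2

# Two node managers plus the FOM = 3 managers total.
# Losing the node that also hosted the FOM drops 2 of the 3 votes:
print(has_quorum(1, 3))  # the surviving node alone cannot form quorum
# After removing the FOM and starting a virtual manager on the good node,
# 2 of 3 managers are reachable again:
print(has_quorum(2, 3))  # quorum restored, group comes back online
```

This is why the FOM is normally meant to live on hardware independent of both storage nodes.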

Since HP actually provided the solution, but demazter and hanccocka were quick to respond and offer support, I will award a splitting of the points to both of you.

Thanks!

Ben

Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
no point split here, but don't worry about it!
bwhortonAuthor Commented:
hancocka, i opened a ticket with the mod when I saw that points weren't split.  They should be addressing it shortly.  Thanks!
Andrew Hancock (VMware vExpert / EE MVE^2)VMware and Virtualization ConsultantCommented:
that's jolly good of you! thanks
bwhortonAuthor Commented:
Self-supported with guidance from HP Support Engineers to diagnose and remedy the problem.