Hyper-V Cluster - Failover Corrupts VM Config File

I have a 2-node Hyper-V cluster with a quorum disk, and I have been testing various failover scenarios on it. Everything works perfectly except when I perform the following failure simulation:

1. Shut down the host that is not hosting the VM (HOST02).
2. Wait approximately 2 minutes for that host to be fully down.
3. Shut down the host that is hosting the VM (HOST01).
4. Wait for the first downed host (HOST02) to come back up.
5. Connect to the cluster and find that the first downed host is not assuming any roles. I do have preferences set for each role to prefer HOST01 when it is available, but I didn't think that would keep HOST02 from hosting the role while HOST01 was down.
6. Notice that the hosted VM is in a failed state, with event IDs 1069 and 21502 filling the log. What I have deduced is that the cluster cannot locate the VM's .xml config file at its expected location on the CSV shared between the hosts. (A quick state-check sketch follows this list.)
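
For reference, here is a minimal sketch of how the cluster state can be checked from one of the nodes at steps 5 and 6. It is Python shelling out to the FailoverClusters PowerShell module (assumed to be installed on the node); Get-ClusterGroup and Get-ClusterSharedVolume are standard cmdlets, but treat the script itself as an illustration rather than a tested tool.

    import subprocess

    def run_ps(command: str) -> str:
        """Run a PowerShell command on the local cluster node and return its output."""
        result = subprocess.run(
            ["powershell.exe", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    # List every clustered role with its owner node and state; a role stuck in a
    # Failed or Offline state after the reboots shows up here immediately.
    print(run_ps("Get-ClusterGroup | Format-Table Name, OwnerNode, State -AutoSize"))

    # The Cluster Shared Volumes are cluster resources too; if one is not Online,
    # any VM whose .xml config file lives on it will fail exactly as described above.
    print(run_ps("Get-ClusterSharedVolume | Format-Table Name, OwnerNode, State -AutoSize"))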

The .xml file does, in fact, still exist in the same location it was in before both hosts went down, yet the VM is not recoverable. Is the XML file permanently corrupt? Is this the expected behavior with Hyper-V clusters? If so, what a horrible design! I end up having to restore the XML file from a backup before the VM will start.
marrj Asked:
marrj (Author) Commented:
After doing more testing, it appears that what is really going on is that the CSV the VM resides on is not reconnecting to the last host remaining in the cluster after that host fails. This seems to be true for both hosts, regardless of the order in which I deliberately fail them. The fix is to manually take the CSV offline in the Failover Clustering MMC snap-in and then manually bring it back online; the VM will then successfully resume. So it looks like my cluster is going to require manual intervention whenever both nodes go down or restart.
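
For anyone who prefers not to click through the snap-in, the same offline/online cycle can be scripted. Below is a minimal sketch in Python driving the FailoverClusters PowerShell cmdlets; the CSV resource name "Cluster Disk 1" is only a placeholder, so substitute whatever name Get-ClusterSharedVolume reports on your cluster, and verify on your build that Stop-ClusterResource/Start-ClusterResource accept the CSV resource by name the way the snap-in does.

    import subprocess

    CSV_NAME = "Cluster Disk 1"  # placeholder; use the name reported by Get-ClusterSharedVolume

    def run_ps(command: str) -> None:
        """Run a PowerShell command on the local cluster node, raising on failure."""
        subprocess.run(
            ["powershell.exe", "-NoProfile", "-Command", command],
            check=True,
        )

    # Equivalent of taking the CSV offline in Failover Cluster Manager and then
    # bringing it back online so the VM's .xml config file is reachable again.
    run_ps(f"Stop-ClusterResource -Name '{CSV_NAME}'")
    run_ps(f"Start-ClusterResource -Name '{CSV_NAME}'")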

The reason a Commvault restore of the VM's config file brought the VM back online is that the restore automatically brings the volume online as part of the restore process. I didn't know that would happen.

So, is there any way to automate this so that my nodes don't require manual intervention after events such as a datacenter power outage? I do have DR plans to survive such an outage at another site, but I will still ultimately have to bring the cluster back up when operations resume.
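
One possible way to automate it, offered only as a sketch: register a small script to run at node startup, wait for the cluster service to form, and then bring online any CSV that did not come back on its own. Nothing below is a documented Microsoft procedure; the five-minute delay and the Start-ClusterResource call against the CSV name are assumptions to validate in a test cluster first.

    import subprocess
    import time

    def run_ps(command: str) -> str:
        """Run a PowerShell command and return its trimmed stdout, raising on failure."""
        result = subprocess.run(
            ["powershell.exe", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    def offline_csvs() -> list:
        """Return the names of Cluster Shared Volumes that are not currently Online."""
        output = run_ps(
            "Get-ClusterSharedVolume | Where-Object { $_.State -ne 'Online' } | "
            "Select-Object -ExpandProperty Name"
        )
        return [line for line in output.splitlines() if line.strip()]

    # Give the cluster service time to start and form quorum after boot
    # before intervening (5 minutes is an assumed, conservative delay).
    time.sleep(300)

    for name in offline_csvs():
        # Same action as bringing the CSV online in the Failover Clustering snap-in.
        run_ps(f"Start-ClusterResource -Name '{name}'")
        print(f"Brought CSV '{name}' online")

It could be wired up on each node with a Task Scheduler task using the "At startup" trigger, running under an account with rights to manage the cluster.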
 
marrj (Author) Commented:
No one else answered in a timely manner.