BzowK (United States of America)

asked:

Clustering Network Issue After Rebuilding Node: Node A is Reachable from Node B by Only One Pair of Interfaces

Hey Guys -

I've got an issue I've been working on for a couple of days now and need help with. Our company has a VDI cluster with a total of 5 nodes. Recently, one went down and was rebuilt. I was told that all settings were configured as they should be, and I have verified, for one, that all of the NIC settings (static IPs, options enabled, etc.) are correct and match the other hosts.

The problem is that in VMM (2008), the node is still listed as "Needs Attention." When I run a validation on the cluster, many network-related issues appear. Below are examples of the two primary ones.

Note:  Node C is the one which was rebuilt...

Error Type #1
Node C is reachable from Node B by only one pair of interfaces. It is possible
that this network path is a single point of failure for communication within the cluster. Please verify that
this single path is highly available or consider adding additional networks to the cluster.

Node D is reachable from Node C by only one pair of interfaces. It is possible
that this network path is a single point of failure for communication within the cluster. Please verify that
this single path is highly available or consider adding additional networks to the cluster.

Node C is reachable from Node D by only one pair of interfaces. It is possible
that this network path is a single point of failure for communication within the cluster. Please verify that
this single path is highly available or consider adding additional networks to the cluster.

Node C is reachable from Node E by only one pair of interfaces. It is possible
that this network path is a single point of failure for communication within the cluster. Please verify that
this single path is highly available or consider adding additional networks to the cluster.

Error Type #2
Network interfaces E - LiveMigration and C - LiveMigration are on the same cluster network, yet either address 10.50.7.23 is not reachable from 10.50.7.25 or the ping latency is greater than the maximum allowed 500 milliseconds.

Network interfaces E - LiveMigration and C - LiveMigration are on the same cluster network, yet either address 10.50.7.23 is not reachable from 10.50.7.25 or the ping latency is greater than the maximum allowed 500 milliseconds.

Network interfaces C - LiveMigration and E - LiveMigration are on the same cluster network, yet either address 10.50.7.25 is not reachable from 10.50.7.23 or the ping latency is greater than the maximum allowed 500 milliseconds.

Network interfaces C - LiveMigration and B - LiveMigration are on the same cluster network, yet either address 10.50.7.22 is not reachable from 10.50.7.23 or the ping latency is greater than the maximum allowed 500 milliseconds.

and so on... there are a total of 12 errors like the above.
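
For reference, here is a rough sketch of how the paths those errors describe can be checked from PowerShell. It assumes Windows Server 2008 R2 with the FailoverClusters module (plain 2008 would use cluster.exe network / networkinterface instead), and the 10.50.7.x addresses are simply the ones quoted in the errors:

# Sketch: compare the cluster's view of networks and interfaces across nodes,
# assuming the FailoverClusters module is available (Server 2008 R2 or later).
Import-Module FailoverClusters

# Every node should appear on the same set of cluster networks; a node that
# shows up on only one network explains the "only one pair of interfaces" error.
Get-ClusterNetwork | Format-Table Name, Role, Address, State -AutoSize
Get-ClusterNetworkInterface |
    Sort-Object Network, Node |
    Format-Table Node, Network, Name, Address, State -AutoSize

# Run from Node E: check the LiveMigration address on Node C that the
# validation report says is unreachable or slower than 500 ms.
Test-Connection -ComputerName 10.50.7.23 -Count 4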

If it helps any, I restarted two of the nodes (including the one that was rebuilt) and received an IP Address Conflict message. The error included the MAC address of the conflicting NIC. I tracked down which node that MAC belongs to and looked at its IPv4 address (IPv6 is disabled on all NICs on all nodes), and it didn't match any of the addresses on the server that threw the error - weird!
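
On the IP conflict, a quick way to hunt down the conflicting MAC is to query each node over WMI (this works on Server 2008). The MAC and node names below are placeholders - substitute the MAC from the conflict pop-up and the real node names:

# Sketch: find which adapter on which node owns a given MAC address.
$mac   = "00-15-5D-00-00-00"                          # placeholder - use the MAC from the conflict message
$nodes = "NodeA", "NodeB", "NodeC", "NodeD", "NodeE"  # placeholder node names

foreach ($node in $nodes) {
    Get-WmiObject Win32_NetworkAdapterConfiguration -ComputerName $node -Filter "IPEnabled = TRUE" |
        Where-Object { ($_.MACAddress -replace ":", "-") -eq $mac } |
        Select-Object @{n="Node";e={$node}}, Description, MACAddress, @{n="IPv4";e={$_.IPAddress -join ", "}}
}

Whichever adapter comes back is the one to compare against the address the rebuilt node is trying to claim.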

Any suggestions as to where to look or what to do?  Thanks!
ASKER CERTIFIED SOLUTION
Philip Elder (Canada)
BzowK (ASKER)

Good Morning -

Thanks for your replies & suggestions, Philip! The good news is that I did get the other node to come back up, so the cluster now has 5 of 5 nodes online and active. The issue turned out to have several causes, including that the Virtual Network name on the node didn't match the rest of the cluster, the VMM agent hadn't been reinstalled on the rebuilt node, and a couple of others.
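
For anyone who hits the same virtual network name mismatch, here is a rough sketch of how the names can be compared across hosts from the VMM 2008 PowerShell snap-in. The snap-in, cmdlet, and property names are the pre-2012 ones and worth double-checking with Get-Command / Get-Member on the VMM server; "vmmserver" is a placeholder:

# Sketch, assuming the VMM 2008 PowerShell snap-in is installed on this box.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager -ErrorAction SilentlyContinue
$vmm = Get-VMMServer -ComputerName "vmmserver"        # placeholder VMM server name

# Every host should expose the same virtual network name(s); a name that
# differs even by case or a trailing space will break HA placement.
Get-VMHost -VMMServer $vmm | ForEach-Object {
    $hostName = $_.Name
    Get-VirtualNetwork -VMHost $_ |
        Select-Object @{n="Host";e={$hostName}}, Name
} | Sort-Object Name, Host | Format-Table -AutoSize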

However, this brought up two new issues that I hope you can assist with:

Issue #1 - "Unsupported Cluster Configuration" Status

We currently have about 200-300 VMs spread across the nodes. After I got the rebuilt one back online, about 40 of them (spread across all nodes) changed their status to "Unsupported Cluster Configuration." I cannot find anything in their configurations that makes these ~40 different, as all VMs are set to use High Availability. The VMs with this status that were already running are still up and can be pinged and accessed, but I cannot do anything else with them.

Note: I did find a PowerShell script that was supposed to help identify the issue, but it failed because Get-SCVMHostCluster and other cmdlets couldn't be found, so I'm guessing it only works with VMM 2012+ (we run 2008).
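
The 2012+ cmdlets in scripts like that generally have pre-"SC" counterparts in the VMM 2008 snap-in. A hedged sketch of the same kind of check - cmdlet and property names should be confirmed with Get-Command and Get-Member on the VMM server, and "vmmserver" is a placeholder:

# Sketch using the VMM 2008-era cmdlet names (Get-VMHostCluster rather than
# Get-SCVMHostCluster, Get-VM rather than Get-SCVirtualMachine).
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager -ErrorAction SilentlyContinue
$vmm = Get-VMMServer -ComputerName "vmmserver"        # placeholder VMM server name

# The host cluster as VMM sees it.
Get-VMHostCluster -VMMServer $vmm | Format-Table Name -AutoSize

# List the VMs stuck in "Unsupported Cluster Configuration" in one place.
Get-VM -VMMServer $vmm |
    Where-Object { $_.Status -match "Unsupported" } |
    Sort-Object HostName, Name |
    Select-Object Name, Status, HostName    # property names per the 2008 object model - verify with Get-Member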

Issue #2 - 6 Bad VMs

When I brought the rebuilt node back online, VMM showed that it had 6 VMs that were missing or in a bad state. Some of the names it listed belong to VMs that had previously been migrated to other nodes and are alive and working there, while others no longer exist anywhere. How can these be resolved - especially without affecting the legitimate, working VMs with the same names on the other nodes?
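
To frame the question, here is a rough sketch of how those stale entries could be listed (and, very carefully, removed) by scoping to the rebuilt node rather than matching on VM name, so the healthy copies on the other nodes are never selected. The node and VMM server names are placeholders, and the removal line is deliberately commented out:

# Sketch, assuming the VMM 2008 snap-in; scope strictly to the rebuilt node.
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager -ErrorAction SilentlyContinue
$vmm = Get-VMMServer -ComputerName "vmmserver"        # placeholder

# "NodeC" is a placeholder for the rebuilt host's name as VMM records it.
$stale = Get-VM -VMMServer $vmm |
    Where-Object { $_.HostName -eq "NodeC" -and $_.Status -match "Missing|Failed" }

# Review the list first - nothing is removed at this point.
$stale | Select-Object Name, Status, HostName

# Only after confirming each entry is genuinely orphaned (and verifying the
# -Force behaviour of Remove-VM on your VMM version):
# $stale | ForEach-Object { Remove-VM -VM $_ -Force }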

Thanks Guys - I appreciate your help!
This question has been classified as abandoned and is closed as part of the Cleanup Program. See the recommendation for more details.