Win 2008 R2 cluster quorum disk 'disappeared' temporarily

We had the cluster service 'lost' for several minutes yesterday
(and this is a repeat incident): refer to the attached screenshots.

Q1:
What could be the cause of the issue?

Q2:
Any resolution or workaround for this issue?

Q3:
Currently the heartbeat goes through the production network.
Will setting up a dedicated heartbeat (i.e. a direct crossover
cable between the two member servers of the cluster)
resolve this?
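(To be concrete, my understanding is that once a dedicated network exists, it would be
restricted to cluster traffic along these lines - a sketch only, where "Heartbeat" is a
placeholder name and the Role values come from the FailoverClusters PowerShell module:)

  Import-Module FailoverClusters

  # List the networks the cluster currently knows about and their roles
  # (Role 0 = not used by the cluster, 1 = cluster/heartbeat traffic only, 3 = cluster and client)
  Get-ClusterNetwork | Format-Table Name, Role, Address

  # Restrict the new dedicated network to internal cluster traffic only
  # ("Heartbeat" is a placeholder for whatever the new network is named)
  (Get-ClusterNetwork "Heartbeat").Role = 1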

Q4:
Or can we tune/tweak the heartbeat interval and the number
of missed heartbeats to make the cluster more resilient,
i.e. to address this issue?  If so, can you point me to a link or
provide instructions on how to tune these?
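(If it helps to be concrete, I gather the tuning would look roughly like the sketch below -
the values are purely illustrative, not recommendations, and assume the FailoverClusters
PowerShell module or the legacy cluster.exe on 2008 R2:)

  Import-Module FailoverClusters

  # Current heartbeat interval (ms) and missed-heartbeat threshold for same-subnet nodes
  Get-Cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold

  # Illustrative only: widen the window from the defaults (1000 ms x 5 missed heartbeats)
  (Get-Cluster).SameSubnetDelay = 2000
  (Get-Cluster).SameSubnetThreshold = 10

  # The legacy cluster.exe form would presumably be:
  #   cluster /prop SameSubnetDelay=2000
  #   cluster /prop SameSubnetThreshold=10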

Q5:
Can this be due to differences in firmware (UEFI) versions
between the two member servers?  IBM found the versions to be
different but can't pinpoint whether this issue is due to the
difference in firmware versions between the member nodes.

I prefer not to attach the Event Viewer logs here as they
contain sensitive info.
1.jpg
2.jpg
3.jpg
4.jpg
5.jpg
6.jpg
10.jpg
ClusharedVols.jpg
Clustorage.jpg
sunhux Asked:
Seth Simmons, Sr. Systems Administrator, Commented:
What kind of shared storage are you using?  Any issues on that side, either with the storage itself or with connectivity to it?
sunhux, Author, Commented:
It's just plain SAN LUNs presented to both servers.
We have a similar setup at a few other sites' clusters but
did not encounter this issue there.

The issue is not persistent, i.e. it happens once every couple
of weeks or months.
sunhux, Author, Commented:
I've run "cluster /prop" and the output is below.  Compared with all our other
clusters, only one line (EnableSharedVolumes) differs from the other
clusters (except for one DR cluster which is also set to 1 but has never had
this issue):

Attached are the cluster.log files from both nodes of the cluster,
which I've sanitized: look for the 2003/12/16-11:50 to 2003/12/16-12:00
dates & timings.

Listing properties for 'NNXXXPP1VIR03':

T  Cluster              Name                           Value
-- -------------------- ------------------------------ -----------------------
DR NNXXXPP1VIR03        FixQuorum                      0 (0x0)
DR NNXXXPP1VIR03        IgnorePersistentStateOnStartup 0 (0x0)
SR NNXXXPP1VIR03        SharedVolumesRoot              C:\ClusterStorage
D  NNXXXPP1VIR03        AddEvictDelay                  60 (0x3c)
D  NNXXXPP1VIR03        BackupInProgress               0 (0x0)
D  NNXXXPP1VIR03        ClusSvcHangTimeout             60 (0x3c)
D  NNXXXPP1VIR03        ClusSvcRegroupOpeningTimeout   5 (0x5)
D  NNXXXPP1VIR03        ClusSvcRegroupPruningTimeout   5 (0x5)
D  NNXXXPP1VIR03        ClusSvcRegroupStageTimeout     7 (0x7)
D  NNXXXPP1VIR03        ClusSvcRegroupTickInMilliseconds 300 (0x12c)
D  NNXXXPP1VIR03        ClusterGroupWaitDelay          30 (0x1e)
D  NNXXXPP1VIR03        ClusterLogLevel                3 (0x3)
D  NNXXXPP1VIR03        ClusterLogSize                 100 (0x64)
D  NNXXXPP1VIR03        CrossSubnetDelay               1000 (0x3e8)
D  NNXXXPP1VIR03        CrossSubnetThreshold           5 (0x5)
D  NNXXXPP1VIR03        DefaultNetworkRole             2 (0x2)
S  NNXXXPP1VIR03        Description
D  NNXXXPP1VIR03        EnableSharedVolumes            1 (0x1) <==
D  NNXXXPP1VIR03        HangRecoveryAction             3 (0x3)
D  NNXXXPP1VIR03        LogResourceControls            0 (0x0)
D  NNXXXPP1VIR03        PlumbAllCrossSubnetRoutes      0 (0x0)
D  NNXXXPP1VIR03        QuorumArbitrationTimeMax       20 (0x14)
D  NNXXXPP1VIR03        RequestReplyTimeout            60 (0x3c)
D  NNXXXPP1VIR03        RootMemoryReserved             4294967295 (0xffffffff)
D  NNXXXPP1VIR03        SameSubnetDelay                1000 (0x3e8)
D  NNXXXPP1VIR03        SameSubnetThreshold            5 (0x5)
B  NNXXXPP1VIR03        Security Descriptor            01 00 04 80 ... (280 bytes)
D  NNXXXPP1VIR03        SecurityLevel                  1 (0x1)
M  NNXXXPP1VIR03        SharedVolumeCompatibleFilters
M  NNXXXPP1VIR03        SharedVolumeIncompatibleFilters
D  NNXXXPP1VIR03        ShutdownTimeoutInMinutes       20 (0x14)
D  NNXXXPP1VIR03        WitnessDatabaseWriteTimeout    300 (0x12c)
D  NNXXXPP1VIR03        WitnessRestartInterval         15 (0xf)
DB1clusani.txt
DB2clusani.txt
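(For anyone wanting to regenerate the same window on their own nodes, something along these
lines should work, assuming Get-ClusterLog and its -TimeSpan parameter are available in the
2008 R2 FailoverClusters module:)

  Import-Module FailoverClusters

  # Regenerate cluster.log on every node, covering roughly the last hour,
  # and collect the files into one folder for review/sanitising
  Get-ClusterLog -TimeSpan 60 -Destination C:\Temp\ClusterLogs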
Netflo Commented:
I would strongly recommend getting your segmented heartbeat network up and running. Also, when you get downtime, validate your cluster, as that may highlight issues with your configuration.

I'm assuming the basics have been covered, such as rebooting the switch and swapping out network cables.

Also, are your NICs configured to go to sleep?  This is a default setting under the NIC power-management settings; turn that off.
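A rough sketch of both checks (Test-Cluster is from the FailoverClusters module,
MSPower_DeviceEnable is the WMI view of the Device Manager power-management checkbox,
and NODE1/NODE2 are placeholders for your node names):

  Import-Module FailoverClusters

  # Run cluster validation against both nodes (best done in a maintenance window,
  # since some storage tests can briefly take disks offline)
  Test-Cluster -Node NODE1, NODE2

  # List devices Windows is allowed to power down; pick out the NIC entries by InstanceName
  # (Enable = True corresponds to the "allow the computer to turn off this device
  #  to save power" checkbox in Device Manager)
  Get-WmiObject -Namespace root\wmi -Class MSPower_DeviceEnable |
      Select-Object InstanceName, Enable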
sunhux, Author, Commented:
> are your NICs configured to go to sleep - this is a default setting under the
> NIC power-management settings; turn that off.
No, certainly not, or else I would have got Tivoli alerts for ping
timeouts to both DB servers' teamed NIC IP addresses.

> validate your cluster as that may highlight some issues
We have run this and got only a couple of warnings, with the rest
successful.  I can post it this Mon/Tue after sanitizing the
output.  It's the same result as when we first installed the
cluster, and if any of the errors/warnings were serious,
Windows would not have allowed the cluster installation
in the first place, right?
Netflo Commented:
On many occasions I've seen IT admins tweak certain areas, rendering the cluster invalid, and then wonder why features such as Live Migration don't work.

Have all components in your cluster been rebooted?

Are the cluster hosts fully up to date?  If yes, you may want to take a look at the following hotfixes: http://support.microsoft.com/kb/2545685
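A quick way to check whether those fixes are already on each host (the KB IDs below are
placeholders only; substitute the ones actually listed in the article):

  # Placeholders - replace with the hotfix IDs listed in KB2545685
  $recommended = "KB9999991", "KB9999992"

  foreach ($kb in $recommended) {
      if (Get-HotFix -Id $kb -ErrorAction SilentlyContinue) {
          "$kb is installed"
      } else {
          "$kb is MISSING"
      }
  }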
sunhux, Author, Commented:
Yes, all components have been rebooted.

Well, I'm not sure it's really necessary to apply the hotfixes, as the
other sites' clusters use exactly the same Win 2008 R2 at the same
patch level.

The only difference is the hardware type and firmware revision:
this site uses a slightly newer version of the IBM x3850 hardware
compared to the other sites, which are stable.

Is there anything we can analyse from the logs to pinpoint
the root cause, so that we can come up with a sure-fire
resolution?
Netflo Commented:
If you open a support case with Microsoft Support, they will tell you that it's recommended to apply the hotfixes - I've gone through the same issue recently.

The only advice I can offer is to get your hardware and software up to date, cover the basics in terms of changing cables etc., and put in a switch to isolate the heartbeat network.

Lastly, disable 'large send offload' on your NICs - does that improve your situation?
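On 2008 R2 the LSO setting lives on each adapter's Advanced tab in Device Manager (or in the
vendor's NIC utility); there is no built-in cmdlet for it on that OS. The standardized registry
keywords behind that tab can at least be inspected roughly as below - a sketch only, since
vendor drivers vary in which keywords they expose, so treat Device Manager as the authoritative
place to change anything:

  # Network adapter device class; each numbered subkey is one adapter instance
  $nicClass = "HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}"

  Get-ChildItem $nicClass -ErrorAction SilentlyContinue | ForEach-Object {
      $p = Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue
      if ($p.DriverDesc) {
          # "1" means the offload is enabled, "0" disabled (where the driver exposes the keyword)
          "{0}: *LsoV2IPv4={1}  *LsoV2IPv6={2}" -f $p.DriverDesc, $p.'*LsoV2IPv4', $p.'*LsoV2IPv6'
      }
  }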