  • Status: Solved
  • Priority: Medium
  • Security: Public

RH5 cluster problem

I have an RH5 cluster on an ESX server that recently stopped functioning.
Both nodes are up, but the cluster service is stopped and I am unable to enable or start it on either node:
[Attachment: Luci — cluster]
It might be related that during boot on the first node I start (either of them), the "starting fencing" step of the "starting cluster" phase takes more than 5 minutes, while on the second node "starting fencing" completes immediately…

Any idea how to solve it?

Thanks,
Tal
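
For reference, a rough sketch of the standard RHEL 5 Cluster Suite commands one might run on each node to see where fencing and the cluster stack are stuck (output will of course vary per setup):

cman_tool status        # quorum, votes and membership summary
cman_tool nodes         # which nodes cman currently sees
group_tool ls           # fence/dlm groups; a group stuck in a wait state points at fencing
clustat                 # rgmanager's view of the cluster and its services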
Asked by: questil
1 Solution
 
Kerem ERSOY (President) commented:
Hi,

It might be about fencing. If fencing cannot run, the node may not start up, and when there is no quorum (if you have one) it may not start the entire cluster at all.

Can you post your configuration file and the log output for the related items? Your configuration should be under /etc/cluster; the most important of the config files is cluster.conf.

Cheers,
K.
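
As a rough illustration of what is being asked for above, assuming the default RHEL 5 log location (/var/log/messages), something like the following would capture the relevant config and log output:

cat /etc/cluster/cluster.conf
tail -n 200 /var/log/messages                                   # cman, fenced and clurgmgrd log here by default
grep -iE 'cman|fence|rgmanager|clurgmgrd' /var/log/messages | tail -n 100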
 
questil (Author) commented:
Which log file(s) are you referring to?
Here is the /etc/cluster/cluster.conf:
cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster alias="rh5clu" config_version="45" name="rh5clu">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="rh5clu1" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="VC" port="rh5clu1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="rh5clu2" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="VC" port="rh5clu2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices>
                <fencedevice agent="fence_vmware" ipaddr="vc009" login="administrator" name="VC" passwd="********"/>
        </fencedevices>
        <rm>
                <failoverdomains/>
                <resources>
                        <fs device="LABEL=SHARED1" force_fsck="0" force_unmount="1" fsid="2375" fstype="ext3" mountpoint="/shared1" name="shared1" self_fence="0"/>
                        <ip address="10.10.1.10" monitor_link="1"/>
                        <orainstance home="/oravl01/oracle/product/11.2.0.1" name="O1121CLU" user="ora"/>
                        <oralistener home="/oravl01/oracle/product/11.2.0.1" name="LISTENER" user="ora"/>
                </resources>
                <service autostart="1" exclusive="1" max_restarts="3" name="cluster" recovery="restart" restart_expire_time="0">
                        <ip ref="10.10.1.11"/>
                        <fs ref="shared1"/>
                </service>
        </rm>
</cluster>
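
Since the configuration above fences through the fence_vmware agent against the host "vc009", and the reported hang is at "starting fencing", one possible check (disruptive, since it reboots the target VM) would be to fence the peer manually, e.g. from rh5clu1:

fence_node rh5clu2      # runs the configured fence agent (fence_vmware via "VC") against the peer

If that hangs or fails, the node cannot reach vc009 with the configured credentials, which would match the long "starting fencing" delay at boot. This is only a sketch of a possible check, not a confirmed cause.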
 
questil (Author) commented:
I ran "clusvcadm -e cluster", and in the output of /var/log/messages I found the problem: a filesystem problem on the shared device.
After running fsck, the problem was solved.
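
For completeness, a rough sketch of what checking the shared ext3 filesystem could look like in this setup (run on one node only, with the service disabled and the filesystem unmounted; the device name below is a placeholder to be resolved from the label):

clusvcadm -d cluster               # disable the clustered service first
findfs LABEL=SHARED1               # resolve which device carries LABEL=SHARED1
fsck.ext3 -f /dev/sdX1             # placeholder device; use the one reported by findfs
clusvcadm -e cluster               # re-enable the service afterwards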
 
questil (Author) commented:
That's the solution.
