Auto-start Red Hat cluster services (cman, clvmd, qdiskd, ...) on server reboot; can't detect SAN partition
Posted on 2009-04-25
My colleague is having difficulty with this: he said he could only start the various
Red Hat (RHEL 5.1) cluster services manually after the Red Hat Linux OS is
rebooted/booted up:
a) cd /etc/init.d
b) ./cman start
c) ./clvmd ...
d) ./qdiskd ...
e) ./rgmanager ...
I'm not sure whether the order of d and e should be swapped.
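For reference, a sketch of the same manual sequence using the service wrapper. The ordering follows the RHEL 5 Cluster Suite init-script priorities (cman first, rgmanager last); qdiskd's exact position relative to clvmd is an assumption on our part:

```shell
#!/bin/sh
# Run as root on each node, in this order:
service cman start       # cluster membership and fencing must come up first
service qdiskd start     # quorum disk daemon (if a quorum disk is configured)
service clvmd start      # clustered LVM needs cman running
service rgmanager start  # resource/service manager comes last
```

Stopping the cluster cleanly is the same list in reverse.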
Anyway, we would like all of the above to be started automatically when the
pair of Linux servers (call them lnx3 and lnx4) boot up.
Would the following actually help, or is there something we're missing?
Enable cluster software to start upon reboot. At each node run /sbin/chkconfig as follows:
# chkconfig --level 2345 rgmanager on
# chkconfig --level 2345 gfs on
# chkconfig --level 2345 clvmd on
# chkconfig --level 2345 cman on
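One thing worth noting: the quoted commands omit qdiskd, so if a quorum disk is in use it would need to be enabled as well. You can also inspect what is already registered before changing anything. A minimal sketch using the standard RHEL 5 chkconfig tool:

```shell
# See which runlevels each service currently starts in:
chkconfig --list cman
chkconfig --list clvmd
chkconfig --list qdiskd
chkconfig --list rgmanager

# Enable the quorum disk daemon too (omitted from the quoted snippet):
chkconfig --level 2345 qdiskd on
```

If the SAN partition turns out not to be GFS, the gfs service can stay off.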
We're not using GFS, as the NetApp SAN partition is probably mounted as a
Unix file system (UFS).
Also, occasionally, when both servers lose their network connections at the
same time, the SAN partition is lost, and despite rebooting the
servers and running the commands given in a-e above on the primary
node alone (or on both nodes), we are not able to mount the SAN partition.
Sometimes, after waiting about ten minutes, it's able to mount. So
what did we miss? Is this just a matter of waiting, or do we need to
pause between the a-e services before proceeding to the next one?
During bootup, the console shows that the lpfc... driver (this must be the
Fibre Channel driver that connects to the SAN) fails to load, and about
a minute after that, one of the services (I think it's clvmd) fails.
However, after bootup and login, issuing "clvmd" after waiting a while
sometimes works, but sometimes fails. Do I just have to wait longer,
and how can I tweak the delays?
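Rather than guessing a fixed delay, one way to tolerate a slow FC driver is to retry the service start until it succeeds. Below is a hypothetical helper; the function name retry_start and the idea of calling it from /etc/rc.d/rc.local are our own sketch, not part of Cluster Suite:

```shell
#!/bin/sh
# Hypothetical retry helper: re-run a command until it succeeds or
# attempts run out. From rc.local one might call, for example:
#   retry_start "service clvmd start" 5 30
retry_start() {
    _cmd=$1        # command to run (word-split by the shell)
    _attempts=$2   # how many times to try
    _delay=$3      # seconds to sleep between tries
    _i=1
    while [ "$_i" -le "$_attempts" ]; do
        if $_cmd; then
            echo "succeeded on attempt $_i"
            return 0
        fi
        sleep "$_delay"
        _i=$((_i + 1))
    done
    echo "gave up after $_attempts attempts"
    return 1
}
```

The 5-attempt / 30-second numbers are placeholders to tune against how long the lpfc driver actually takes on your hardware.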
How do I know whether our current SAN partition is GFS or not? (The
person who set up the cluster has since left, and I feel the setup is
still missing something.)
How can I convert it to GFS if it's not on GFS yet?
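To answer the "is it GFS?" question, the partition can be inspected with standard tools; the device and mount point below (/dev/sdb1, /u01) are placeholders for your own. And note that there is no in-place conversion to GFS: gfs_mkfs reformats the device and destroys its contents, so a full backup and restore is required.

```shell
# What type is the file system?
mount | grep -w /u01        # e.g. "... type ext3 (rw)" or "... type gfs ..."
blkid /dev/sdb1             # prints TYPE="ext3", TYPE="gfs", etc.
grep -w /u01 /etc/fstab     # what the boot-time mount entry says

# To (re)create the partition as GFS -- DESTRUCTIVE, back up first.
# The cluster name "mycluster", lock table name, and journal count
# are assumptions to replace with your own values:
# gfs_mkfs -p lock_dlm -t mycluster:gfslv -j 2 /dev/sdb1
```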
We're not using Oracle RAC; how can I set up Oracle instances as
part of the cluster failover service?
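Without RAC, a single Oracle instance is typically run as an rgmanager failover service: the service groups the shared file system, a floating IP, and an init-style script that starts and stops the instance, and rgmanager moves the whole group to the surviving node. The fragment below is only a sketch of what the cluster.conf service section could look like; the IP address, device, mount point, and script path are all assumptions:

```xml
<!-- Sketch of a cluster.conf <rm> section; names and paths are examples -->
<rm>
  <failoverdomains>
    <failoverdomain name="oradom" ordered="1">
      <failoverdomainnode name="lnx3" priority="1"/>
      <failoverdomainnode name="lnx4" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="oracle" domain="oradom" autostart="1">
    <ip address="192.168.1.100" monitor_link="1"/>
    <fs name="orafs" device="/dev/sdb1" mountpoint="/u01" fstype="ext3"/>
    <script name="orascript" file="/etc/init.d/oracle"/>
  </service>
</rm>
```

The /etc/init.d/oracle script has to accept start/stop/status arguments so rgmanager can monitor the instance.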
We have a pair of crossover cables (used for fencing/heartbeat) connecting NIC
ports of the lnx3 and lnx4 servers. At all times, only one of the fencing NIC
ports' LEDs on lnx3 is lit, and likewise on lnx4 only one of the NIC ports'
LEDs is lit. Is this normal?
I'm rather green at this, so I need step-by-step instructions.