Cluster Services will not start

Specifically, we are trying to set up a two-node cluster to provide a highly available Apache server. After reviewing the documentation, it appears that shared storage may not be necessary, though we would eventually like to put the document root on shared storage.


We have followed the steps laid out in the howto:

http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Administration/ap-httpd-service-CA.html


However, when we try to start the service in Luci, the httpd service fails to start. We see the following errors in /var/log/messages:


Jun 29 16:22:36 habox1 clurgmgrd: [10855]: <err> Stopping Service apache:Apache_Test_Srvr > Failed
Jun 29 16:22:36 habox1 clurgmgrd[10855]: <notice> stop on apache "Apache_Test_Srvr" returned 1 (generic error)
Jun 29 16:22:36 habox1 clurgmgrd[10855]: <crit> #13: Service service:Web_Server failed to stop cleanly
Jun 29 16:26:29 habox1 clurgmgrd[10855]: <notice> Starting disabled service service:Web_Server
Jun 29 16:26:29 habox1 clurgmgrd: [10855]: <err> Looking For IP Addresses [apache:Apache_Test_Srvr] > Failed - No IP Addresses Found
Jun 29 16:26:29 habox1 clurgmgrd[10855]: <notice> start on apache "Apache_Test_Srvr" returned 1 (generic error)
Jun 29 16:26:29 habox1 clurgmgrd[10855]: <warning> #68: Failed to start service:Web_Server; return value: 1
Jun 29 16:26:29 habox1 clurgmgrd[10855]: <notice> Stopping service service:Web_Server
Jun 29 16:26:35 habox1 clurgmgrd: [10855]: <err> Checking Existence Of File /var/run/cluster/apache/apache:Apache_Test_Srvr.pid [apache:Apache_Test_Srvr] > Failed - File Doesn't Exist
Jun 29 16:26:35 habox1 clurgmgrd: [10855]: <err> Stopping Service apache:Apache_Test_Srvr > Failed
Jun 29 16:26:35 habox1 clurgmgrd[10855]: <notice> stop on apache "Apache_Test_Srvr" returned 1 (generic error)
Jun 29 16:26:35 habox1 clurgmgrd[10855]: <crit> #12: RG service:Web_Server failed to stop; intervention required
Jun 29 16:26:35 habox1 clurgmgrd[10855]: <notice> Service service:Web_Server is failed
Jun 29 16:26:35 habox1 clurgmgrd[10855]: <crit> #13: Service service:Web_Server failed to stop cleanly


Can you advise us on what the problem might be? Let us know if you need more information.

Here is my cluster.conf file, created in the web GUI (Luci):
 
<?xml version="1.0"?>
<cluster alias="app_server" config_version="16" name="app_server">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="habox2.nimh.nih.gov" nodeid="1" votes="1">
                        <fence>
                                <method name="1"/>
                        </fence>
                </clusternode>
                <clusternode name="habox1.nimh.nih.gov" nodeid="2" votes="1">
                        <fence>
                                <method name="1"/>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
        <fencedevices/>
        <rm>
                <failoverdomains/>
                <resources>
                        <apache config_file="conf/httpd.conf" name="Apache_Test_Srvr" server_root="/etc/httpd" shutdown_wait="0"/>
                        <ip address="172.16.52.151" monitor_link="1"/>
                        <script file="/etc/rc.d/init.d/httpd" name="script_test"/>
                        <fs device="/dev/hda2" force_fsck="0" force_unmount="0" fsid="36806" fstype="ext3" mountpoint="/HA" name="httpd_content" self_fence="1"/>
                </resources>
                <service autostart="1" exclusive="0" max_restarts="0" name="Web_Server" recovery="restart" restart_expire_time="0">
                        <apache ref="Apache_Test_Srvr">
                                <ip ref="172.16.52.151"/>
                                <script ref="script_test"/>
                                <fs ref="httpd_content"/>
                        </apache>
                </service>
        </rm>
</cluster>
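
For reference, once the service hits the failed state shown in the log, it apparently has to be disabled before another start attempt. A rough sketch of doing that from the command line instead of Luci (the service name is the one from the cluster.conf above; the rg_test invocation is based on its documented usage and may need adjusting):

# Show cluster membership and the current state of the service.
clustat

# A service in the "failed" state must be disabled before it can be
# enabled again.
clusvcadm -d service:Web_Server
clusvcadm -e service:Web_Server

# rgmanager's test tool can start the resource tree outside the cluster,
# which usually gives a more detailed error than /var/log/messages.
rg_test test /etc/cluster/cluster.conf start service Web_Server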

Justin_Edmands asked:

Justin_Edmands (Author) commented:
Need some help!
JabbaDow commented:
I think it might be easier to use Heartbeat with DRBD: www.linux-ha.org and www.drbd.org. They work very nicely together. Basically, you set up some information in their config files about each node, which IP address they will share, and so on. At the end of it all, you have the heartbeat process start up, which then brings up the DRBD shared storage and places symlinks on the system pointing to it. Heartbeat then handles the starting and stopping of Apache. It is pretty easy to do, both projects are very well documented, and there are many how-to articles on the web for doing exactly what you want.
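
As a rough sketch of the Heartbeat v1 side of that (the node names and floating IP below are just the ones from your cluster.conf; the DRBD resource name "r0" and the device /dev/drbd0 are assumptions):

# /etc/ha.d/ha.cf -- cluster membership and heartbeat transport
cat > /etc/ha.d/ha.cf <<'EOF'
logfacility local0
keepalive 2
deadtime 30
bcast eth0
auto_failback on
node habox1.nimh.nih.gov
node habox2.nimh.nih.gov
EOF

# /etc/ha.d/haresources -- one line: preferred node, floating IP, DRBD
# resource, its filesystem, and finally the httpd init script
cat > /etc/ha.d/haresources <<'EOF'
habox1.nimh.nih.gov IPaddr::172.16.52.151 drbddisk::r0 Filesystem::/dev/drbd0::/HA::ext3 httpd
EOF

# /etc/ha.d/authkeys (mode 600) is also needed; copy all three files to
# both nodes, then:
service heartbeat start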
arnold commented:
IMHO, it is better to load-balance a web server than to set it up in a failover cluster. You could use rsync to synchronize the document root data (a sketch follows below).

You could set up a cluster resource for a specific IP address; that takes care of keeping the IP "available all the time".

One error I see is that you are not assigning an IP that will move with the web server. You have to set up an IP that will move between/among the nodes.
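
As for the rsync side, a cron job on the primary node along these lines would keep the document roots in step (the paths here are assumptions; adjust them to the real document root):

# Push the document root to the standby node; -a preserves permissions
# and timestamps, --delete removes files gone from the source.
rsync -az --delete /var/www/html/ habox2.nimh.nih.gov:/var/www/html/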

Justin_Edmands (Author) commented:
Already got DRBD to work and all; we need to do it with Red Hat Cluster Suite.
JabbaDow commented:
I have no experience with Red Hat Cluster Suite, but I guess the first thing would be to make sure that you have a virtual IP address (i.e. an address bound to a virtual interface like eth0:0) and that the address is working on the active node of the cluster. When you fail over to the other node, that address needs to follow the active node. Then set your Apache to listen on that address: instead of Listen *:80, you need a "Listen x.x.x.x:80" Apache directive. Make sure that the networking comes up before Apache does.
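
For instance, something along these lines on the node that currently owns the service (using the address from your posted cluster.conf):

# Confirm the floating address is actually configured before blaming Apache.
ip addr show | grep 172.16.52.151

# Bind Apache to the floating address only, instead of Listen *:80.
sed -i 's/^Listen .*/Listen 172.16.52.151:80/' /etc/httpd/conf/httpd.conf

# httpd cannot bind to an address that is not up yet, so the IP resource
# has to come up before Apache is started.
service httpd configtest && service httpd restart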