Failover using rgmanager in RHCS5

Hey all,

I have a cluster up and running with GFS, DRBD, and GNBD, with DRBD running primary/primary. I am trying to set up a failover IP for the two DRBD servers so that if one goes down, the exported GNBD devices are still reachable. I set up the failover IP through RHCS5 using rgmanager, but for some reason, when I pull the plug on server1, the IP address does not automatically move over to server2. I'm thinking maybe I didn't set up the resources correctly. Any help is appreciated.
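For reference, the rgmanager piece of /etc/cluster/cluster.conf for a plain floating IP generally looks something like the fragment below; the node names and the address here are placeholders, not my actual values:

<rm>
        <failoverdomains>
                <failoverdomain name="ip-domain" ordered="1" restricted="0">
                        <failoverdomainnode name="nodeA" priority="1"/>
                        <failoverdomainnode name="nodeB" priority="2"/>
                </failoverdomain>
        </failoverdomains>
        <resources>
                <ip address="192.168.0.100" monitor_link="1"/>
        </resources>
        <service autostart="1" domain="ip-domain" name="floating-ip" recovery="relocate">
                <ip ref="192.168.0.100"/>
        </service>
</rm>

The <ip> resource is defined once under <resources> and then referenced by its address from the <service> block; recovery="relocate" tells rgmanager to move the whole service to another node in the failover domain when it fails.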
wilsjAsked:
ezatonCommented:
If you are sharing the filesystem using GFS on DRBD, there is no need to share the disk resource; you don't need to play active/passive with it. I will look into your configuration tomorrow and post a reply.
 
ezatonCommented:
Can you post your /etc/cluster/cluster.conf file here?
 
wilsjAuthor Commented:
Sure, no problem, and thanks for the reply.

Dolphins is my storage server (DRBD, GFS, GNBD).
Patriots replicates the data on Dolphins with DRBD.
Lions is my Xen server and imports the storage from Dolphins/Patriots.

The idea I had was: if Dolphins fails, only the IP needs to switch to Patriots; the same GNBD devices will already be exported there, so the Xen instances won't lose their connection.



<?xml version="1.0"?>
<cluster config_version="9" name="nas-cluster">
        <fence_daemon post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="dolphins" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="nas-fencing" nodename="dolphins"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="lions" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="nas-gnbd-fencing" nodename="lions"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="patriots" nodeid="3" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="nas-fencing" nodename="patriots"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman/>
        <fencedevices>
                <fencedevice agent="fence_gnbd" name="nas-gnbd-fencing" servers="dolphins-storage-failover"/>
                <fencedevice agent="fence_manual" name="nas-fencing"/>
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="dolphins-drbd1" ordered="1" restricted="0">
                                <failoverdomainnode name="dolphins" priority="1"/>
                                <failoverdomainnode name="patriots" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <ip address="192.168.5.4" monitor_link="1"/>
                </resources>
                <service autostart="1" domain="dolphins-drbd1" name="dolphins-svc-drbd1" recovery="relocate">
                        <ip ref="192.168.5.4"/>
                </service>
        </rm>
        <fence_xvmd/>
</cluster>
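One general rgmanager behaviour worth noting here (an observation about this style of configuration, not something confirmed in the thread): rgmanager will not relocate a service away from a dead node until that node has been successfully fenced, and fence_manual only completes once an administrator acknowledges the fence by hand with fence_ack_manual. So with manual fencing, pulling the plug on dolphins can leave the service stuck on the failed node until someone intervenes. If the machines happen to have out-of-band power management such as IPMI (purely an assumption), the manual device could be replaced by a power fence agent, roughly like this, with hypothetical addresses and credentials:

<fencedevices>
        <fencedevice agent="fence_ipmilan" name="dolphins-ipmi" ipaddr="192.168.5.201" login="admin" passwd="secret"/>
        <fencedevice agent="fence_ipmilan" name="patriots-ipmi" ipaddr="192.168.5.203" login="admin" passwd="secret"/>
</fencedevices>

<clusternode name="dolphins" nodeid="1" votes="1">
        <fence>
                <method name="1">
                        <device name="dolphins-ipmi"/>
                </method>
        </fence>
</clusternode>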
 
wilsjAuthor Commented:
Am I supposed to create a resource for the partition that is being shared? If so, that's confusing to me, because I already have the partition running on the other server, Patriots. All I want is for the IP to switch servers. Is this possible with Red Hat Cluster Suite 5? Thanks again.
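For what it's worth, a GFS filesystem resource in rgmanager, if one were used, would be a clusterfs resource defined next to the IP and referenced from the service, something like the fragment below (the device and mountpoint are hypothetical examples, not my real paths):

<resources>
        <ip address="192.168.5.4" monitor_link="1"/>
        <clusterfs name="gfs-data" device="/dev/drbd0" fstype="gfs" mountpoint="/mnt/gfs" force_unmount="0"/>
</resources>
<service autostart="1" domain="dolphins-drbd1" name="dolphins-svc-drbd1" recovery="relocate">
        <ip ref="192.168.5.4"/>
        <clusterfs ref="gfs-data"/>
</service>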
 
wilsjAuthor Commented:
I finally figured it out. Thanks for the replies.