HP-UX 11i v3 HA Cluster Configuration Setup for Serviceguard

I'm looking for tips and how-tos on setting up an HA cluster configuration. I have two BL870c i2 blades in a c7000 enclosure, both currently connected to the same EMC SAN.
I also have Serviceguard installed.
lsbrown1 asked:

tfewster commented:
Setting up a Serviceguard cluster is well documented in the "Managing MC/ServiceGuard" manual and in the "Software Recovery Handbook" (an engineer's guide), both available from the HP websites with a bit of searching (or just Google "Serviceguard cluster" ;-).

Set up as much redundancy as possible: multiple Management Modules, PSUs, network interfaces, switches and SAN interfaces. Each server should have two separate paths to the EMC SAN, one via each Management Module. Ideally a cluster should use separate hardware, but a blade chassis has built-in redundancy, so it's not unreasonable to have both servers in one box unless you need to survive a major disaster taking out the whole enclosure.
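For example, on HP-UX 11i v3 you can confirm the redundant SAN paths with native multipathing's ioscan (the device name below is just an example):

    # Each SAN LUN should report two or more lunpaths, one per fabric
    ioscan -m lun

    # Show a persistent DSF and its legacy device file mappings
    ioscan -m dsf /dev/rdisk/disk10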

Use all available network paths between the two servers as Heartbeats, dedicated or not.
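As a sketch (node names are placeholders), let cmquerycl discover everything usable when you generate the cluster template; any subnet common to both nodes will be offered as a heartbeat:

    # Probe both nodes and write a cluster configuration template
    cmquerycl -v -C /etc/cmcluster/cluster.ascii -n node1 -n node2

In the generated ASCII file, each network you keep as a heartbeat appears as a HEARTBEAT_IP entry under its NETWORK_INTERFACE; leave them all in rather than trimming down to one.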

For a 2-node cluster, you need a Cluster Lock mechanism. As you're running both "servers" in a blade chassis with networks managed via the "Hypervisor", I'd suggest using a Cluster Lock Disk rather than a separate Quorum Server, as a network problem could otherwise take down the cluster unnecessarily. The Lock Disk can be any shared LVM storage, even one used for data; the lock resolution mechanism doesn't overwrite the data section.
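A minimal sketch of the relevant ASCII file entries, assuming a shared volume group named /dev/vglock and example disk paths and IPs (yours will differ):

    # Excerpt from /etc/cmcluster/cluster.ascii
    FIRST_CLUSTER_LOCK_VG    /dev/vglock

    NODE_NAME                node1
      NETWORK_INTERFACE      lan0
        HEARTBEAT_IP         192.168.1.1
      FIRST_CLUSTER_LOCK_PV  /dev/disk/disk10

    NODE_NAME                node2
      NETWORK_INTERFACE      lan0
        HEARTBEAT_IP         192.168.1.2
      FIRST_CLUSTER_LOCK_PV  /dev/disk/disk10

Then validate and distribute the configuration:

    cmcheckconf -C /etc/cmcluster/cluster.ascii
    cmapplyconf -C /etc/cmcluster/cluster.ascii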

Make time for testing. Normally that would mean killing cmcld and pulling network and power cables to check that the resilience and failover mechanisms work as expected. Within a c7000, also check that someone (else) changing the configuration can't kill both "servers" at once.
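Some example commands for those tests (node name is a placeholder):

    # Check cluster, node and package status before and after each test
    cmviewcl -v

    # Graceful test: halt one node and confirm its packages fail over
    cmhaltnode -f node1
    cmrunnode node1      # bring it back afterwards

    # Harsh test: kill cmcld on one node; the Serviceguard safety timer
    # should TOC that node and the other node should adopt its packages
    ps -ef | grep cmcld  # find the PID, then kill -9 it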