
Project on Oracle Parallel Server on Sun Cluster

Posted on 2001-06-13
Last Modified: 2013-11-15
Hi,

     Can I get a document on a project done on Oracle Parallel Server on Sun Cluster? Please help. Thanks in advance.

With kind regards
Question by:gopdiv

Accepted Solution

ORACLEtune earned 100 total points
ID: 6189599
Doc ID:  Note:119225.1
Subject:  Oracle8 Parallel Server for SunCluster
 Content Type:  TEXT/PLAIN
Creation Date:  07-SEP-2000
Last Revision Date:  25-OCT-2000

This article is being delivered in Draft form and may contain
errors.  Please use the MetaLink "Feedback" button to advise
Oracle of any issues related to this article.

This fact sheet documents how Oracle Parallel Server runs on Sun Cluster.

This document is intended for people who administer Oracle Parallel Server.
Oracle Parallel Server For Sun
The Oracle8i database - together with Oracle Parallel Server - yields dramatically
higher scalability and availability for high volume,
mission-critical online transaction processing (OLTP), data warehouse, and Internet
applications. These benefits are achieved by
exploiting the power and redundancy of clustered computer systems.

Oracle8i Parallel Server has been enhanced with the introduction of Cache Fusion technology.  Cache Fusion uses modern cluster
interconnects to reduce disk I/O and can exploit emerging very high bandwidth, low latency interconnects to provide increased
performance.

Oracle Parallel Server architecture and management infrastructure combine the benefits
of cluster scalability and availability with
single system management capabilities. Oracle Parallel Server fully exploits clustered systems for database applications by
delivering the following benefits:

     High Availability - Cluster technology insulates application and database users
from hardware and software failures. Oracle
     Parallel Server architecture enables all the nodes of the system to access all
the data and provides inherent fault resilience.

     Cluster Scalability - Clustering enables more users and greater transaction
throughput, especially for enterprise applications
     and Internet commerce applications. Oracle Parallel Server achieves greater scalability by fully exploiting the expanded
     CPU, memory and disk resources of the clustered system to drive more transactions.

     Single System Manageability - Parallel Server's new Single System View technology increases manageability by reducing
     the complexity of managing a server cluster. With Single System View, an entire cluster of servers appears as one system
     to the administrator.

Oracle Parallel Server provides all of these benefits together with flexible implementation.
 The parallel database architecture
enables users to increase system capacity in step with application growth and does not require re-partitioning data when
processor nodes are added.

Oracle Parallel Server sets a new standard for scaling applications with cluster load balancing and Cache Fusion clustering
architecture, becoming the most effective and necessary solution for increasing application throughput and system availability.
Combined with Single System View cluster management capabilities that deliver "perform once and replicate everywhere"
management features, Oracle8i with Oracle Parallel Server provides the highest scalability and availability for today's mission
critical applications.
Introduction to Sun Cluster 2.2
Sun Cluster

Today, predictable availability of information and services is a critical business
necessity. There's little or no tolerance for downtime. Planned or
unplanned, downtime means lower productivity and lost revenues. Sun Cluster 2.2 software
is designed specifically to solve this problem.

Sun Cluster delivers high availability -- through automatic fault detection and recovery
-- and scalability, ensuring that your mission-critical applications and services
are available when you need them. Leveraging and extending the reliability, scalability,
and performance of the industry-leading Solaris Operating Environment,
Sun Cluster provides mainframe-class reliability, availability, and scalability for
e-commerce, ERP, data warehousing, and other mission-critical applications and
services.
A cluster is a group of nodes that are interconnected to work as a single, highly
available and scalable system.  A node is a single instance of Solaris software -- it
may be a standalone server or a domain within a standalone server. Sun Cluster scales
up to 256 processors in a cluster -- enough to handle growing numbers
of simultaneous users and access to large databases. With Sun Cluster, you can add
or remove nodes while online, and mix and match servers to meet your specific needs.

Sun Cluster 2.2 is the next phase of Sun's Full Moon clustering roadmap. It extends
the functionality of Sun Cluster 2.1 and Solstice HA 1.3, incorporating high
availability and parallel database functionality in a single offering. Plus, Sun
Cluster provides support for a variety of storage/file systems and up to four nodes.

High Availability

With Sun Cluster software installed, other nodes in the cluster will automatically
take over and pick up the workload when a node goes down. It delivers
predictability and fast, agile recovery capabilities through features such as local
application restart, individual application failover, and local network adaptor failover.
Sun Cluster significantly reduces downtime and increases productivity by helping ensure continuous service to all your users.


With Sun Cluster software, you can cluster up to four nodes to meet the performance
and manageability needs of your workgroup, department, or data center. By
allowing an application to scale across multiple servers in this manner, Sun Cluster
delivers increased performance and throughput.

Ease of Management

Sun Cluster software includes the Sun Cluster Manager, a management utility based
on Java technology. This enables you to manage the entire cluster as a single
system, using an easy-to-use graphical user interface from any desktop that supports a Java virtual machine.

Investment Protection

Sun Cluster provides the scalability to grow with your business, protecting your
investments in Sun servers and storage systems. Your network can start with a
standalone server, and as the need for higher availability or scalability develops,
you can cluster domains within that server. As your business grows, you can easily
add new servers to the cluster for increased service capabilities.

Disaster Recovery (Campus Clusters)

Sun Cluster enables nodes to be separated by up to 10 kilometers. This way, in the
event of a disaster in one location, all the mission-critical data and services that
your business depends on will remain available from the other unaffected locations.

Flexibility in Integration
Sun Cluster lets you run both standard and parallel applications on the same cluster.
It supports the dynamic addition or removal of nodes, and enables Sun servers
and storage products to be clustered together in a variety of configurations. Existing
resources are used more efficiently, resulting in additional cost savings. Sun
Cluster allows Sun Enterprise 10000 domains to be clustered together or with other Sun servers.

Cluster technology is a proven architecture for high-availability and scalability
in business-critical computing environments. This
has been demonstrated in high-end applications and systems. But increasingly these
characteristics are no longer just for the high
end. Internet applications require 24x7 availability, even for smaller businesses.
Smaller systems can exploit the emerging cluster
technologies based on high performance, low cost platforms from multiple system vendors.

A cluster is a group of independent nodes working together as a single system. The primary cluster components are processor
nodes, a cluster interconnect and shared disk subsystem. Clusters share disk, not memory.

A cluster can be made up of multiple single-processor or SMP nodes and each node has its own dedicated system memory. Each
node also has its own operating system, database and application software. While
a variety of cluster disk architectures exist, all
major cluster hardware platforms provide a unified logical view of the cluster -
all nodes have a consistent view of all disks. Most
cluster hardware vendors provide this through direct physical connectivity of nodes to disks. Other
cluster hardware vendors provide a scheme in which each node owns a subset of the disks and disk sharing is performed via an
efficient software abstraction layer.

Clusters provide improved fault resilience and modular incremental system growth over symmetric multi-processor systems.
Clustering delivers high availability to users in the event of system failures.
High availability is achieved by providing redundant
hardware components such as redundant nodes, interconnects and disks. These redundant cluster hardware architectures avoid
single points of failure and provide a high degree of fault resilience.

For example, the online inventory database system is the heart of an electronic commerce
retail operation. Users require ongoing
access to the web site's database of products, codes, names, and prices to place
orders. If the online inventory database system
fails, sales cannot be logged, service suffers and the operation loses business. Clusters provide the hardware foundation for
database services and transaction loads to be failed over to surviving nodes of the cluster in the event of a
failure on a given node.

Cluster systems provide modular incremental growth capabilities that allow for growing the hardware processing power in lock
step with the application. As application users and transaction throughput increases, clusters can be grown in building block
fashion to provide greater CPU, memory and disk resources to scale the application users and workload.

A cluster provides a single virtual application space in which all application transactions
can be performed - transactions can span
CPU and memory resources of multiple cluster nodes. In this manner, clusters provide the hardware foundation for application
load balancing across cluster nodes.  Since clusters provide a unified view of all
disks on the system, systems administration and
application management operations can be centralized. Applications that are cluster-aware, or that can discover that the
underlying hardware platform is a cluster, can exploit this single system
view for all management operations. This enables
management operations to be performed once and replicated to all nodes of the cluster.

Clusters play a key role in providing the back end requirements for data warehousing, mission critical on-line transaction
processing, intranets and Internet applications. With the proliferation of high volume
Internet self service applications, clusters play
a key role in managing growth and reducing downtime. Internet applications can also
operate on a cluster as if the cluster were a
single high performance, highly reliable server. Many hardware vendors are racing
to establish high performance cluster standards
and provide superior cluster implementations to embrace these standards.

Cluster hardware technologies are currently undergoing a product revolution. While SMPs continue to grow in CPU and memory
capacity, the cluster interconnect is a key technology for providing cluster scalability. Modern cluster interconnects provide
extremely high bandwidth with low latencies for message traffic.  These hardware interconnects provide for fast communications
between cluster nodes, efficient message passing and application load balancing. High performance interconnects come in a
variety of architectures, including switch fabrics, fiber channel, memory channel, and mesh.

Commodity interconnect standards, such as Virtual Interface Architecture (VIA), are rapidly emerging among Intel based
platform and interconnect vendors. Standards such as VIA allow vendors to build clusters from standard high volume (SHV)
commodity components with superior price/performance.

More application users and transactions translate to greater disk requirements. Emerging Storage Area Networks (SAN) provide
sophisticated schemes for disk connectivity. These robust storage networks circumvent
the limitations of directly attached disks,
allowing a cluster node to be connected to a large number of disk devices. These SAN architectures lay the foundation for
mainframe-like capabilities for providing dedicated CPUs for I/O. Hardware vendors provide modern disk sharing technologies in
a variety of forms, such as Fiber Channel, Switch or Hub topology, and Shared SCSI.


Oracle Parallel Server adds parallel technology to Oracle8i to enable multiple Oracle
instances to execute on nodes of a cluster
and concurrently access a single database. Since these parallel capabilities are designed and built into Oracle8i, identical
functionality is provided in both clustered and non-clustered environments.

There are two major components that manage the parallel environment - Parallel Cache Management and Cluster Group
Services. Typically, a single Oracle instance is executing on each of the nodes comprising the cluster. An Oracle instance is
composed of processes and shared memory.  Within the shared memory is the buffer cache for an instance. The buffer cache
contains disk blocks and improves performance by eliminating disk I/O. Since memory cannot be shared across nodes in a
cluster, each instance contains its own buffer cache. The parallel cache manager uses the DLM to coordinate access to data
resources required by the instances.

In addition to the parallel cache management of data blocks, several other resources require coordination by Oracle Parallel
Server across instances, including dictionary, rollback segments and logs. The DLM is also used to control these resources.

The other key component is Cluster Group Services (CGS). CGS interacts with the Cluster Manager (CM) to track cluster node
status and keeps the database aware of which nodes form the active cluster.  The CM is a vendor-supplied component specific to
the hardware and OS configuration.

Parallel Cache Management is the technology that provides concurrent multi-instance access to the database. Synchronization of
access is required to maintain the integrity of the database when one instance needs to access or update data that has been
modified by another instance and is in the buffer cache of that instance.   Cluster
nodes accessing the data in changed blocks are
not able to read a current form of that data from disk. Parallel Cache Management allows cluster nodes to read changed data
residing in the buffer cache of a remote node and employs the DLM for necessary communications and synchronization.

Three cases must be considered when different Oracle instances access the same data:

     Read/Read - User on node 1 wants to read a block that user on node 2 has recently read.
     Read/Write - User on node 1 wants to read a block that user on node 2 has recently updated.
     Write/Write - User on node 1 wants to update a block that user on node 2 has recently updated.

The read/read case does not require coordination between the two instances. The instance on node 1 can read the block directly
into the cache. However, in the two cases where the block is being updated, coordination between the instances becomes
necessary.
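The three access cases above can be sketched as a tiny decision function. This is a toy illustration, not Oracle's implementation; the function name and the read/write labels are made up for clarity:

```python
# Hypothetical sketch of the three multi-instance access cases.
def coordination_needed(op1: str, op2: str) -> bool:
    """Return True when the parallel cache manager must coordinate
    two instances touching the same block."""
    # Only read/read is safe without coordination; any write forces
    # the DLM to synchronize the two buffer caches.
    return "write" in (op1, op2)

assert not coordination_needed("read", "read")   # read/read: no coordination
assert coordination_needed("read", "write")      # read/write: coordinate
assert coordination_needed("write", "write")     # write/write: coordinate
```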

The parallel cache manager maximizes concurrent data access and minimizes I/O activity.  Performance is improved and overhead
reduced by not releasing the DLM locks until another instance on the cluster makes a request on the same block. In real world
applications, especially with disjoint sets of data, the probability of an instance
reusing a block that it just accessed is much higher
than that of another instance using the same block.

If node 1 needs access to updated data currently residing in node 2's buffer cache, node 1 will
submit a request to the DLM. The DLM facilitates node 1 access to data in the remote cache. Parallel cache management ensures
that a buffer cache in a node remains consistent with the shared buffer caches in
other nodes. If instance 1 running on node 1 is
updating block n, a copy of block n is in the local cache of node 1. Once instance
1 commits the transaction, it will still hold a
DLM lock on block n, and that block will remain in the local cache of node 1.  Typically,
applications exhibit locality of reference.
Therefore, this strategy minimizes disk I/O and DLM activity.
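The lazy lock-release strategy can be modeled in a few lines. In this toy model (class and counter names are invented, not Oracle internals), an instance holds the lock on a block until another instance actually requests it, so with locality of reference almost no lock traffic occurs:

```python
# Toy model of lazy DLM lock release: an instance keeps the lock on a
# block it touched until another instance actually asks for it.
class ToyDLM:
    def __init__(self):
        self.holder = {}      # block -> instance currently holding the lock
        self.transfers = 0    # count of lock hand-offs (proxy for overhead)

    def access(self, instance, block):
        owner = self.holder.get(block)
        if owner is not None and owner != instance:
            self.transfers += 1   # another instance held it: hand the lock over
        self.holder[block] = instance

dlm = ToyDLM()
for _ in range(100):          # locality of reference: node 1 reuses block 7
    dlm.access("node1", 7)
dlm.access("node2", 7)        # only now is a lock transfer required
assert dlm.transfers == 1     # 101 accesses, a single hand-off
```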

This robust parallel cache management scheme has been further optimized by a major change in Oracle8i called Cache Fusion.

Conventional scheme for parallel cache management uses a disk-based approach. The node with the data block writes the data
and related undo information to disk, then the requester node reads the data from disk. The system performs disk I/O and
requires CPU utilization for operating system context switching.

Cache Fusion in Oracle8i enables the buffer cache of one node to ship data blocks directly to the buffer cache of another node.
This eliminates the need for expensive disk I/O in parallel cache management. Oracle8i also leverages rapidly emerging
interconnect technologies for low latency, user-space-based, inter-processor communication. This drastically reduces CPU
utilization by reducing OS context switches for inter-node messages and provides low latency message capabilities across fast
cluster interconnects.

Oracle8i Cache Fusion directly ships data blocks from one node's cache to another node's cache in read/write contention
situations. This reduces overhead for non-partitioned applications demonstrating a mix of query and update. Oracle8i manages
write/write contention cache coherency via traditional disk-based parallel cache management mechanisms. The next major release
of Oracle8i will extend Cache Fusion to include the write/write case to reduce the disk I/O resource
utilization even further.


Because Oracle Parallel Server is an environment where multiple nodes concurrently access disk, the DLM component plays an
important role in protecting data integrity by coordinating concurrent data access when recent data updates are memory resident
and have not yet been flushed to disk. This is necessary because clusters do not share memory. The DLM comprises two
background processes called LMON and LMD. Starting with Oracle8, the DLM has been integrated into the database system for
performance reasons.

The Integrated DLM uses redundant, fault-tolerant, and scalable algorithms in its
implementation to ensure high scalability, high
availability, high throughput, and speedy internal recovery from node failures. This component allows multiple resources to be
shared by multiple nodes, and synchronizes modifications made to Oracle database files so no changes are lost.

Oracle Parallel Server uses Integrated DLM to synchronize disk block access between multiple nodes.  Suppose node 1 in a
cluster needs to modify block number n from the database file. At the same time, node 2 needs to update the same block n to
complete a transaction. Without Integrated DLM, nodes 1 and 2 could update the same block at the same time. In this case, only
the changes written to the disk by the second node would be saved and the changes by the first node would be lost. The
Integrated DLM ensures that only one instance has the right to update a block at any one time.  Saving all changes in this
consistent manner protects data integrity.
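The lost-update scenario above can be demonstrated with a minimal sketch, where a Python `threading.Lock` stands in for the Integrated DLM's exclusive block lock (purely illustrative, not Oracle's mechanism):

```python
import threading

# Toy illustration of the lost-update problem the Integrated DLM prevents:
# two "instances" update the same block; a lock serializes the updates.
block = {"n": 0}
lock = threading.Lock()       # stand-in for the DLM's exclusive block lock

def update():
    for _ in range(10_000):
        with lock:            # only one instance may modify the block at a time
            block["n"] += 1

t1 = threading.Thread(target=update)
t2 = threading.Thread(target=update)
t1.start(); t2.start()
t1.join(); t2.join()
assert block["n"] == 20_000   # every change is preserved; none are lost
```

Without the lock, the two read-modify-write sequences could interleave and one instance's changes would silently overwrite the other's.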

The Integrated Distributed Lock Manager now supports the following features:

     Distributed Architecture - To provide superior fault tolerance and enhanced runtime performance, Integrated DLM
     maintains a lock database in memory, to keep track of database resources and locks held on those resources in varying
     modes. The lock database is distributed among all the instances.

     Fault Tolerance - For highest availability, Integrated DLM provides uninterrupted
service and maintains the integrity of the
     lock database in the event of multiple node or instance failures. The database
is accessible almost continuously as long as
     there is at least one Oracle Parallel Server instance active on that database.
This also enables instances to be started and
     stopped at any time, in any order.

     Lock Mastering - This determines which node will manage all relevant information about a given resource and its locks.
     The master node maintains information about the locks on all nodes interested in that resource. Different nodes act in the
     capacity of master node for discrete sets of lock resources. In the event of
a node failure, only the subset of lock resources
     mastered by the failed node needs to be recovered.

     Distributed Deadlock Detection - Integrated DLM performs deadlock detection on lock requests, and in the case of true
     deadlocks, takes appropriate action to ensure the continued progress of database operations.

     Lamport SCN Generation - This feature facilitates Lamport System Commit Number (SCN) implementation by
     transmitting causality across nodes through DLM messages. SCNs can be viewed as a database timestamp for ensuring
     proper transaction sequence.
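The Lamport SCN idea rests on Lamport's logical clock rule: piggyback a timestamp on each message, and on receipt advance the local clock past it. A minimal sketch of that rule (illustrative only, not the DLM's actual protocol):

```python
# Minimal Lamport clock sketch: causality is carried across nodes in
# messages, as the DLM does for Lamport SCN generation.
class LamportClock:
    def __init__(self):
        self.t = 0

    def local_event(self):
        self.t += 1
        return self.t

    def send(self):
        self.t += 1
        return self.t                  # timestamp piggybacked on the message

    def receive(self, msg_t):
        self.t = max(self.t, msg_t) + 1   # jump past the sender's clock
        return self.t

node1, node2 = LamportClock(), LamportClock()
node1.local_event()                    # node1: t = 1
msg = node1.send()                     # node1: t = 2; message carries 2
node2.receive(msg)                     # node2: max(0, 2) + 1 = 3
assert node2.t > msg                   # the receive is ordered after the send
```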

     Group-Owned Locks - This feature provides dynamic ownership, since a single lock can be shared by two or more
     processes belonging to the same group.

     Persistent Resources - With Integrated DLM, resources maintain their state even if all processes or groups holding a lock
     on it have died abnormally.

Oracle Parallel Server takes full advantage of Oracle's row-level locking, the finest
granularity of locking, to reduce contention on
database blocks and ensure high throughput. Contrary to database systems that perform block-level locking or table-level
locking, Oracle users only lock the rows they are updating. This allows other users to update different rows in the same block
without having to wait.

Row-level locking only affects the row being updated, so other rows in the table
can be modified simultaneously. Block-level locking affects all the rows in that block - even if users update just one row -
requiring other transactions to wait.  Table-level locking ties up the whole table,
so no other user can modify any rows in the table
until the transaction is completed.

Database systems use a mechanism called concurrency control to isolate transactions. Transaction isolation makes any
modification to data invisible to other transactions until the modifications are
committed. Concurrency control locks the data being
modified to prevent access by other transactions, then releases these locks when the transaction commits.

Oracle Parallel Server uses internal locking mechanisms for concurrency control to provide maximum concurrent access to data
for all transactions. Other databases for loosely coupled systems do not make a distinction between cache management and
transaction isolation. They use an external lock manager for both. As a result, the lock manager can become overloaded, and a
much larger number of I/O operations may be required.

Oracle Parallel Server uses the highly efficient concurrency control technique to optimize performance. The Integrated DLM is
used only for parallel cache management. The highly efficient internal row-level
locking technique, described earlier, is used for
concurrency control. This reduces the load on the Integrated DLM to minimize CPU and inter-node network overhead.

For example, consider two instances of the Oracle Parallel Server running on two
cluster nodes in the cluster shown in Figure 7.

Suppose instance 1, running on node 1, is performing an update on row i. Instance
1 will hold one row lock that only locks row i,
and one DLM lock that locks block n containing row i.

Suppose instance 2, running on node 2, initiates an update to a row that is in block
n. Instance 2 submits a request for block n to
the DLM. Instance 1 will release the DLM lock on block n without having to wait for the first transaction to commit.

Note that instance 1 still holds the row lock on row i. Now instance 2 can obtain
the DLM lock on block n. If instance 2 needs to
update a different row than row i, the update will be done right away.  Only if instance 2 needs to update the same row i, will
instance 2 be forced to wait until instance 1 commits that transaction. This technique is extremely fast and efficient.
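The two-level scheme in this example can be sketched as follows. The variable names are invented for illustration; the point is that the DLM block lock is handed over on request while the row lock stays with the uncommitted transaction:

```python
# Sketch of the two-level locking in the example above: a DLM lock on the
# block moves between instances, while the row lock stays with the
# uncommitted transaction (names are illustrative, not Oracle internals).
block_lock_holder = "instance1"       # DLM lock on block n
row_locks = {"row_i": "instance1"}    # row i is being updated, uncommitted

def can_update(instance, row):
    global block_lock_holder
    block_lock_holder = instance      # DLM lock is handed over on request
    holder = row_locks.get(row)
    return holder is None or holder == instance

assert can_update("instance2", "row_j")       # different row: proceeds at once
assert not can_update("instance2", "row_i")   # same row: waits for the commit
```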

Oracle Parallel Server benefits from robust generic Oracle features, such as non-blocking query. In certain cases these generic
features provide specific benefits to clustered systems. Oracle uses a non-blocking query technique to prevent processes that
need to read from blocking processes that need to write. Some database systems block transactions for any kind of data access,
locking rows even when the user is only reading data, i.e., readers block writers.
Oracle obtains no locks for reading data.  Only
processes that need to update acquire locks. Since most applications read many more times than they update the data, read locks
can significantly impact performance.

For example, if a user runs a report that queries 1,000 rows in a table, a read-locking
database system will lock all 1,000 rows.
Running the same report using Oracle places no locks on any of the rows.  This means other users can keep reading and updating
these 1,000 rows. The non-blocking query technique increases performance and allows Oracle Parallel Server to use fewer locks
compared to read-locking database systems.

Oracle also provides data consistency by using a non-blocking query technique. With non-blocking queries, the user sees the
image of the database at the time the query started. Suppose a batch process is running
a long report which started at 2:00 p.m. It
is now 2:05 p.m. and during the past five minutes, other users have modified some of the data accessed by the report. The
database will NOT lock all the rows used by the report process to guarantee the data
is consistent. Other users may proceed with
their database transactions while the report process is running, but the report process will continue to see the image of the
database as of 2:00 p.m.

This is done using a data structure called a rollback segment. If a row is updated
since the report started, the value of that row as
of 2:00 p.m. will be created using the information stored in rollback segments. Oracle
creates a complete image of the data as of
2:00 p.m., so the data in the report is consistent.
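The rollback-segment idea can be sketched as a toy version store: updates save the pre-image first, and a query reads the version that was current at its starting SCN. All names here are illustrative, not Oracle data structures:

```python
# Toy consistent-read sketch: a query sees row values as of its start
# time, reconstructed from saved prior values (the rollback-segment idea).
committed = {"price": (100, 1)}    # key -> (value, commit SCN)
undo = []                          # (key, old_value, old_scn) records

def update(key, new_value, scn):
    old_value, old_scn = committed[key]
    undo.append((key, old_value, old_scn))   # save the pre-image first
    committed[key] = (new_value, scn)

def read_as_of(key, query_scn):
    value, scn = committed[key]
    if scn <= query_scn:
        return value
    for k, old_value, old_scn in reversed(undo):  # roll back to the snapshot
        if k == key and old_scn <= query_scn:
            return old_value
    raise LookupError("snapshot too old")

query_scn = 1                      # report starts at SCN 1 ("2:00 p.m.")
update("price", 120, scn=2)        # another user changes the row at SCN 2
assert read_as_of("price", query_scn) == 100   # the report still sees 100
```

No lock was taken on the row for the read; the updater proceeded, and the reader reconstructed its consistent image from the undo record.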

Oracle's consistent read query, which is particularly suited to loosely coupled systems, allows queries on one node to proceed
independently of updates on other nodes. Oracle Parallel Server's multi-versioning
buffer cache actually allows data accesses to
be satisfied by versions of blocks in one cache that are being simultaneously updated in another cache. Lock-based consistency
schemes that require readers to read and lock current data cannot be made to perform well on loosely-coupled systems, because
the distributed caches constantly ping updated data to disk to satisfy query accesses from other nodes. This is one reason why
only Oracle can provide high availability and scalability on cluster hardware architectures.

The component known as Group Membership Service (GMS) in Oracle8 has been integrated into the Oracle8i database as
Cluster Group Services (CGS). Functionally similar to GMS, CGS defines an enriched API to vendor supplied Cluster Manager
(CM) software. This API removes the earlier restriction of one GM instantiation per
cluster database by allowing the database to
directly interface with the CM. Cluster Group Services provides the following benefits:

     Improves usability of Oracle Parallel Server. No extra service such as GMS needs to be managed.  CGS is automatically
     started and shutdown when the database is started and shutdown.

     Permits multiple Oracle versions to co-exist on the same hardware cluster configuration.

Oracle's Cluster Group Services interacts with the hardware vendor-specific cluster manager (CM) to keep track of which nodes
join or are removed from the cluster. The cluster manager, a vendor-supplied module, allows Cluster Group Services to track
which nodes are part of the parallel server database and maintain membership information. Cluster Group Services together with
the CM ensures data integrity by the following means:

     When a node is cast out of the cluster, the surviving nodes will not see any evidence of that node in the cluster, such as
     messages or writes to shared disk.

      For a given cluster, there is only one active set of nodes at all times. This
means there is never a "split-brain" in the cluster
     - where each group thinks it comprises the cluster and accesses the database in an uncoordinated fashion - which can
     cause data corruption.

Oracle Parallel Server exploits all the robust single instance Oracle Fast Start
Fault Recovery capabilities. For example, Fast
Start Checkpoint and Fast Start Rollback augment the availability model by providing
failover capabilities through clustering.


Expert Comment

ID: 6189603

Doc ID:  Note:52561.1
Subject:  SOLARIS: Configuring Oracle Parallel Server 7.3.x - 8.0.x on SUN PDB 1.2
 Content Type:  TEXT/PLAIN
Creation Date:  21-MAY-1998
Last Revision Date:  20-APR-2001

 To inform the reader of the details involved in running Oracle Parallel Server
 (OPS) on Sun PDB version 1.2
 You should refer to this document prior to installing and configuring Oracle on
 a Sun PDB system.
 Oracle7 Parallel Server Concepts & Administration (Release 7.x)
 Oracle8 Parallel Server (Release 8.x) Concepts & Administration
 Oracle7 (Release 7.x) Installation Guide for Sun SPARC Solaris 2.x
 Oracle8 Server (Release 8.x) Installation Guide for Sun SPARC Solaris 2.x
 Ultra Enterprise 2 Cluster Hardware Planning & Installation Guide, or
 Ultra Enterprise Cluster PDB Hardware Site Preparation, Planning & Installation
 Guide, or
 SPARCcluster PDB Hardware Site Preparation, Planning & Installation Guide

1.  Introduction to PDB 1.2
2.  Installation/configuration of PDB 1.2
3.  Volume Manager
4.  The UNIX DLM
5.  Differences between Oracle7 and Oracle8 OPS
6.  Installing Oracle
7.  Creating an Oracle instance
8.  Starting and stopping the system
9.  Useful commands
10. Useful files
11. Relevant Software packages
12. Standard system processes

1. Introduction to PDB 1.2
The Ultra Enterprise Cluster PDB system is a loosely-coupled configuration of
two Ultra Enterprise PDB Servers designed to provide a high availability
system for Oracle Parallel Server, Sybase MPP, or Informix-Online XPS.

Note - the term Ultra Enterprise PDB Server is a general term that refers to the
SPARCserver 1000, the SPARCcenter 2000, the Ultra Enterprise 2 Server, and the
Ultra Enterprise Server 6000, 5000, 4000, and 3000 models.

The benefits of coupled database servers are increased performance and a higher
level of database availability. The coupled systems use standard Sun hardware
and the coupling is achieved through software protocols.

An Ultra Enterprise PDB system consists of two nodes connected by two 100
Mbit/sec Ethernet links, or 100 Mbyte/sec scalable coherent interface (SCI)
links. Each node is a standard shared memory multiprocessor (SMP).

Database volumes are stored on either two or more SPARCstorage Arrays (SSA), or
two or more SPARCstorage MultiPacks. Each SSA is cross-connected to both servers
via 266 Mbit/sec Fibre Channel optical links. MultiPacks are cross-connected
to both servers via SCSI-2 cables. Database volumes can be mirrored across
multiple storage devices for high availability.

Each server can be configured with a set of private disks to store its operating
system, local filesystems, and other data. Oracle Parallel Server uses the
shared disk configuration of the Ultra Enterprise PDB system. In this
configuration, a single database is shared among multiple instances of
Oracle Parallel Server, which access the database concurrently. Conflicting
access to the same data is controlled by means of a distributed lock manager
(the Oracle UNIX DLM).

Both servers run the Solaris 2.5.1 operating system. The Ultra Enterprise PDB
1.2 software package contains the software components required to run the
various supported database servers.

Processes running on different nodes must establish a mutual agreement on which
processes are live and participating in the distributed computation. A
consistent agreement on membership must be maintained by the system, even if
various simultaneous system and network failures exist. Each Ultra Enterprise
PDB node runs a cluster membership monitor (CMM) process. The CMM processes
exchange periodic heartbeat messages to detect changes in membership. When
memberships change (by nodes leaving or joining), the CMM processes
communicate and coordinate global reconfiguration of various system services
(for example, the DLM and Volume Manager).

Each node runs a cluster connectivity monitor process (CCM) to monitor failures
in the two internal network links. If one link fails, the CCM automatically
reconfigures the networking drivers to use the remaining link for all traffic.
A link failover is transparent to the programs running on the nodes.

The following diagram shows the Ultra Enterprise PDB hardware configuration:

      |----------------------|                   |-----------------------|
      |        Node A        |                   |        Node B         |
      |                      |                   |                       |
      |                      | Dual 100 Mbit/sec |                       |
      |                      | Ethernet or 100   |                       |
      |                      | Mbyte/sec SCI     |                       |
      |                      | Internal networks |                       |
      |                      |                   |                       |
      |                     >|<----------------->|<                      |
      |                      |                   |                       |
      |                     >|<----------------->|<                      |
      |                      |                   |                       |
      |                      |                   |                       |
      |                      |                   |                       |
      |----------------------|                   |-----------------------|
            |        |    |                         |     |         |
            |        |    |   |---------------------|     |         |
            |        |    |---|------------------|        |         |
            |        |        | Optical links    |        |         |
         Public      |        | or SCSI-2        |        |      Public
        Ethernet   |------------|              |------------|   Ethernet
                   |  SSA or    |              |  SSA or    |
                   | Multipacks |              | Multipacks |
                   |------------|              |------------|

2. Installation/configuration of PDB 1.2
This operation will most likely be performed by a Sun software engineer, but is
outlined here for reference. See Ultra Enterprise 2 Cluster Hardware Planning &
Installation Guide for further details.

The following items must be determined before installing software:

- Hardware configuration
- Cluster name
- Node names
- System administration workstation name
- IP addresses for:
  Administration workstation
  Network terminal server
- Ethernet addresses (if using Ethernet interconnect)
- Disk space for:
  Solaris 2.5.1 operating system
  Ultra Enterprise PDB 1.2 software
  File systems
  Relational database systems
  Other applications
- Partition table for boot disks
- Disk mirroring table
- Disk striping table
- Hot spares table
- List of current patches (Solaris 2.5.1 and PDB 1.2)
- Group controlling Oracle (default is dba)
- Quorum device
- NTS serial port numbers

Installation tasks:

1.  Prepare for software installation
2.  Gather the required system information
3.  Install Solaris 2.5.1 on the administration workstation
4.  Create /etc/clusters and /etc/serialports files
5.  Install the administration tools (PDB 1.2 client software) on the
    administration workstation (see note 1 below)
6.  Install required patches for PDB 1.2 client software on the administration
    workstation
7.  Start the cluster console
8.  Install Solaris 2.5.1 operating system on both nodes of the cluster
9.  Install Ultra Enterprise PDB 1.2 server software on both nodes of the
    cluster (see note 1 below)
10. Install required patches for PDB 1.2 server software on both nodes of the
    cluster
11. Perform additional steps:
    - Adding paths
    - Configure the rootdg disk group
    - Setup and configure disk groups and volumes
    - Configure UNIX kernel for Oracle (see chapter 6 for further details)
12. Configure the correct volume manager for the RDBMS used (in the case of
    Oracle, this is the vxvm cluster volume manager, providing shared disk
    access)
13. Install the Oracle UNIX DLM (see chapter 5 for further details)
14. Setup the OpenBoot PROM monitor
15. Reboot and start both nodes
16. Install the RDBMS software

Note 1. The PDB 1.2 client and server software can be installed using the
pdbinstall shell script, located on the Ultra Enterprise Cluster PDB 1.2
CD-ROM. This shell script is a wrapper that uses pkgadd, pkgrm etc. to install,
deinstall or check the installation of a PDB system. It runs interactively if
no command line options are given, or in batch mode otherwise. The following is a list
of questions that will be asked during an interactive installation of the PDB
1.2 server software:

- What is the name of the cluster?
- What is the hostname of node 0?
- What is the nodename of node 1?
- What type of network interface will be used (ether | SCI)?
- Will Oracle Parallel Server be used in this configuration?
- What is node0's first private network interface?
- What is node0's second private network interface?
- What is node1's first private network interface?
- What is node1's second private network interface?

If the network interface is ether:

- What is node0's ethernet address?
- What is node1's ethernet address?

If a disk is chosen as the quorum device:

- Disk address [tXdY] ?
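Step 4 of the installation tasks above creates the /etc/clusters and
/etc/serialports files on the administration workstation. The sketch below
shows the general shape of both files; the cluster name, node names, terminal
server name and port numbers are hypothetical, and the files are written to
/tmp here so the sketch can be run harmlessly (the real files live in /etc):

```shell
# /etc/clusters:    one line per cluster:  <clustername> <node0> <node1>
# /etc/serialports: one line per node:     <nodename> <terminal server> <port>
# All names below are placeholders -- substitute your own.
cat > /tmp/clusters <<'EOF'
pdbclust nodeA nodeB
EOF
cat > /tmp/serialports <<'EOF'
nodeA nts0 5002
nodeB nts0 5003
EOF
cat /tmp/clusters /tmp/serialports
```

Consult the Sun installation guides listed at the top of this note for the
authoritative file formats for your release.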

4. The UNIX DLM

You can see in chapter 2 that the pdbinstall script asks if Oracle will be used
in the configuration. If so, the script will install the pdbadmin scripts that
specifically locate and start/stop the Oracle 7 UNIX DLM.

The Oracle 7 UNIX DLM therefore needs to be present for the pdbadmin start and
stop scripts to function correctly, i.e. you must install the Oracle 7 UNIX
DLM before you can join the nodes together as a cluster using pdbadmin.

The Oracle 7 UNIX DLM can be found on all Oracle7 and Oracle8 server CD-ROMs
from release onwards, and is often referred to in the Oracle
documentation as the "Parallel Server Patch". It is located in the ops_patch
directory, and is issued in pkgadd format to facilitate easy installation on
Solaris. It should be installed as the root user, as follows:

# cd <CD-ROM mountpoint>
# pkgadd -d ops_patch

Choose the ORCLudlm package, and follow the pkgadd instructions.
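After the pkgadd completes, it is worth confirming the package actually landed
on each node. A small sketch (the guard clause is only there so the example
can run on a non-Solaris machine without error; on the cluster nodes pkginfo
is always present):

```shell
# Confirm the ORCLudlm package is installed on this node.
# pkginfo is Solaris-specific, hence the guard.
if command -v pkginfo >/dev/null 2>&1; then
    # Look for "STATUS: completely installed" and check the VERSION
    # line matches the Oracle server release you are about to install.
    pkginfo -l ORCLudlm || echo "ORCLudlm not installed on this node"
else
    echo "pkginfo not available on this system"
fi
```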

With each new release of Oracle7 or Oracle8, the ORCLudlm package is likely to
change. Therefore if you ever install a later version of Oracle on PDB 1.2, you
should install the package from the latest Oracle CD-ROM.

Here are details of the currently available ORCLudlm packages:

Oracle Server CD  Version              Date built  Notes
~~~~~~~~~~~~~~~~  ~~~~~~~              ~~~~~~~~~~  ~~~~~~~~~~~~~~~~~~~~
                  1.2                  14/03/97    Does NOT include gms
                  1.2                  17/06/97    Includes Oracle8 gms
                                       26/09/97    Includes Oracle8 gms
                                       05/12/97    Includes Oracle8 gms
                                       08/04/98    Does NOT include gms
                                       08/04/98    Does NOT include gms

The Oracle Sun Product group also maintain a "latest" release of the ORCLudlm
package, that "fixes all known bugs" instead of releasing bug-specific patches.

The package can be found on tcpatch.us, in /u01/patch/SUN_SOLARIS2/UDLM/LATEST
The latest release is:

Package Name      Version              Date built  Notes
~~~~~~~~~~~~      ~~~~~~~              ~~~~~~~~~~  ~~~~~~~~~~~~~~~~~~~~
ORCLudlm            24/09/98    Does NOT include gms

and fixes the following known bugs:

553797 : Processes hang in OPS environment when batch job executing
511995 : DLM hangs when doing deadlock detection for heavy OLTP workloads
378617 : ORA-600 [1153] occurs when recovering after node failure (deadlock
         detection problems)
524448 : Select across dblinks hangs
547019 : dlm experiences assert failures and core dumps
other  : Extra trace information for diagnosing further problems.

5. Differences between Oracle7 and Oracle8 OPS
The Oracle7 UNIX DLM consists of a number (currently 3) of separate processes,
which run on each node of the cluster. These processes are started automatically
by the pdbadmin command. The DLM is therefore external to Oracle7, and an
Oracle7 instance (with the Parallel Server Option linked in) will fail to mount
a database shared if the DLM is not running. There is only ever one "instance"
of a DLM (3 processes) running on a node at one time. Therefore if you have
multiple Oracle7 databases/instances on a node, they all share the same DLM.
The DLM is capable of managing multiple instances, and separates their work with
the concept of domains. By default the DLM manages a maximum of 9 domains,
meaning you cannot start more than 9 Parallel Server instances on a node without
first increasing the number of domains the DLM will manage (see chapter 14 for
further details).

In Oracle8 the DLM has been incorporated into the framework of the Oracle
background processes. There are now 2 additional background processes, LMON and
LMD0, which perform the equivalent tasks of the Oracle 7 DLM processes. Since
each instance has its own DLM (communicating with the DLM of any other
instance that has the same database mounted shared), there is less chance of
hitting limits such as the maximum number of domains (see above). The
incorporation of the DLM into the Oracle8 framework also means that DLM
parameters are set in the instance's init.ora file, rather than separate
configuration files (as is the case with the Oracle7 DLM).

Note that in Oracle8 the LCK0 lock manager background process still exists (as
it does in Oracle7), and (like Oracle7) mediates between the database and DLM
for certain lock requests.

Although the Oracle8 DLM is now an integral part of the instance, it relies on
the presence of the Group Membership Services (GMS) daemon. This process is
also automatically started by pdbadmin, if the executable is present in the
/opt/SUNWcluster/bin directory. Otherwise, you have to start gms manually using
the executables in the $ORACLE_HOME/bin directory.
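A quick way to see whether the Oracle8 lock-manager background processes and
the GMS daemon are up on a node is to grep the process list. The process name
patterns below follow the standard ora_<name>_<SID> convention for background
processes; the fallback message is only for illustration:

```shell
# Show any LMON/LMD0 background processes and the ogms daemon on this node.
# Background processes are named ora_lmon_<SID> / ora_lmd0_<SID>.
ps -ef | egrep 'ora_lmon|ora_lmd|ogms' | grep -v grep \
    || echo "no LMON/LMD0/ogms processes found"
```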

6. Installing Oracle

6.1. Create the dba group and oracle user

If not already present, as root, run:

# groupadd dba
# useradd -d <home dir> -s <login shell> -c "Oracle s/w owner" -g dba oracle
e.g. # useradd -d /export/home/oracle -s /bin/ksh -c "Oracle s/w owner" -g dba oracle

If not already present, create the local bin directory:

# mkdir -p /usr/local/bin

6.2. Configure sufficient shared memory and semaphores

On both nodes -

As root, ensure /etc/system contains the following entries:

set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmsl=100
set semsys:seminfo_semmns=1000
set semsys:seminfo_semmni=70

If not, add/amend them, and reboot. After the reboot, ensure the changes have
taken effect by running:

# /etc/sysdef
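A useful rule of thumb (not from the note itself) is that shmmax should be at
least as large as the SGA you intend to configure, so Oracle can allocate the
SGA in a single shared memory segment. A back-of-envelope check, using an
arbitrary example SGA of 512 Mb:

```shell
# Sanity arithmetic: the SGA should fit inside one shared memory segment,
# i.e. sga_bytes <= shmmax.  The 512 Mb figure is an arbitrary example.
sga_mb=512
shmmax=4294967295                     # value set in /etc/system above
sga_bytes=$((sga_mb * 1024 * 1024))
echo "SGA: $sga_bytes bytes, shmmax: $shmmax"
[ "$sga_bytes" -le "$shmmax" ] && echo "SGA fits in one segment"
```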

6.3. Install the Oracle7 UNIX DLM

On both nodes -

First shutdown ALL instances.

Shutdown the pdb software. As root, run:

# /opt/SUNWcluster/bin/pdbadmin stopnode

Backup the current DLM configuration file (if present):

# cd /etc/opt/SUNWcluster/conf
# cp <cluster_name>.ora_cdb <cluster_name>.ora_cdb.save

Install the DLM software:

# cd <location of ORCLudlm.tar file>
e.g. # cd /cdrom/oracle805/ops_patch

# cp ORCLudlm.tar /tmp
# cd /tmp
# tar xvf ORCLudlm.tar
# pkgadd -d . ORCLudlm

Copy back the original DLM configuration file (if present):

# cd /etc/opt/SUNWcluster/conf
# cp <cluster_name>.ora_cdb.save <cluster_name>.ora_cdb

Ensure the DLM configuration file contains valid entries. A good example is:

oracle.maxproc  : 1500
oracle.maxres   : 30000
oracle.maxlock  : 60000
oracle.dba.gid  : dba
oracle.useISM   : 1

Startup the pdb software:

# /opt/SUNWcluster/bin/pdbadmin startnode

6.4. Install Oracle software

Much the same process as installation for a single instance/node, with 3
differences:

a) The installer detects the presence of pdb software, and gives you the option
   to copy all s/w installed on one node, to the other. This is a useful
   facility (saves mounting & installing the same cd & s/w twice), but note it
   relies on rsh/rcp, so you must first configure a .rhosts file for the oracle
   user (on both nodes)

b) You must choose/install the "Parallel Server Option" as well as everything
   else you require

c) It's unlikely you'll want to create a database as well at this stage
   (i.e. choose "Install New Product - Do Not Create DB Objects"). This is
   because the default blocksize of 2Kb is unlikely to be appropriate, and
   you probably won't have created any shared volumes (for datafiles) yet.
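The .rhosts requirement in a) can be sketched as follows. The node names are
placeholders, and the file is written to /tmp here for safety; the real file
belongs in the oracle user's home directory on BOTH nodes:

```shell
# Minimal .rhosts allowing the oracle user on each node to rsh/rcp to the
# other.  nodeA/nodeB are placeholders for your actual node names.
cat > /tmp/rhosts.example <<'EOF'
nodeA oracle
nodeB oracle
EOF
chmod 600 /tmp/rhosts.example          # rsh refuses world-readable .rhosts
cat /tmp/rhosts.example
```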

Note, from 7.3 onwards, the Oracle installer has a Motif interface, which is
a lot easier to use than the older character version. Login as the oracle owner
and run:

# DISPLAY=<hostname of management workstation>:0.0; export DISPLAY
# cd <cdrom mount point>/orainst
e.g. # cd /cdrom/oracle805/orainst
# ./orainst /m

Choose the following

Install Type = Default Install
Installation Activity Choice = Install, Upgrade, or De-Install Software
Installation Options = Install New Product - Do Not Create DB Objects

ORACLE_BASE=<mount point>/app/oracle

Answer "Yes" to the prompt to install the products on all nodes, supply the
second node name, followed by a blank entry to finish.

From the Software Asset Manager screen, choose the products to install.
Experience shows that your list should include (as a minimum):

Oracle 7
Oracle Unix Installer
Oracle7 Distributed Database Option
Oracle7 Parallel Query Option
Oracle7 Parallel Server Option (licensable)
Oracle7 Server (RDBMS) (licensable)
SQL*Net V2
SQL*Plus (licensable)
TCP/IP Protocol Adapter (V2)

Others you might wish to consider:

Advanced Replication Option
Oracle Intelligent Agent (for Enterprise Manager)
Oracle On-line Text Viewer (Provides a basic html viewer)
Oracle Server Manager (Motif)

Oracle 8
TCP/IP Protocol Adapter
Oracle8 Parallel Server Option (licensable)
Oracle8 Parallel Server Management (opsctl)
Oracle UNIX Installer
Oracle8 Enterprise (RDBMS) (licensable)
SQL*Plus (licensable)

Others you might wish to consider:

Oracle Intelligent Agent (for Enterprise Manager)
Oracle On-Line Text Viewer (Provides a basic html viewer)
Oracle8 Objects Option (provides support for objects) (licensable)
Oracle8 Partitioning Option (Provides support for partitions) (licensable)

9. Useful commands
To use these commands, and to obtain access to their man pages, you need to
ensure the following:

PATH includes /opt/SUNWcluster/bin:/opt/SUNWvxva/bin
MANPATH includes /opt/SUNWcluster/man:/opt/SUNWvxva/man:/opt/SUNWvxvm/man
Run catman to recreate the windex database
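The three settings above can go in the profile of whoever administers the
cluster, along these lines (the catman step is Solaris-specific and must be
run as root, so it is shown commented out):

```shell
# Append the cluster and Veritas tool/man directories for this session
# (suitable for ~/.profile).
PATH=$PATH:/opt/SUNWcluster/bin:/opt/SUNWvxva/bin
MANPATH=${MANPATH:-/usr/share/man}:/opt/SUNWcluster/man:/opt/SUNWvxva/man:/opt/SUNWvxvm/man
export PATH MANPATH
echo "$PATH" | grep -c 'SUNWcluster'
# catman -w    # then rebuild the windex database (as root, Solaris only)
```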


vxva

  ELF executable. The Veritas Visual Administrator (VxVA) graphical user
  interface. It provides a Motif-based interface to the objects of the Veritas
  Volume Manager (VxVM).

  Usage: vxva [-h] [-t] [-mono]
        [-view view_dir]
        [-display host:server.screen]
        [-geometry widthxheight]
        [-fg color] [-bg color]
        [-title string]
        [-xrm resource]


clustm

  ELF executable. The cluster manager program. Used to start, stop, reconfigure,
  and query all aspects of the PDB cluster. It is not usually necessary to call
  this command manually, since its functionality is incorporated into other
  commands (i.e. pdbadmin, get_node_status).

        clustm stop clustname { nodeid | this }
        clustm abort clustname { nodeid | this }
        clustm stopall clustname
        clustm abortall clustname
        clustm reconfigure clustname
        clustm getallnodes clustname
        clustm getcurrmembers clustname
        clustm getstate clustname
        clustm hasquorum clustname
        clustm getlocalnodeid clustname
        clustm ismember clustname nodeid
        clustm getseqnum clustname
        clustm dumpstate clustname


dlmdump

  ELF executable. Command line program to interrogate the current DLM lock
  database. It uses the lk_sync_qry_.... functions from the DLM library
  /opt/SUNWcluster/lib/libudlm.so. Care should be taken when interpreting or
  acting upon its output, since it displays a "snapshot" of the current state,
  which could very quickly change.

  Usage:dlmdump [options]
            -h                            Help/Usage
            -l <lock pointer>             Lock Object
            -r <resource pointer>         Resource Object
            -p <process id>               DLM client pid
            -d <rdomain pointer>          Rdomain Object
            -g <group id>                 DLM group pid
            -P <process pointer>          Process Object
            -R <resource name>            resource name(string)
            -D <rdomain name>             rdomain name(string)
            -O <i1> <i2> <types> <dbname> Oracle Format resname
            -a <res/lock/proc/pres/rdom>  all <res/lock/proc/pres/rdom> pointers
            -a convlock                   all converting lock (pointers)
            -a convres                    all res ptr with converting locks


  dlmdump -a proc

    List all process pointers

  dlmdump -P <process pointer>

    Display information about a DLM client process (including lock pointers on
    its granted queue). Note, this command is the same as "dlmdump -p <pid>".

  dlmdump -l <lock pointer>

    Display information about a particular lock (including its associated
    resource).

  dlmdump -r <resource pointer>

    Display information about a particular resource.


dlmstat

  ELF executable. Command line program to display DLM statistics. It uses the
  lk_sync_qry_stat and lk_sync_ctl_stat functions from the DLM library
  /opt/SUNWcluster/lib/libudlm.so.

  Usage: dlmstat parameters
    -h      Help/Usage

    -p [-a|-d|-g|-z] <pid> -r [-a|-d|-g|-z] <res> -t <sec> -n <# of times>
            Add, delete, get or zero out process/resource structures.
            Set pid/res to 0 to zero out all process/resource structures.
            Specify the time (seconds) of each interval with -t
                and the number of intervals with -n.

    -i [-q|-w] -m -t <sec> -n <# of times>
            Query or wipe out instance-wide statistics.
            Display lock messages statistics with -m
            Specify the time (seconds) of each interval with -t
                and the number of intervals with -n.


  dlmstat -iq

    Query instance-wide statistics.

  dlmstat -iqmt 5 -n 2

    Query instance-wide and lock message statistics twice, at 5 second
    intervals.

  dlmstat -pa <pid>

    Start collection of process statistics for process <pid>. Note, the process
    must be a DLM client (i.e. using the DLM, and shown by "dlmdump -a proc")

  dlmstat -pg <pid>

    Query process statistics for <pid>. Note, statistics must have been
    previously enabled using "dlmstat -pa".


dlmtctl

  ELF executable. Command line program to enable DLM tracing.

  Usage: dlmtctl [options]
   -h                Help/Usage
   -s                Show trace control status
   -l <level>        Set tracing on at the specified level
   -n                No tracing - turn tracing off  
   -g y              Put trace information in global tracefile
   -g n              Put trace information in local tracefile

10. Useful Files


  Most pdb commands accept a clustername argument (e.g. pdbadmin startnode). In
  most cases, this argument can be omitted, in which case the default
  clustername held in this file will be used.


TEMPLATE.cdb

  The template cluster configuration file, created by the installation of
  package SUNWpdbcf.


/etc/opt/SUNWcluster/conf/<clustername>.cdb

  This is the main cluster configuration file, created during the installation
  of package SUNWpdbcf, and based on the TEMPLATE.cdb file (see above), with
  entries such as clustername and node names replaced by real values.

  Do NOT modify this file manually. The only supported method is to use the
  pdbconf utility.


/etc/opt/SUNWcluster/conf/<clustername>.ora_cdb

  Referenced by the <clustername>.cdb file as the "udlm.oracle.config.file".
  This is the cluster configuration for the Oracle UNIX DLM. It specifies the
  maximum number of processes, resources and locks the DLM will manage, as well
  as the dba group, and whether Intimate Shared Memory (ISM) should be used by
  the DLM processes.

  This file CAN be edited manually and changes will take effect the next time
  the node joins the cluster.

  Refer to the man page for "ora_cdb" for further details.

/var/opt/SUNWcluster/dlm_<node name>/cores/core<pid>/core

  Core file generated by a failure within the DLM executable
  /opt/SUNWcluster/bin/lkmgr. Analyze it with a debugger such as adb or dbx.
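A first look at such a core is usually a C stack trace. A sketch with adb on
Solaris (the <node name> and <pid> placeholders are the same as in the path
above; $c prints the stack and $q quits adb):

```shell
# Sketch: pull a stack trace from an lkmgr core with adb (Solaris).
cd /var/opt/SUNWcluster/dlm_<node name>/cores/core<pid>
printf '$c\n$q\n' | adb /opt/SUNWcluster/bin/lkmgr core
```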


  Oracle UNIX DLM log file. Is written to by a number of processes which
  interact with the DLM. Shows reconfiguration events and errors.


  Oracle UNIX DLM global trace file. Is written to by a number of processes
  which interact with the DLM, when DLM tracing is enabled (using dlmtctl). By
  default NO tracing is enabled, and this file is therefore empty.

  If tracing is enabled to local trace files (rather than the global trace file
  above), there will be 3 trace files called dlm_<pid>.trc, where <pid>
  corresponds to the process ids of the two dlmd processes and the dlmmon
  process.


  Written to by just the 3 DLM processes, and shows DLM startup information
  (including shared memory usage, based on the number of locks, resources,
  processes etc.)


  Log file for the ogmsctl process which last started the ogms daemon


  Log/trace file for the ogms daemon. If tracing is enabled (using ogmsctl),
  additional information is written to this file.

11. Relevant Software Packages
Both nodes of cluster (Server software):

ORCLudlm        Oracle UNIX Distributed Lock Manager
                (sparc) Oracle

ORCLudlm.2      Oracle UNIX Distributed Lock Manager
                (sparc) Oracle UNIX Distributed Lock Manager

SUNWccm         Ultra Enterprise PDB Cluster Connectivity Monitor
                (sparc) 1.2,REV=1.2_FCS

SUNWcmm         Ultra Enterprise Cluster Membership Monitor
                (sparc) 1.2,REV=1.2_FCS

SUNWdmond       Ultra Enterprise PDB System Monitor - Server Daemon
                (sparc) 1.2,REV=1.2_FCS

SUNWff          Ultra Enterprise FailFast Device Driver
                (sparc) 1.2,REV=1.2_FCS

SUNWpdb         Ultra Enterprise PDB Cluster Utilities
                (sparc) 1.2,REV=1.2_FCS

SUNWpdbcf       Ultra Enterprise PDB Configuration Database
                (sparc) 1.2,REV=1.2_FCS

SUNWscman       Ultra Enterprise PDB Man Pages
                (sparc) 1.2,REV=1.2_FCS

SUNWudlm        Ultra Enterprise PDB UNIX Distributed Lock Manager Interface
                (sparc) 1.2,REV=1.2_FCS

SUNWvmdev       SPARCstorage Volume Manager (header files)
                (sparc) 2.2

SUNWvmman       SPARCstorage Cluster Volume Manager (manual pages)
                (sparc) 2.2

SUNWvmman.2     SPARCstorage Cluster Volume Manager (manual pages)
                (sparc) 2.2,PATCH=4

SUNWvxva        SPARCstorage Volume Manager Visual Administrator
                (sparc) 2.2

SUNWvxvm        SPARCstorage Cluster Volume Manager
                (sparc) vxvm2.2/cvm2.2

System administration workstation (Client software):

SUNWccon        Ultra Enterprise PDB Cluster Console
                (sparc) 1.2,REV=1.2_FCS

SUNWccp         Ultra Enterprise PDB Cluster Control Panel
                (sparc) 1.2,REV=1.2_FCS

SUNWclmon       Ultra Enterprise PDB Cluster Monitor
                (sparc) 1.2,REV=1.2_FCS

SUNWpdbch       Ultra Enterprise PDB Common Help Files
                (sparc) 1.2,REV=1.2_FCS

SUNWpdbdb       Ultra Enterprise PDB Serialports/Clusters Database
                (sparc) 1.2,REV=1.2_FCS

12. Standard system processes
When both nodes are successfully joined as a cluster, you should see the
following processes running (ps -fu root):

root     9     1  vxconfigd -m boot
root   329     1  /sbin/sh - /usr/lib/vxvm/bin/vxsparecheck root
root   334   329  /sbin/sh - /usr/lib/vxvm/bin/vxsparecheck root
root   335   334  vxnotify -f -w 15
root  1091     1  /opt/SUNWcluster/bin/clustd -f /etc/opt/SUNWcluster/conf/
root  1121     1  dlmmon -n 2 -c /etc/opt/SUNWcluster/conf/<clustername>.cdb
                  -i 0
root  1129  1121  dlmd
root  1130  1121  dlmd
root  1394     1  ogms
root  1526     1  /opt/SUNWcluster/bin/ccmd -w -m 0 -c /etc/opt/SUNWcluster/

vxconfigd -m boot

This process is owned by init (1) and runs the /sbin/vxconfigd executable. This
is the Volume Manager configuration daemon, responsible for the initialization
of Volume Manager at system boot, and subsequent maintenance of disk and disk
group configurations.

/sbin/sh - /usr/lib/vxvm/bin/vxsparecheck root

You will see two of these processes. The parent is owned by init and started by
the /etc/rc2.d/S95vxvm-recover script. However, it's the child (shell) process
which does the work. It monitors the Volume Manager by analyzing the output
of the vxnotify command, waiting for it to report failures. Upon detecting a
failure, it'll send a mail item to root, and attempt to replace and reconstruct
the disk/plex/subdisk/volume with a hot-spare.

vxnotify -f -w 15

This process is owned by the child vxsparecheck process, and runs the
/usr/sbin/vxnotify executable. It alerts its parent to plex, volume and disk
detach events.

/opt/SUNWcluster/bin/clustd -f /etc/opt/SUNWcluster/conf/wcss-uk-pdb.cdb

This process is owned by init (1), started by the "pdbadmin startnode" command,
and is the cluster membership monitor (CMM) process (see chapter 1 for further
details).

/opt/SUNWcluster/bin/ccmd -w -m 0 -c /etc/opt/SUNWcluster/conf/wcss-uk-pdb.cdb

This process is owned by init (1), started by the "pdbadmin startnode" command,
and is the cluster connectivity monitor (CCM) process (see chapter 1 for further
details).

dlmmon -n 2 -c /etc/opt/SUNWcluster/conf/wcss-uk-pdb.cdb -i 0

This process is owned by init (1), and is the first of the 3 UNIX DLM processes
to be started by "pdbadmin startnode". It is actually running the
/opt/SUNWcluster/bin/lkmgr executable. This process performs the task of the DLM
lock monitor.


dlmd

There are two of these processes, both owned by the DLM lock monitor process.
Again, both these processes are actually running the /opt/SUNWcluster/bin/lkmgr
executable. These processes perform the tasks of the DLM connection manager and
lock daemon.

