
How to setup Oracle Standard Edition Database Consolidation with RAC and KVM


Overview:

We migrated from Oracle Enterprise Edition 11gR2 to Oracle Standard Edition 11gR2 to reduce costs and increase performance and availability. The improvements: 63x drive I/O, 16x memory, 5x CPU, 25% backup time, and an 82% cost reduction in support.

Goals:

Our first goal is to increase performance by adding CPU cores and memory and by increasing I/O throughput on the disk systems.  We had performance problems with our previous system.
Our second goal is to increase availability by adding snapshots as backups.   This provides us 4 independent backups.  Look for a document from us about the details of these backups in the near future.  The four backup systems are:

1.

Online Hot backups using RMAN to create weekly Level 0 and daily Level 1 incremental backups.  These are backed up from the server by IBM Tivoli Storage Manager and kept for 90 days.

2.

Online hot logical backups using data pump.  Each backup is archived and kept forever.  Note that the parallel parameter is not available in Standard Edition; however, I have a script that accomplishes parallel logical dumps and restores in Standard Edition.
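The script itself is not reproduced here, but the idea can be sketched: split the schema list across several single-threaded expdp jobs and run them concurrently.  Everything below (schema names, the DBARCHIVE directory, file names) is a hypothetical illustration, not the production script:

```shell
# Sketch only: Standard Edition lacks the data pump PARALLEL parameter,
# but several single-threaded expdp jobs can run at the same time, each
# taking one slice of the schema list.
SCHEMAS="HR SALES FIN OPS"   # hypothetical schema names
JOBS=2                       # degree of "parallelism"
j=0
list0=""; list1=""
# round-robin the schemas into JOBS comma-separated lists
for s in $SCHEMAS; do
  eval "cur=\$list$((j % JOBS))"
  eval "list$((j % JOBS))=\"\${cur:+\$cur,}$s\""
  j=$((j + 1))
done
# print the commands rather than run them (dry run for illustration);
# in real use, drop the echo and end each line with & followed by wait
echo "expdp system schemas=$list0 directory=DBARCHIVE dumpfile=slice0_%U.dmp logfile=slice0.log &"
echo "expdp system schemas=$list1 directory=DBARCHIVE dumpfile=slice1_%U.dmp logfile=slice1.log &"
echo "wait"
```

The same slicing works for impdp on the restore side.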

3.

A Level 0 backup is kept offsite along with daily FTP shipments of Archive logs. The Level 0 is updated monthly.
Partitioning and Disk speed:
RAID 10 may not be fast enough.  So try RAID 20, 30, or 40.  How?  Create 2, 3, or 4 RAID 10 arrays and combine them using RAID 0 in the OS, or combine them using ASM with external redundancy.
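A sketch of the OS-level combination, assuming two hardware RAID 10 arrays already presented to the host as /dev/sdb and /dev/sdc (all device and diskgroup names here are hypothetical):

```shell
# Stripe two hardware RAID 10 arrays together with software RAID 0
# (effectively "RAID 100"); device names depend on your controller:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Alternative: skip mdadm and let ASM do the striping.  With external
# redundancy, ASM trusts the hardware for mirroring and stripes across
# every disk in the group:
#   SQL> CREATE DISKGROUP data EXTERNAL REDUNDANCY DISK '/dev/sdb1', '/dev/sdc1';
```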

4.

Online hot snapshots are taken twice daily and kept for 3 days.  Note: Oracle Standard Edition has been capable of archive logs and hot backups since at least 2005.
Our third goal is to provide an upgrade path that does not involve new hardware purchases.  In our organization we have rules that make the purchase of hardware difficult.  Also, the servers we purchased for this project are more expensive than other servers we have, so the likelihood of having a new server to upgrade to is slim.  By using Red Hat KVM we can resize virtual machines and create new virtual servers.
Our fourth goal is to reduce costs.  Oracle Standard Edition Database is licensed by CPU socket instead of by cores.  At first it took a little thought to see that we did not actually need the Oracle Enterprise license.  Our applications do not depend on Enterprise features.  We had used Partitioning, but with faster storage we felt this was not as necessary.  We had used the Performance and Tuning packs, but Dell Quest Toad DBA Suite is a viable replacement.  Archive logging is important, but Standard Edition also does that.  Red Hat KVM is fast and cheaper than VMware.  With KVM being newer there were a few gotchas, but nothing we could not handle, and the future seems bright for KVM.  By consolidating we combined eight databases from four physical servers onto two physical servers, also reducing the number of Oracle licenses.

Hardware and Software mix:

Two IBM x3690 servers, Intel x86-64, each with two 10-core Intel Xeon E7-2870 chips, 256 GB of memory, mirrored 146 GB SAS drives, two mirrored 1 TB SATA drives, and two SAS HBAs with external ports for external storage.
[Depiction of two servers with a shared storage unit.]
Also an old server, 2 sockets, x86-64, with 2 GB of RAM and 100 GB (local RAID with hot spare) drives for the virtualization management station.
IBM DS3524 storage system and EXP3524 expansion with 3 x 200 GB flash drives, 24 x 600 GB 10k drives, 21 x 300 GB 15k drives, and 4 SAS connections to the host computers for the Oracle RAC system.
We used Red Hat Enterprise Virtualization server (RHEV Hypervisor - 6.4) four sockets.
We used Red Hat Enterprise Linux V5.9 (RHEL) four sockets for the virtual machines.
Oracle Standard Edition, 2 sockets (Oracle counts Intel x86 as half a socket).
We used Red Hat Enterprise Linux V6.4 (RHEL) two sockets for the Virtualization Management system.
We used a PC for the DBA and System administrator with IBM’s DS Storage Manager 10 software.

Configuration Overview:

When we speak of Snapshots we exclusively mean storage snapshots.  KVM has snapshotting also but we do not use it.
The RHEV hosts boot from the local 146 GB mirrored drives.  The 1 TB mirrored drives are for swap.  As it turns out, the auto install of RHEV does essentially what we intended without any need for further configuration.  Both RAIDs are hardware implemented.
The RHEL virtual machines will boot from the storage devices, so there is a single LUN (Logical Unit Number, a “hard drive” presented from the storage system to a host server) for both servers.  The means of attaching this LUN is different from all the other LUNs and is described in a later section.
The databases have a single LUN each, except the largest production databases, which have 3 LUNs: one for the majority of the structured data and indexes, one for REDO logs and control files, and a third for BLOB data and indexes.
The Management server boots from Local Raid with a hot spare.
The memory use of the two virtual systems is configurable and changeable allowing us our upgrade path.

Receiving and setting up the equipment:

The equipment arrived, and we assembled the servers and mounted them in a rack along with the storage.  We connected the SAS connections from both new servers to the DS3524.  We also ran three Ethernet cables to each server: two for NICs, and the third for IBM’s BMC management processor.

Setting up Storage:

The first step was to install IBM’s DS Storage Manager 10 software on the system admin PC.  The IBM’s DS Storage Manager 10 software is a download from the IBM website.
Next we updated the firmware on the DS3524 to the latest available on the IBM website. This updates the Storage Manager 10 software display for new features in the firmware.
We then created 3 arrays and 2 storage pools.  I could have set this up as an array and 2 storage pools but I wanted greater control over drive speeds.  
•      Array 200 is composed of 2 x 200 GB flash drives, which I set up as a mirrored pair with a hot spare.
•      Array 300 is composed of 10 x 300 GB 15k drives in RAID 10 with 2 hot spares.
•      Array 600 is composed of 4 x 600 GB 10k drives in RAID 10 with 2 hot spares.
•      Pool 300 is composed of 11 x 300 GB 15k drives.
•      Pool 600 is composed of 18 x 600 GB 10k drives.
The array drives should be faster than the pool drives, given that all will be nearly full.  Thus the production databases will be on Array 300 for data and indexes and Array 600 for BLOBs, with backups and the test and development databases on the pools.  The flash Array 200 is used for REDO logs for the production databases.
Next I created LUNs on the arrays and pools for each of the databases and backups.  The backups are on Pool 600, where I plan to use Oracle’s ASM ACFS file system.
Do not attach storage to the servers yet, unless you plan to boot your hosts from the storage system, in which case attach only those boot LUNs.  We chose to boot from local drives.

Installing Red Hat Virtualization RHEV and the management station:

I attended the Red Hat Enterprise Virtualization Class which was valuable but not sufficient.  If you can it is a good idea to attend the class.  I will attempt to convey enough detail in this document to enable you to setup a similar system.
Management station installs:
The installation of Red Hat Enterprise Linux (RHEL) on the management station was performed as a Desktop Server install, then modified to add the RHEV-M software via YUM.
As root perform the Install KVM Manager System steps for a basic installation that gives you access to the management website.
Install KVM Manager System
yum -y update
yum -y install rhevm rhevm-reports
rhevm-setup
Answer the following questions as appropriate.
Stop ovirt-engine service : yes
Override current httpd configuration: yes
HTTP port: 80
HTTPS port: 443
Host fully qualified host name:  rhevm.example.com
Password for Administrator (admin@internal):  password
Note remember the password you use for the above.
Organization Name :
Default Storage type : NFS
DB type: local
Local database engine password: password
Note remember this as well.
Configure NFS to be used with ISO domain: yes
Local ISO domain path: /exports/rehvisos
Configure IP tables: no
Note: I installed with firewalls off, and also turned off PAM restrictions and SELinux for simplicity, given that we have a corporate firewall.
Note our domain was not joined as it is not under my control.
rhevm-dwh-setup
rhevm-reports-setup


The KVM Management (RHEV-M) website is https://rhevm.example.com/ or whatever fully qualified host name you entered.  
Some notes about the KVM Management website:
•      This site works best in Firefox or Chrome web browsers and does not function in IE8.
•      This site works on Linux for the integration of SPICE.
•      However I use a Windows PC for everything but SPICE, which I can use when needed on the management station console.  I use VNC with the virtual hosts on my Windows PC.  SPICE has the advantage of showing boot screens, so it is needed during the OS installs of the RHEL virtual systems.
•      Data Center – The top level organizational object in RHEV-M. A data center contains all the physical and logical resources in a single managed virtual environment, including host computers, storage systems, virtual machines, logical networks, and storage domains.
•      Cluster – The second level organizational object in RHEV-M. Hosts, the physical nodes in a data center, are grouped into clusters. A cluster is a group of hosts that share the same resources, such as logical networks and storage domains.

Red Hat Virtualization RHEV installs:

Simply boot the servers on the RHEV Installation DVD and follow your nose. This install has one catch about the NIC setup explained below.
After Installing the RHEV system you will be presented with a configuration menu with several items.  We used 3 of the items shown:
NETWORK: In our installation we set up the network for one NIC only.   We do use both NICs, but don’t attempt to set up the second NIC at this time.   To get the second NIC to function for the RAC interconnect, we do this later from the management station website.
RHEV-M:  This entry connects to the RHEV-M station we set up earlier, using the management station’s IP address and port 443.  A username and password were not needed. We also checked the box for Connect to RHEV-M and validate certificate.  You will need to approve the certificate after choosing Apply.
Red Hat Network:  This entry connects the RHEV system to the Red Hat support network.
Next we went to the KVM Management website to approve the new hosts.  Simply choose the Web admin portal and log in using the admin credentials you set up during the install of RHEV-M, with the domain name “internal”.  Then navigate to the Hosts tab and right click on the hosts to choose Approve.

Configuration of KVM Management:

In the KVM management site (admin portal) we can setup the system.  
First, select the “Data Centers” tab and click New to create a new data center.  I named ours DC_Oracle, for data center Oracle.  This is a required step: you should not use the Default data center, but the Default data center is itself required by the system, so do not delete it.  For the properties of the data center, since we are using SAS, I chose Fibre Channel for the Data Center Type.  Note this is a workaround needed to make the SAS system cluster-able in the KVM system.   If you do not make this setting, you will not be able to mark the drives shareable and therefore will not be able to set up your RAC cluster.
Second select the “Clusters” tab and click new to create a new cluster in your data center.  You will need to look up the name associated to your CPU chips in the Host to set the properties of the cluster for the CPU Name field.
Third select the “hosts” tab and right click the hosts to put them into maintenance mode if not already there.  Once in maintenance mode you can edit the hosts to put them into your new data center and cluster.
Fourth, set up the logical network. This will enable the network interconnect for the RAC cluster on the second NICs of the host servers.  Navigate to the data center tab and select the data center you created.  Then click on the Logical Networks tab on the bottom part of the web page.  Click New to create a new network in addition to the RHEV network shown; I named the new network Private.  Attach the new logical network to your new cluster.  Then, in the “Hosts” tab, click the first host, go to the lower “Network Interfaces” tab, and click “Setup Host Networks”; this pops up a window.  Drag the private network to the unused NIC card to assign that NIC to the private network.  This sets up a bridge that the VMs connect their NICs to, creating the private network.

Configuration of KVM VM OS Storage:

The purpose of this storage is to boot the Virtual Machines and is configured differently from the storage for the databases.  The IBM Storage manager supports clustered servers using the host group to be able to attach shared storage drives.
In IBM Storage Manager, create the host entries for the two host machines in a single host group.  I named my host group “Oracle”; this allows the LUNs to be shared between the servers, which is required for every LUN.  Simply attach the LUNs to the Oracle host group when needed.
First, in Storage Manager, attach only one LUN.  In my case one LUN boots both virtual machines; you may choose to use two or more LUNs.  In either case, attach them one at a time following these steps.
Second reboot both RHEV systems so they will detect the new drives.  (There will be a lot of reboots)
Third, use the KVM Management site.  Navigate to the “Storage” tab for the data center you created earlier.  Choose New to add a new storage domain and select the newly attached LUN.  Since the LUNs show only their size, when attaching many LUNs of the same or similar size I do this one at a time to ensure that each LUN is correctly utilized. Failing to do this could result in a name in Storage Manager not matching its use, and foul your snapshotting strategy.
If you have more boot LUNs for virtual machines you may repeat the above steps.

Configuration of the KVM ISO Storage:

The purpose of this storage is to store disk image (ISO) files for the installation of virtual machines.  When we installed the KVM management software, one of the questions we answered was about setting up this storage as an NFS export.
In the KVM Management website, choose Storage and add a new storage domain of type ISO/NFS; I named mine ISO.  Enter the address of the NFS export, “10.21.2.55:/exports/iso”, using the IP of the management station, or another location if you so desire.
Using SSH or a local terminal, connect to the KVM Management station server.  Looking in the /exports/iso directory you will find a path similar to this.
/exports/iso/881018ea-19da-443c-8714-03d3d17ded71/images/11111111-1111-1111-1111-111111111111
The numbers may be different but the structure will be similar.  Copy your ISO images to this directory. You may do this by SFTP or other means as you deem appropriate
Then in your terminal session set the ownership of the structure and files to the vdsm user and the kvm group.  Like so:
cd /exports
chown -R vdsm:kvm iso


After setting the permissions you should be able to see the ISO file in the KVM Management website by navigating to the “Storage” tab, clicking on the ISO storage domain, and then the Images tab in the lower part of the web page.

Creation of Virtual machines:

The key part of this step is the NIC setup for the network interconnect. We will assign the second NIC in the virtual machine to the private logical network in the KVM system.
First we navigate to the “Virtual Machines” tab and click “New Server”.  We name the server and ensure it is in the data center and cluster we created.  Also we set the memory size to 10 GB smaller than the host’s memory (it seems the host needs 10 GB of memory to run without warning messages). Note the memory setting here is dynamic, so you can change the memory size of a running server.  In our case, assuming we are reducing memory, we would first change the memory target initialization parameters in the databases and then change the server memory.  Such an operation might be needed to create a new server on the same machine for the purpose of installing a new version of the database that in turn requires a newer OS.  Also we set the number of CPU cores; I found it necessary to open the advanced parameters for the CPU settings to force it to use two sockets with the cores split between the sockets.  The default topology interferes with the Oracle Standard Edition installation later.
Second we use the “Guide Me” window for the virtual machine to add disk space for booting.  When we add a disk for the OS we choose “Internal” and enter an alias and description.   Choose the bootable option.  Also set the size large enough for the oracle install and the Server’s memory paging.  In our case we choose 350G based on 100G for the OS and Database Install and 250G for memory paging.  The Oracle installation requires that we have enough space on disk to support paging (saving) all of the memory to disk.
Third, we set up the NIC cards.  The first one is the public interface, so choose the rhevm (default) network.  The second NIC card is for the network interconnect, so choose the private network we created.
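As a hedged sketch of the memory step described in the first paragraph above, the database-side change would look like this (the 48G value is a made-up example, not our actual setting):

```
SQL> alter system set memory_max_target=48G scope=spfile sid='*';
SQL> alter system set memory_target=48G scope=spfile sid='*';
-- bounce the instances, then lower the VM memory in the RHEV-M website
```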

Installation of Red Hat OS (RHEL) on a virtual machine:

For the virtual machines I chose to install Red Hat 5.9, as it is compatible with Oracle’s Clusterware. I found when attempting the installation that Red Hat 6.x is not, despite the fact that Oracle RAC 11.2.0.3 is certified with Red Hat 6.  Of course this may change over time.
The Run Once option in RHEV is designed to allow a file to mount as a cd or dvd rom image to boot the virtual machine and install an operating system.  This is called the Run Once option as it should not be necessary to boot the virtual machine more than once on OS install software.
To start the installation, choose the virtual machine in the “Virtual Machines” tab and select “Run Once”.  This brings up a window where you can point the CD at the ISO file we uploaded earlier: check the Attach CD box and use the drop-down to select the ISO file.  After you choose OK, go to a Linux machine running the KVM Management website in Firefox so you can use the SPICE interface to watch the server boot and install the OS.  The terminal of the KVM Management station is a good choice.  I found that attempting to use SPICE via VNC simply does not work well enough to install an OS.  To start the SPICE session, right click on the virtual machine and choose View Console.
I chose to control the disk partitioning to ensure a swap space large enough for the virtual machines to run Oracle, which for sizes above some much lower limit is the same as the memory in the virtual machine.
On the NIC card setup insure that the first card gets the public address and the second card gets the private address you intend to use.
For package installation I choose custom and select the following:
Under Base system
Administrative tools
Base
Java
Legacy Software Support
System Tools
X Window system
Desktop Environments
GNOME desktop environment
Development
Choose all.
Application
Whatever you want.
Servers
FTP only.
Also, I do not update the server, but I do register it with Red Hat Network; a full update can prevent the Oracle installations from completing.  However, some updates are required to install packages needed by Oracle.

Configuration of Red Hat (RHEL) Virtual machine to run oracle:

Create the Oracle user and home directory
Issue these commands to create the oracle user and its directories:
groupadd dba -g 501
mkdir -p /u01/home
useradd -c "Oracle Software Owner" -G dba -u 1115 -d /u01/home/oracle -m oracle
chown -R oracle:dba /u01


Note: On our servers we had to match the user IDs to be able to create files on the other systems.  On your system you may wish to use different IDs, as Oracle suggests in its documentation.

Configure Swappiness

Put this line
echo 100 > /proc/sys/vm/swappiness


into /etc/rc.d/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
echo 100 > /proc/sys/vm/swappiness


Configure tmpfs

In /etc/fstab, allocate the memory Oracle will use to tmpfs (/dev/shm):
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults,size=190G 0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-vda2         swap                    swap    defaults        0 0

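If the size= value is changed later, /dev/shm can usually be resized without a reboot by remounting.  A sketch (requires root; the size should match the fstab entry):

```shell
mount -o remount,size=190G /dev/shm
df -h /dev/shm    # confirm the new size took effect
```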

Configure inittab run level

Edit /etc/inittab for default run level 3.  This saves memory and CPU overhead.  You can still run X Windows later with the startx command.
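After the edit, the default runlevel line in /etc/inittab should read:

```
id:3:initdefault:
```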
FTP service
Note: The public IP addresses that begin with 10. are specific to our site; at other locations you will have different IP addresses.
You may follow the steps below for configuring vsftpd with SSL in RHEL 5:
(a) As a preliminary step, ensure that you make a back-up of the vsftpd configuration file.
# cp  /etc/vsftpd/vsftpd.conf  /etc/vsftpd/vsftpd.conf-back


(b) Create a self signed certificate using the below command. This certificate will be used in the SSL connections between the vsftpd server and client.
# openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout /etc/vsftpd/vsftpd.pem -out /etc/vsftpd/vsftpd.pem


Note: The above command will create the certificate with a validity of 365 days (1 year) and store it in /etc/vsftpd/vsftpd.pem. The command will prompt for details such as Country, State, Locality, Organization, Organizational Unit name, Common Name and Email address. Enter the details as required.
(c) After creating the certificate, edit these 3 lines in /etc/vsftpd/vsftpd.conf
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
write_enable=YES

add the following directives in /etc/vsftpd/vsftpd.conf :
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/vsftpd/vsftpd.pem


Note: The above directives enable SSL for local users but disable SSL for anonymous connections and force SSL for data transfers and logins. For a more detailed description on the available directives, please check the man page of 'vsftpd.conf'. (man vsftpd.conf)
Also we want the FTP service turned on at boot time.  This is accomplished through the use of a Red Hat command as follows:
chkconfig --level 2345 vsftpd on


Next we need to configure the /etc/hosts file:
Note: It is important to enter the machine’s own IP and name in this file so it can find itself.
# Do not remove the following line, or various programs
# that require network functionality will fail.
10.21.2.230  oraprod-c1 oraprod-c1.executive.stateofwv.gov
10.21.2.232  oraprod-c2 oraprod-c2.executive.stateofwv.gov


Set the password for oracle, login as oracle

passwd oracle


Reboot and verify that the SSH and FTP services both work.
Perform these tests as the Oracle user.
1.      SSH in to the server to verify ssh works.
2.      Open an ftp session with the server to verify that FTP works.

Configure VNC server

VNC is a free program that allows you to use the X Window desktop on another computer.  See www.vnc.com for VNC clients and more information.
Optional if you want it always on
chkconfig --level 345 vncserver on


Personally I turn it on when I want it.
As root, add this line to /etc/sysconfig/vncservers:
VNCSERVERS="1:oracle"


Start the vnc server as oracle
vncserver :1


Configure vnc server
cd .vnc
mv xstartup xstartup.org
touch xstartup


Edit the xstartup file in /u01/home/oracle/.vnc and enter this text.
#!/bin/sh
# Uncomment the following two lines for normal desktop:
# unset SESSION_MANAGER
# exec /etc/X11/xinit/xinitrc

[ -r /u01/home/oracle/.Xresources ] && xrdb /u01/home/oracle/.Xresources
xsetroot -solid grey
vncconfig -iconic &
xhost localhost
PATH=$PATH:/usr/local/bin
export PATH
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
twm &


Restart the vnc server software
The command to stop the server is:
vncserver -kill :1


SAMBA Client install and configure

I install the SAMBA client to be able to read NTFS drives on Windows servers to load database dump files more quickly.  It turns out it is much faster to read a large dump file over the network, combining the decoding and loading of the data, than it is to transport the file first and reference it locally for the decode and load.
Setup the mount and use YUM to install the samba network client.
mkdir /mnt/dbarchive
chown oracle:dba /mnt/dbarchive
yum install samba-client.x86_64


Also add the oracle user to the /etc/sudoers file.
oracle  ALL=(ALL)       ALL


I use these two commands to simplify mounting a drive on the backup server named dbarchive.
Under the oracle user, set up these two aliases in the .bash_profile file.
alias closedrive='~oracle/dbarchive/closedrive'
alias opendrive='~oracle/dbarchive/opendrive'


Set the permissions on the files opendrive and closedrive to make them executable.
Open Drive:
if [ $# -ne 3 ]
then
   echo "************************************"
   echo "**                                **"
   echo "**   Open Drive on DepKcDbArchive **"
   echo "**                                **"
   echo "** opendrive c user pass          **"
   echo "**                                **"
   echo "** c    - the drive to use        **"
   echo "** user - the username            **"
   echo "** pass - the password to use     **"
   echo "**                                **"
   echo "************************************"
   echo ""
   echo "What drive letter to use?"
   read driveletter
   echo "What Username to use?"
   read username
   echo "What Password to use?"
   read password
else
   driveletter=$1
   username=$2
   password=$3
fi
sudo mount -t cifs //10.21.2.28/$driveletter\$ -o username=$username,password=$password /mnt/dbarchive
ls /mnt/dbarchive


Close Drive:
sudo umount /mnt/dbarchive


Later, in Oracle SQL*Plus, we can enable the data pump system to use this mount by creating a directory object.
SQL> create directory DBARCHIVE as '/mnt/dbarchive/';

Directory created.

SQL> grant read,write on directory DBARCHIVE to DBA;

Grant succeeded.

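With the directory object in place, data pump can reference it by name from the command line.  A hypothetical export of one schema (the HR schema and file names are illustrative):

```shell
expdp system schemas=HR directory=DBARCHIVE dumpfile=hr_%U.dmp logfile=hr_exp.log
```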

Example of how to see the status of a data pump job running as system

impdp system attach=ORAPROD_LOAD

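Once attached, the interactive data pump prompt accepts commands such as STATUS (show worker progress), CONTINUE_CLIENT (resume log output), and EXIT_CLIENT (detach while the job keeps running):

```
Import> status
Import> continue_client
Import> exit_client
```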

Install Packages missed by the Red Hat install.

In RHEL (Red Hat Enterprise Linux) YUM (Yellow dog Updater Modified) is an open-source command-line package-management utility for Linux operating systems using the RPM Package Manager.  It works via web based repositories of packages and data on dependencies.
The Oracle Universal Installer will inform you of any missing packages needed.  Use YUM to install any needed packages.  Consult oracle doc http://docs.oracle.com/cd/E11882_01/install.112/e22489/prelinux.htm#BABFDJJF for packages needed then run cluvfy to check your environment from the grid install directory.
A package delivered with the Oracle install media is grid/rpm/cvuqdisk-1.0.9-1.rpm; install this RPM.

Setup Kernel Parameters:

http://docs.oracle.com/cd/E11882_01/install.112/e24321/pre_install.htm#CIHEGJEH

Setup NTP (Network Time Protocol) Service:

Basic NTP install: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/sect-Date_and_Time_Configuration-Command_Line_Configuration-Network_Time_Protocol.html plus the -x option:
[oracle@DepOraC2 pump]$ grep OPTIONS /etc/sysconfig/ntpd
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -x"


Disk Setup for Clusterware:

1.

In Storage Manager, attach a LUN to the Oracle host group for Clusterware overhead.

2.

In the KVM administration web site select the “Disk” tab.
Click Add, choose External and Fibre Channel, then name and select the LUN. Selecting should be easy because we do one attachment from the Storage Manager at a time, so there is no confusion.

3.

On the “Hosts” tab select the first host and then select the disk tab at the bottom.
Adjust the size of the areas in the Website by dragging so that you can see multiple lines of messages in the bottom gray area and see the disks tab below the virtual servers.  This is important so that you can take the appropriate action for the typical system responses.

4.

In the Host disk tab, click Add, then mark the attach disk check box at the top and the external check box.  In the resulting list you should see one drive.
Select the drive and click OK.  Note by default the attach disk check box is marked.
If the drive does not show up in the list of drives for the host after clicking OK, then repeat the add drive step but clear the check box for attaching the drive.
You can then Reboot the Host computer from its console and be able to attach the drive by opening the virtual machine tab and the drive tab below and selecting the drive and clicking attach.

5.

Login via SSH or VNC.
Check for the drive using fdisk -l; you should see the drive as /dev/vd… with no partitions.
If you do not see the disk in the list then reboot the virtual machine.
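The drive needs a single partition before oracleasm can label it in the next section.  A sketch, with /dev/vdg standing in for whatever device fdisk -l reported on your system:

```shell
fdisk /dev/vdg      # inside fdisk: n, p, 1, accept the defaults, then w
partprobe /dev/vdg  # re-read the partition table without a reboot
```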

Installing ASM:

http://docs.oracle.com/cd/E11882_01/install.112/e24321/oraclerestart.htm#CHDHFFEJ
Note: I found this install quite vexing.  You may have to experiment with the oracleasm configure and createdisk commands.  Once a disk creates successfully, ensure that all RPMs are installed.  You may have to uninstall some to get the commands to work, then install them again to be ready for the Clusterware install.  That is, the commands need to work and all RPMs must be installed; simply installing them once won’t do it.
. oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/oracle
[root@DepOraC1 ~]# /etc/init.d/oracleasm createdisk ORAPROD2 /dev/vdg1
Marking disk "ORAPROD2" as an ASM disk:                    [  OK  ]


You will need a disk created for Clusterware to hold the cluster data.  The cluster data consists of the voting data and the Oracle Cluster Registry, both of which are used to manage the cluster in Oracle RAC.

Verify your preinstall:

In the Grid installation directory use the runcluvfy shell script.
./runcluvfy.sh -pre crsinst -n DepOraC1,DepOraC2


Fix any errors and re-run until it comes back clean.
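The fix-and-re-run cycle can be wrapped in a small helper; a sketch (run_until_clean is our own hypothetical wrapper, and the runcluvfy invocation is the one above):

```shell
#!/bin/sh
# Re-run a verification command until it exits cleanly, pausing so the
# reported problems can be fixed between attempts.
run_until_clean() {
    while ! "$@"; do
        echo "Verification failed. Fix the reported errors, then press Enter to retry."
        read dummy
    done
    echo "Verification passed."
}

# Typical use from the Grid staging directory:
#   run_until_clean ./runcluvfy.sh -pre crsinst -n DepOraC1,DepOraC2
```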

Install Oracle Clusterware:

Run the installer for Clusterware via a VNC connection.
http://docs.oracle.com/cd/E11882_01/install.112/e24321/oraclerestart.htm#BABCHCBG
You will need the password for the ASMSNMP user.  If you do not know this password, use the following to set it and ensure the user exists in the ASM instance.
As the Grid Infrastructure (GI) software owner, connect to ASM1 via sqlplus (as sysasm):
$ . oraenv <---- enter +ASM1 for ORACLE_SID
$ sqlplus / as sysasm
SQL> select * from v$pwfile_users;
If the ASMSNMP user is listed, then remove it:
SQL> drop user ASMSNMP;
SQL> create user asmsnmp identified by <password here>;
SQL> grant sysdba to asmsnmp;
SQL> select * from v$pwfile_users;

Connect to the other ASM instances (if there is more than one node in the cluster) and query v$pwfile_users to verify the ASMSNMP user is listed (same as for ASM1).


Disk Setup Loop for databases:

Now you are ready to set up the disks for the databases.
Do this in a loop, one LUN at a time, using the same process as for attaching a disk for the Clusterware.  A final step is running ASMCA in VNC to create the disk group.  One advantage of creating all the disks in this manner is that the occasional reboots required can be done in advance of the first database creation.  In our case we moved production use to the new server before all databases were installed.  By pre-creating all the required disks we are able to add new databases without a reboot and the resulting downtime.
See appendix 1 for the loop instructions.

Install the database software & create the databases

There are some things to consider when moving from Enterprise Edition to Standard Edition.  For us the main considerations are table space designations and moving partitioned tables.  Partitioned tables cannot be imported into a Standard Edition database using Data Pump.
A complication we did not have was moving a partitioned table with a LONG column.  Just in case you ever need it, here is the syntax to perform such a move with the SQL*Plus COPY command:
SET LONG 16000000
set copycommit 1
COPY FROM HR@BOSTONDB -
TO TODD@CHICAGODB -
CREATE NEWDEPT (DEPARTMENT_ID, DEPARTMENT_NAME, CITY) -
USING SELECT * FROM EMP_DETAILS_VIEW
Therefore we are going to use SQL and database links.  Specifically, we will create a database link, omit the foreign keys, and retain the primary keys until the move is complete.  We add the foreign keys after all the data is imported.
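A minimal sketch of the link-and-copy approach; the link name, credentials, connect string, and table name below are hypothetical placeholders, and the generated SQL should be reviewed before running:

```shell
#!/bin/sh
# Generate the SQL to pull one (formerly partitioned) table's rows
# across a database link.  OLDPROD_LINK, hr/secret, 'OLDPROD' and
# BIG_TABLE are placeholders, not our real names.
copy_table_sql() {
    # $1 = table name, $2 = database link name
    cat <<EOF
CREATE DATABASE LINK $2 CONNECT TO hr IDENTIFIED BY secret USING 'OLDPROD';
INSERT /*+ APPEND */ INTO $1 SELECT * FROM $1@$2;
COMMIT;
EOF
}

# Review the output, then feed it to sqlplus, e.g.:
#   copy_table_sql BIG_TABLE OLDPROD_LINK | sqlplus -s hr/secret
```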
Outline of the move with consideration for partitioned tables:
1.      Create empty database using either DBCA or scripts.  
a.      If using DBCA, Create any needed data files and table spaces.  (See sidebar 1a)
2.      Create Schemas for partitioned tables and set default and temporary table space settings.
3.      Create empty tables for any partitioned tables in correct table spaces without FK’s and PK’s.
4.      Create export/import par file that does a full export/import and excludes the partitioned tables.
5.      Copy data using select into for partitioned tables.
6.      Create PK’s for partitioned tables.
7.      Export data from old database using your new par file.
8.      Import data into new database using your new par file.
9.      Create FK’s for partitioned tables.
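Steps 4, 7, and 8 above can be sketched as follows; the directory object, dump file, and the table names in the EXCLUDE clause are placeholders, not our actual names:

```shell
#!/bin/sh
# Write a Data Pump parameter file that exports everything except the
# partitioned tables (those are copied separately over a database link).
write_parfile() {
    cat > "$1" <<'EOF'
FULL=Y
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=cgiprod_full.dmp
LOGFILE=cgiprod_full.log
EXCLUDE=TABLE:"IN ('BIG_PART_TABLE1','BIG_PART_TABLE2')"
EOF
}

# Usage (the same par file works for expdp and, with minor edits, impdp):
#   write_parfile full_noparts.par
#   expdp system/password PARFILE=full_noparts.par
```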

Install a RAC Database with DBCA: an example

Here is a real-world example installation.  We will install a RAC database named CGIPROD on our cluster.
The first step is to verify the available resources against the storage and memory requirements.

Storage LUNs Available:

1.      CGI_PROD_REDO_C1 46GB in Array 200 composed of two 200 GB SSD drives with a Hot Spare.
2.      CGIPROD1 200 GB in Array 300 composed of 6 RAID 10 drives, 15K RPM, 300 GB each, also with a Hot Spare.
3.      CGI_BLOB_PROD1 1 TB in Array 600 composed of 4 RAID 10 drives, 10K RPM, 600 GB each, also with a Hot Spare.
The storage LUNs meet our expected requirements and were preconfigured.  We are converting from the current ERISPROD database to the new CGIPROD database, so we set the expected requirements based on the current requirements of ERISPROD.  We verified the LUNs' existence in the storage system and their association with the host computers that host the VMs.  In the Red Hat Virtualization web administration site we verified that the VMs have connections to the CGIPROD LUNs.

Memory Available on Current VMs

Note we use two VMs for all databases.  We have 208.8 GB of memory in each VM.  On the first VM, named DepOraC1, we have 15 GB allocated for CGIPROD, and likewise on DepOraC2.  This memory will be increased over time by removing memory from the ERISPROD database as usage is migrated.

Preparing to run DBCA

Next we will start the DBCA; note we have already configured the ASM drives (or not, as we shall see).
Login as the oracle user, then start the VNC server on DepOraC1 (Linux command):
vncserver :1


Connect as the root user and leave the prompt ready for any root scripts needed later in the install.
Login with a new session as the oracle user on DepOraC2 and start the VNC server, then connect as root.  Same Linux command:
vncserver :1


Connect as root using:
su -


Next, on a Windows workstation, we connect via the VNC viewer to DepOraC1:1.
Next we set our environment in the VNC session to CGIPROD using the Linux command:
. oraenv


Set the name as CGIPROD
Set the ORACLE_HOME as /u01/app/oracle/product/112
Note we are using a shared Oracle home, so this is the same as for all other databases installed.

Running DBCA

Next start dbca by entering dbca at the Linux prompt in VNC to get the GUI Screen.
The first screen asks whether we want RAC, RAC One Node, or a single instance.  We choose RAC.
On the second screen we choose Create Database.
The third screen asks if we want to use a template.  We use the Custom Database option so we can establish settings as desired.  Note the settings in a template consist of paths, installed components, and initialization parameters for the database.
The fourth screen asks whether we want an admin-managed or policy-managed database; we choose admin-managed.  Also asked for are the Global Database Name (CGIPROD) and the SID prefix (also CGIPROD).  The nodes are displayed, so we select both nodes, deporac1 and deporac2.
We got a message saying the default listener is not running on DepOraC1.  However, we can see it is running from the Grid home, so we choose Yes to continue.  We may have to configure the listener manually, but I expect the database will register itself without much ado.
The fifth screen, labeled Step 4 of 12: Management Options.  We accept the defaults for Configure Enterprise Manager and Database Control.  We have detailed plans for database backups, so we do not select the offered backup plan.  We also leave automatic maintenance tasks enabled.
The sixth screen, labeled Step 5 of 12: Database Credentials.  We select the same password for all accounts.
The seventh screen, labeled Step 6 of 12: Database File Locations.  We select Use Oracle-Managed Files on an ASM volume.
We notice that the ASM Volumes have not been created.  So we will create them.
Marking Disks in ASM for REDO
We will use a "/etc/init.d/oracleasm createdisk ORAPROD2 /dev/vdg1" style command to create the ASM disk.  First we look up the device path, which is needed in the createdisk command, using the Linux command: fdisk -l
From the output of the fdisk command we see that the redo path is /dev/vdo and no partition exists.
Next we create partition 1 on /dev/vdo with the Linux command fdisk /dev/vdo, entering n for new, p for primary, and 1 for the first partition, and accepting the defaults for the start and end of the disk.  Then we enter w to write the changes to the disk.
Next we issue our ASM command: /etc/init.d/oracleasm createdisk CGIPROD_REDO /dev/vdo1
Note the label name CGIPROD_REDO must be unique.
On the second node, DepOraC2, we issue the Linux command /etc/init.d/oracleasm scandisks to cause the system to add the new disk, and /etc/init.d/oracleasm listdisks to verify the addition.
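The interactive fdisk dialogue above can also be scripted; a minimal sketch (the device and label are examples, the keystroke sequence assumes the simple whole-disk case, and both commands need root):

```shell
#!/bin/sh
# Partition a new virtio disk and label it for ASM in one step.
# /dev/vdo and CGIPROD_REDO below are examples.
partition_and_label() {
    dev=$1; label=$2
    if [ -n "$DRYRUN" ]; then
        # Safe preview mode so the sketch can be exercised without root.
        echo "would partition $dev and createdisk $label ${dev}1"
        return 0
    fi
    # Keystrokes fed to fdisk: n (new), p (primary), 1 (first partition),
    # two blank lines (default start/end), w (write).
    printf 'n\np\n1\n\n\nw\n' | fdisk "$dev"
    /etc/init.d/oracleasm createdisk "$label" "${dev}1"
}

# Typical use on node 1, followed by a rescan on node 2:
#   partition_and_label /dev/vdo CGIPROD_REDO
#   ssh DepOraC2 /etc/init.d/oracleasm scandisks
```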
Then we restart the asmca in VNC on DepOraC1.
So we select ORACL:CGIPROD_REDO as the drive giving it the name CGIPROD_REDO.
Creating ASM Volumes for the example:
We start a new terminal session in VNC on deporac1 and set the environment to +ASM1 using: . oraenv
Start asmca by entering asmca at the VNC Linux prompt.  This brings up the ASM Configuration Assistant GUI screen.  Reviewing the list of mounted drives we see no mounted drives for CGIPROD, so we choose Create.  In the Create screen we choose External Redundancy and Show Eligible and see two of the three drives listed.  We will configure these two; the third one, for REDO, will be created using the root commands (see Marking Disks in ASM for REDO).
We select ORACL:CGIPROD1 as the first drive giving it the name CGIPROD1.
Next we repeat for the BLOB drive so we create and select ORACL:CGIPRODBLOB1 and name it CGIPRODBLOB1.
Next we exit the ASM Configuration assistant and use the existing DepOraC1 root session to create the ASM disk.
Back in the dbca screen we left open we can now choose Browse and select +CGIPROD1 for the drive we just added.
View File location variables at the bottom of the screen to verify all the settings.
View Multiplex Redo Logs and Control Files and set the setting to +CGIPROD_REDO.
Screen eight, Step 7 of 12: Recovery Configuration.  We do not use the Flash Recovery option of oracle so we clear that check box, and select Enable Archiving.
Not using Flash Recovery
We chose not to use Flash Recovery due to storage considerations.  The Flash Recovery option in oracle puts an artificial size limit on the storage of backups and archive logs.  The goal is to automate the deletion and management of files to prevent failure and make the system more automatic.
However, we have found that a person can greatly outperform these automated tools when space is tight.  When space is not tight then the tools are useful but in state government this is rarely the case.
In the Edit Archive Mode Parameters we set the archive log directory to a shared ASM file system using ACFS and mounted as /u02 with directory /u02/CGIPROD/Archive
Screen nine, Step 8 of 12: Database Content. We accept the first page but alter the entry under Standard Database Components by deselecting “Oracle Application Express”
Screen ten, Step 9 of 12: Initialization Parameters.  
Tab Memory: We set memory to 15000 MB and select Use Automatic Memory Management.
NOTE: It is Automatic Memory Management that enables the MEMORY_TARGET parameter, which is useful when reducing memory so that the system can later be upgraded to a new virtual server on the same hosts.
Tab Sizing:  We set the processes parameter to 500 as it is set in ERISPROD.
Tab Character set: We accept the default values.
Tab Connection Mode: We accept the default of Dedicated Server Mode.
Screen eleven, Step 10 of 12: Database Storage.  We adjust the table spaces to match our test CGI database; all table spaces are created except for the BLOB table space, since its storage location will be different and can be specified later in Database Control or via sqlplus.  We also set all data files to auto-extend at 1 MB, except for the TEMP and UNDO table spaces.  We accepted the defaults for the number and sizes of the redo logs.
Screen twelve, Step 11 of 12: We select all options to create the database, scripts, and a template, to give the maximum options available for recreating the database when needed.

Troubleshooting and finishing the installation

Oracle environment in Linux to be able to control the database and database control

We edit the /etc/oratab file to add CGIPROD1 on DepOraC1 and CGIPROD2 on DepOraC2, as their entries are not set by default.  We simply copy an existing line and edit it for CGIPROD on each server.
To get the Database Control status we need to set ORACLE_UNQNAME in Linux.  This should be set using the oraenv script that we use to set the environment for working with the database instance on both servers.
So I use the Linux command:
find / -name oraenv -print 2>/dev/null


to find the oraenv script and update it for ORACLE_UNQNAME.
On our system the locations are
/usr/local/bin, /u01/app/grid/bin and /u01/app/oracle/product/112/bin 


All three needed to be adjusted on each system with the following code.
Addition to ORAENV Scripts
echo "Setting oracle unqname for $ORACLE_SID"
if [ "$ORACLE_SID" == "ORAPRODC1" ]; then
     export ORACLE_UNQNAME=ORAPROD
     echo $ORACLE_UNQNAME $ORACLE_SID
else
  if [ "$ORACLE_SID" == "CGIPROD1" ]; then
     export ORACLE_UNQNAME=CGIPROD
     echo $ORACLE_UNQNAME $ORACLE_SID
  else
     export ORACLE_UNQNAME=$ORACLE_SID
     echo $ORACLE_UNQNAME $ORACLE_SID
  fi
fi
export ORACLE_UNQNAME


Listener troubles

“The listener is down: LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 01-OCT-2013 14:53:38 Copyright (c) 1991, 2011, Oracle. All rights reserved. Welcome to LSNRCTL, type "help" for information. LSNRCTL> TNS-01150: The address of the specified listener name is incorrect NL-00303: syntax error in NV string LSNRCTL> .”
We checked the listener status using the Linux command lsnrctl status with the CGIPROD1 environment and see that the database has service handlers registered with the listener.
So to get more information we look in the database logs.  The alert log did not show any related error.
The Listener log (Grid home) showed no errors and several connections from the database instance CGIPROD1 on DepOraC1.
So we try a local network connection to the instance using Linux command tnsping CGIPROD1 and found the connection failed.
Next we fixed the tnsnames.ora file in the network/admin directory of the database oracle home by copying the entry automatically created in the grid home on both nodes.
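For reference, the copied entry looks roughly like the following; the VIP host name and port shown here are illustrative, not our actual values:

```
CGIPROD1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = deporac1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = CGIPROD)
      (INSTANCE_NAME = CGIPROD1)
    )
  )
```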
The local connections now work, but the original error still appears in the cluster section of Database Control, so at this point it seems to be a monitoring error rather than an actual problem.  (Note: I have noticed this error since Oracle 11g was first released and have asked Oracle Support about it, and never received an explanation.)

Adding BLOB table space

We added the needed data files for the BLOB table space on the BLOB LUN using Database Control, selecting the BLOB LUN we had created.

APPENDIX 1:  Disk Loop

This shows the steps needed to attach a clustered ASM disk.  Repeat this set of steps for each LUN to be added to the system.
1.      In storage manager attach a LUN to the oracle host group.
2.      In the KVM administration web site select the “Disk” tab.
Click Add, choose External and Fibre Channel, then name and select the LUN.  Selecting the right LUN should be easy because we do one attachment from the storage manager at a time, so there is no confusion.
3.      On the “Hosts” tab select the first host and then select the disk tab at the bottom.
Adjust the size of the panes in the web site by dragging so that you can see multiple lines of messages in the bottom gray area and the Disks tab below the virtual servers.  This is important so that you can take the appropriate action for the typical system responses.
4.      In the Host's Disks tab click Add, then mark the Attach Disk check box at the top and the External check box.  In the resulting list you should see one drive.
Select the drive and click OK.  Note that the Attach Disk check box is marked by default.
If the drive does not show up in the Host's list of drives after clicking OK, repeat the add, but this time clear the Attach Disk check box.
You can then reboot the Host computer from its console and attach the drive by opening the Virtual Machines tab and the Disks tab below it, selecting the drive, and clicking Attach.
5.      Login via ssh or VNC.
Check for the drive using fdisk -l; you should see the drive as /dev/vd… with no partitions.
If you do not see the disk in the list, reboot the virtual machine.
6.      Marking Disks in ASM
We will use a "/etc/init.d/oracleasm createdisk ORAPROD2 /dev/vdg1" style command to create the ASM disk.  First we look up the device path, which is needed in the createdisk command, using the Linux command: fdisk -l
From the output of the fdisk command we see that the new disk's path is /dev/vdo and no partition exists.
Next we create partition 1 on /dev/vdo with the Linux command fdisk /dev/vdo, entering n for new, p for primary, and 1 for the first partition, and accepting the defaults for the start and end of the disk.  Then enter w to write the changes to the disk.
Next we issue our ASM command: /etc/init.d/oracleasm createdisk CGIPROD1 /dev/vdo1
Note: The label name (after createdisk) must be unique.
On the second node, DepOraC2, we issue the Linux command /etc/init.d/oracleasm scandisks to cause the system to add the new disk, and /etc/init.d/oracleasm listdisks to verify the addition.
7.      Creating ASM Disks
We start a new terminal session in VNC on deporac1 and set the environment to +ASM1 using: . oraenv
Start asmca by entering asmca at the VNC Linux prompt.  This brings up the ASM Configuration Assistant GUI screen.  Choose Create.  In the Create screen we choose External Redundancy and Show Eligible and see the drives listed.
We select ORACL:CGIPROD1 as the first drive giving it the name CGIPROD1.
Next we exit the ASM Configuration assistant and use the existing DepOraC1 root session to create the ASM disk.

Appendix 2: Improvement Numbers

CPU Improvement from 8 cores to 40.
5x improvement

Memory Improvement from 32GB to 512GB.
16x improvement
Support Costs
Oracle Support Cost from $45,565/year to $5,755/year
Old IBM server and storage support cost: $42,080/year for 28 servers; the share for our 2 servers and their storage is $3,500/year
New IBM server and storage support cost: $2,200 for 3 years, or $733/year
Old Linux Support Cost $2,979/year
New Linux Support Cost $2,850/year
Old Total Annual Support Cost 45,565 + 3,500 + 2,979 = $52,044
New Total Annual Support Cost 5,755 + 733 + 2,850 = $9,338
Cost reduction 100-100*(9338/52044) = 82% Reduction.
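The reduction figure can be re-checked with a quick calculation:

```shell
#!/bin/sh
# Recompute the support-cost reduction from the annual totals above.
old=52044   # old total annual support cost
new=9338    # new total annual support cost
awk -v o="$old" -v n="$new" 'BEGIN { printf "%.0f%% reduction\n", 100 - 100 * n / o }'
```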
Backups from 3 to 4 parallel independent backups. 25% improvement
Drive IO improvement
Old: shared (28 servers) 2 Gb/s fiber
New: exclusive (2 servers) 6 Gb/s SAS
Old: one 4-drive RAID 10 for production
New: flash for redo and a 6-drive RAID 10 for production
(28/2) x (6/2) x (3/2) = 63x improvement

Disclaimer:

This information is provided as a courtesy.  No warranty of accuracy or correctness is expressed or implied by this document.
Author: NeilChak
Expert Comment by Geert G:
this is purely a hardware approach

i don't see mention whatsoever of tuning, why not ?