Dynamic Version Control of Oracle Metadata Using a Shell Script

Swadhin Ray
Edited by: Andrew Leniart
In this article, we will see how to automate the process of moving Oracle metadata into a version control system (Git, in my case) using a simple shell script, so that no manual intervention is needed.

The general process of code movement to higher environments


During any development process, the critical part is moving objects from the development environment to higher environments. Today this is done by manually extracting the metadata and pushing it into a version control system, from which other automated tools pick up the source and commit or deploy it to the higher environments.


For any organization, it is very important that developers commit all changed code into a version control system, so that there is never a point where code is lost.


The code movement from lower to higher environments is performed using tools like Git/Bitbucket or other version control tools.


As of today, there is no out-of-the-box, end-to-end solution for the Oracle database that works without manual intervention.


What are the risks or issues, and why we see rework


There is always a high risk that a developer forgets to check code into the version control system. Take the example of a developer who builds an Oracle function and does not check it into Git/Bitbucket (I use these terms because my organization uses Git with Bitbucket for version control).


Now another developer makes changes to the same object and moves the code to Git. The code developed by the first developer is lost, there is no way to retrieve it, and we have no record of what the first developer changed, because the code has been overwritten.


Suppose the developer missed the commit and we are now in the release cycle, where the code needs to move to the production instance. In such a scenario, the objects that were supposed to move to production are missed; even if we pick the latest version from the version control system, we still get the old code, because the new changes were never checked in.


Developers build the objects, commit them manually to Git, and prepare for the release; if any object is missed, the build preparation and release process is impacted. Multiple teams across the globe need to make sure they commit everything they developed and pass on the list of objects that need to be part of the release. There is a real risk of an object being missed during the release because it was never moved to version control.


The relationship between governance, Git/Bitbucket, and Oracle database metadata


The picture above shows an overview of how we can build governance around code commits using a version control system.


An example of how the code check-in process is done in most organizations today


  1. The developers extract the metadata from the Oracle database and save it into .sql files on their systems. 
  2. They take those files and place them into the local Git folder that is already mapped to the source location. 
  3. Once the code is checked into version control, an authorized person picks it up and moves it to the higher environments as part of the release process. 
  4. This process requires every developer to install the Git client locally so they can extract the metadata into .sql files and place them in the same location. It also requires mapping the right location in version control so that the changes are properly reflected on the web for further processing. (A minimal sketch of these manual steps follows below.)
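
To make the current manual workflow concrete, here is a minimal sketch of the steps each developer repeats today. The connect string, object names, and repository path are placeholder values, not taken from the article:

#!/bin/bash
# Hypothetical sketch of the manual check-in steps described above.

# 1. Extract the DDL for one object by hand (connection details are placeholders).
sqlplus -S scott/tiger@MYDB <<EOF > MY_FUNC.sql
SET LONG 2000000 PAGESIZE 0 FEEDBACK OFF HEADING OFF
SELECT DBMS_METADATA.GET_DDL('FUNCTION', 'MY_FUNC', 'SCOTT') FROM dual;
EOF

# 2. Place the file into the locally mapped Git folder.
cp MY_FUNC.sql ~/db-schemas/SCOTT/FUNCTIONS/

# 3. Commit and push so the release team can pick it up.
cd ~/db-schemas
git add SCOTT/FUNCTIONS/MY_FUNC.sql
git commit -m "Updated MY_FUNC"
git push origin HEAD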


An automated way to commit code to Git

 

  1. The design makes use of existing technologies and ensures that no object metadata is missed or slips through during the release process.
  2. The utility can be placed on the same server or a different one, as you define. It connects to the database, extracts all the metadata (tables, procedures, functions, packages, synonyms), writes each object into an <<object name>>.sql file, and places it on Git under a folder per object type. If an object folder is not present, the job creates it dynamically and places the files into it on the ETL server; finally, all the changes are committed to Git, which is then reflected in Bitbucket.
  3. There is a configuration to limit the extraction: you define only the list of schemas that need to be extracted, rather than all the system objects, so it stays specific to the product, project, or business use case. This is maintained in the objects.sql file under the sqls folder.
  4. This utility can be scheduled using any automated job control system such as AutoSys, cron, or any other scheduling tool available within the organization (a sample cron entry follows below).
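
For example, a crontab entry like the following would run the utility nightly. The script name and log path here are hypothetical, since the article lets you name and place the script however you like:

# Run the extraction-and-commit script every night at 2 AM (paths are placeholders)
0 2 * * * /home/Automation/extract_and_commit.sh >> /home/Automation/logs/extract.log 2>&1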

First, pick the server where you want to place all the .sql files and the shell script; keep them in one folder, or change the paths in the shell script accordingly.

The configuration is driven by the schema names: add or remove the schemas you need in the objects.sql file under the sqls folder shared here.


-- objects.sql: returns one row per object to extract (output is parsed by the shell script)
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF
SELECT
    owner,
    object_type,
    object_name
FROM
    all_objects
WHERE
    owner IN (
        'SCOTT'    -- place to add or remove the schema
    )
    AND object_type IN (
        'TABLE',
        'VIEW',
        'PROCEDURE',
        'FUNCTION',
        'PACKAGE',
        'SYNONYM'
    );
All the SQL files used to extract the metadata are inside the sqls folder; unzip the attachment below and validate the schema names in the objects.sql file.


sqls.zip
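
The zip itself is not reproduced here, but based on how the main script invokes each file (@$SQL_FN $object_name $owner), a per-object-type script such as TABLE.sql presumably follows the pattern below, where &1 is the object name and &2 is the owner. Treat this as a sketch, not the exact file from the attachment:

-- TABLE.sql (sketch): &1 = object name, &2 = owner, as passed in by the shell script
SET LONG 2000000 LONGCHUNKSIZE 2000000 PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF TRIMSPOOL ON
SELECT DBMS_METADATA.GET_DDL('TABLE', '&1', '&2') FROM dual;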


Below is the shell script (give it any name you like) that will be our main script, which can be scheduled using the scheduler:


#!/bin/bash
#---------------------------------------------------
# Export your Oracle details as below
#---------------------------------------------------

export ORACLE_SID=TAILND09
export ORACLE_HOME=/apps/oracle/client/<<>>/home1  # your oracle home path
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export ORACLE_PATH=$ORACLE_HOME/bin
export PATH=$ORACLE_HOME/bin:$PATH
export PATH=$PATH:/app/git/bin # your Git path where the Git client is installed

#Change the below paths and files appropriately
SCRIPT_HOME=/home/Automation        # your script home path where you placed your files 
SQL_DIR=$SCRIPT_HOME/sqls
OBJECT_SQL=$SQL_DIR/objects.sql
FILE_DIR=/app/abc/db-schemas/myinstance   # your server path where you want to export the metadata for the Git commit

# Change the username from scott to a privileged schema that can extract the data from all schemas; any DBA account will work.
createObjectDdlFile(){
	filetext=""
	SQL_FN=$SQL_DIR/$1.sql
	if [ "$object_type" = "$1" ] && [ -f $SQL_FN ] && [ -s $SQL_FN ]
        then
                FILE_OT_DIR=$FILE_OWNER_DIR/${object_type}S
                FILE_NAME=$object_name.sql
                output1=`sqlplus -S scott/tiger@${ORACLE_SID}<<EOFILE  
                        @$SQL_FN $object_name $owner
EOFILE`

                while read extract_ddl
                do
                        filetext="$filetext"$'\n'"$extract_ddl"

                done <<< "$output1"
                echo "DDLs for $object_name is extracted to "$FILE_OT_DIR"/"$FILE_NAME" file"
				mkdir -p $FILE_OT_DIR
                echo "$filetext" > "$FILE_OT_DIR"/"$FILE_NAME"
                filetext=""
		else
			echo "$SQL_FN file does not exist or is empty."
        fi
}

# The Git commands that pull, then push the extracted files to the mapped Git repository
gitCommitAllChanges() {
	DATE=`date '+%Y-%m-%d %H:%M:%S'`
	cd $FILE_DIR
        git pull origin HEAD
	git add -A .
	git commit -m "Auto Committing DDL Extraction changes using script at $DATE"
	git push origin HEAD
}


# The script execution logic starts from here
output=`sqlplus -S scott/tiger@${ORACLE_SID}<<EOF
        @$OBJECT_SQL
EOF`

rec=1
while read owner object_type object_name
do
        echo "processing line $rec - $owner $object_type $object_name"
        FILE_OWNER_DIR=$FILE_DIR/$owner
        createObjectDdlFile "$object_type"

  let rec=rec+1
done <<< "$output"

gitCommitAllChanges
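
Before handing the script to a scheduler, it is worth running it once by hand to verify the extraction and the commit. The file name below is hypothetical, as the article leaves the naming up to you:

# Make the script executable and run it once manually
chmod +x /home/Automation/extract_and_commit.sh
/home/Automation/extract_and_commit.sh
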
Advantages of implementing this process

The main advantage of this process is that it removes the manual steps of extracting the metadata and committing it. The agent captures the metadata from the Oracle database and, based on the job frequency, places it into the version control system. This gives us governance over what, who, and which objects have been modified, without any manual intervention in Git.
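
For instance, once the DDL lives in Git, the audit trail is a query away. The object file below is hypothetical; the directory layout follows the script above (FILE_DIR/owner/object-type):

# See when a given object's DDL changed, and inspect the latest change
cd /app/abc/db-schemas/myinstance
git log --oneline -- SCOTT/TABLES/EMP.sql
git diff HEAD~1 -- SCOTT/TABLES/EMP.sql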


Thank you for reading this article. Please feel free to leave me some feedback or to suggest any future topics. I'll be looking forward to hearing from you. – Swadhin Ray (Sloba)


For more information about me, please check out my Experts Exchange Profile page.
