MIHIR KAR
asked on
PL/SQL package to tune!
Hi Expert,
I have a package that sometimes performs well and sometimes poorly; the attached file is the package.
Sorry for attaching such a large set of code.
If possible, please point me to any obviously badly written lines!
Thanks,
Mihir
pkg_to_tune.sql
I assume you have permission to post that code on a public website.
It is impossible to look at code and say "this is bad" because we don't know your system.
TKPROF will give you the nuts and bolts, but it can be difficult to go through everything.
When I debug PL/SQL I tend to use brute force to grab timings. I either use dbms_output or create a logging table and insert into it.
Then throughout the code I can insert/output times as the code executes.
For example:
create table mytimings(location varchar2(100), start_time date);
You already have the tracing for errors:
L_mark := 'Archive STG table';
I would add:
insert into mytimings values('Archive STG table',sysdate);
Then you have a history of how long each section of code takes. Then focus in on what you think are the slow pieces of code.
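A minimal sketch of that instrumentation pattern; the procedure body and marker names below are illustrative, not taken from the attached package, and I've swapped DATE for TIMESTAMP to get sub-second resolution:

```sql
-- One-time setup: SYSTIMESTAMP gives fractional seconds, unlike SYSDATE
CREATE TABLE mytimings (
    location   VARCHAR2(100),
    start_time TIMESTAMP
);

-- Inside the package, log a row each time you reach a new section
PROCEDURE load_prices IS          -- hypothetical procedure name
    l_mark VARCHAR2(100);
BEGIN
    l_mark := 'Archive STG table';
    INSERT INTO mytimings VALUES (l_mark, SYSTIMESTAMP);
    -- ... archive step ...

    l_mark := 'Load target table';
    INSERT INTO mytimings VALUES (l_mark, SYSTIMESTAMP);
    -- ... load step ...

    COMMIT;  -- or write the log rows via an autonomous transaction
             -- so they survive a rollback of the main work
END load_prices;
```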
Since you say that sometimes it is good and bad at others, focus in on what might be causing this. Are you processing more rows at the bad times? Is there other processing during the bad times?
On a quick scan I see cursors that grab the nextval from a sequence:
SELECT ris_price_file_seq.nextval FROM dual;
Then you open/fetch/close a cursor. Depending on how many times this gets executed, there may be slightly better ways. Like on the insert: just insert the NEXTVAL directly and use CURRVAL in the next steps. You might not need the variable at all, and you can remove the constant open/fetch/close of the cursor on every loop.
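That direct-reference version can be sketched like this; the table and column names are hypothetical (only the sequence name comes from the posted package):

```sql
-- Reference the sequence directly in the INSERT instead of fetching
-- NEXTVAL from DUAL through a cursor on every loop iteration
INSERT INTO price_file_hdr (file_id, file_name)      -- hypothetical table
VALUES (ris_price_file_seq.NEXTVAL, 'PRICES_FEED');

-- CURRVAL returns the same value for the rest of the session,
-- so follow-up steps don't need a local variable
INSERT INTO price_file_dtl (file_id, line_no)        -- hypothetical table
VALUES (ris_price_file_seq.CURRVAL, 1);
```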
I agree with the comments from Geert and slightwv: we can't identify what makes this package slow because we don't know:
1. Which procedures and functions in this package get called?
2. How many records are in the various tables this package references?
3. Which indexes are available on these tables?
4. Are the table and index statistics up-to-date?
Slightwv's suggestion to use a logging table would be a good start. But I also like to have the "trace" output from a run since that identifies exactly where the program spends the most time.
In addition to getting an output file from tkprof sorted by the "execute elapsed time" for each SQL statement (as Geert suggested):
tkprof tracefile what_exe.txt sort=exeela
I also like to get one sorted by "fetch elapsed time" like this, since it may be either the "fetch" time or the "execute" time that is the slowest:
tkprof tracefile what_fch.txt sort=fchela
I also like to include the schema owner when I use tkprof, like this:
tkprof tracefile what_exe.txt explain=[username]/[password] sort=exeela
If your database server is on Linux (or Unix) you can use an alias like this to quickly review the totals at the bottom of an output file from tkprof to see the total elapsed time, and also see whether the "fetch" or "execute" actions used the most time:
alias _show_tot='grep -a -A9 "OVERALL TOTALS" ${1}'
To use that (after you create this alias as the Linux/Unix owner of your Oracle database software, and navigate to your BACKGROUND_DUMP_DEST directory) enter:
_show_tot [output_file_name]
for example:
_show_tot what_exe.txt
Here is the output of that "_show_tot" command from a recent tkprof output file in one of our test systems:
$ _show_tot ora82214.txt
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse       30      0.00       0.00          0          0          0          0
Execute    127     32.39      32.97          4         41         52        114
Fetch       12      0.00       0.00          0         66          0         10
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total      169     32.39      32.98          4        107         52        124
--
OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse    18763      0.44       0.46          0          2          0          0
Execute 386501     15.62      19.40       1851      65648     501111      97126
Fetch   374864     12.87      18.03       6325    1073649        182     347243
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total   780128     28.94      37.89       8176    1139299     501293     444369
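One small caveat: bash aliases don't actually take positional parameters, so the `${1}` in the alias above expands to nothing and the filename you type simply lands at the end of the grep command (which is why it still works). A shell function makes the intent explicit; in this sketch the sample file is fabricated to stand in for a real tkprof output:

```shell
# Function form of the alias: grep prints the matching line plus
# the 9 lines after it, i.e. the totals table at the end of the file
_show_tot() {
    grep -a -A9 "OVERALL TOTALS" "$1"
}

# Fabricated stand-in for a real tkprof output file
printf 'header line\nOVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS\ncall  count  cpu\ntotal    169  32.39\n' > /tmp/sample_tkprof.txt

_show_tot /tmp/sample_tkprof.txt
```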
ASKER
Thank you so much, all, for the valuable comments. I'll follow your steps above.
Run a trace with binds and waits during the process,
and then check via tkprof what the slowest parts are:
https://oracle-base.com/articles/misc/sql-trace-10046-trcsess-and-tkprof
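Enabling such a trace can be sketched as follows; level 12 of event 10046 captures both binds and waits, and DBMS_MONITOR is the supported way to trace another session (the SID/SERIAL# values here are placeholders):

```sql
-- In the session you want to trace:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the package ...
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Or trace another session by SID and SERIAL# (values hypothetical):
EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 456);
```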
I usually go for:
tkprof tracefile whatever.txt sort=exeela
Items like this usually require hands-on access.
Tuning by mail/website is a very unrewarding process and usually doesn't lead to the best solution.