
Solved

Slow job execution with a package: parse count = execute count

Posted on 2013-05-13
Medium Priority
328 Views
Last Modified: 2013-05-17
Hi folks, I'm experiencing extremely slow execution of a package (see attached HISR_FUTURE.pks and HISR_FUTURE.pkb).

The package performs inserts or updates in a particular table, for each pointnumber it detects, with data every five minutes and an age of 180 days.

Inserting only 2 points takes 42 seconds, on a test table without data.

The real problem arises when there are many pointnumbers, say 50, and the table has several million records; then it takes over 20 hours to complete.

It does not throw any errors. The table is not partitioned.

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production




Tracing the session that runs the job, I see that the number of parses is equal to the number of executes (see line 2455 of the attached .out file):

INSERT INTO TESTIF(UTCTIME,POINTNUMBER)
   VALUES
   ( :past_date_found, :device_1 )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   228098      2.65       2.69          0          0          0           0
Execute 228098     16.30      17.51         48     234385    1736931      228098
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   456196     18.95      20.20         48     234385    1736931      228098



This is almost equal to the overall totals (see line 2727):

 OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   228335      2.67       2.71          0          4          0           0
Execute 228415     16.36      17.79         50     234575    1737155      228129
Fetch      148      0.00       0.02          1        424          0          74
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   456898     19.04      20.54         51     235003    1737155      228203

Misses in library cache during parse: 19
Misses in library cache during execute: 5

   51  user  SQL statements in session.
  123  internal SQL statements in session.
  174  SQL statements in session.
   12  statements EXPLAINed in this session.

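A parse count that equals the execute count is the usual signature of a statement being re-parsed on every iteration, for example when it is built and run with EXECUTE IMMEDIATE inside a loop rather than parsed once and re-executed. A minimal sketch of the two patterns, using the TESTIF test table from the trace but hypothetical PL/SQL names (point_list is an assumed collection, not code from the attached package):

-- Pattern that produces parse count ~= execute count: the statement text is
-- handed to the SQL engine again on every iteration.
FOR i IN 1 .. point_list.COUNT LOOP
   EXECUTE IMMEDIATE
      'INSERT INTO TESTIF (utctime, pointnumber) VALUES (:1, :2)'
      USING past_date_found, point_list(i);
END LOOP;

-- Pattern that parses once and executes many times: static SQL inside PL/SQL
-- is cached as an implicit cursor, and FORALL sends all the rows in one call.
FORALL i IN 1 .. point_list.COUNT
   INSERT INTO TESTIF (utctime, pointnumber)
   VALUES (past_date_found, point_list(i));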



I replaced the way it looks up the maximum date value as follows:

- line 511

'Select ' || hisr_mngr_rec.datetimecolumnname || '
 from (SELECT * FROM ' || hisr_mngr_rec.tablename || '
 ORDER BY ' || hisr_mngr_rec.datetimecolumnname || ' DESC)
 where rownum = 1';


- line 693

'Select ' || hisr_mngr_rec.datetimecolumnname || '
 from (SELECT * FROM ' || hisr_mngr_rec.tablename || '
 ORDER BY ' || hisr_mngr_rec.datetimecolumnname || ' DESC)
 where rownum = 1 and ';

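(For context, a dynamic string like this is presumably executed with EXECUTE IMMEDIATE ... INTO; the l_sql and l_max_date names below are illustrative only, not taken from the attached package:)

-- Build the statement, then run it once per table/column pair
l_sql := 'Select ' || hisr_mngr_rec.datetimecolumnname ||
         ' from (SELECT * FROM ' || hisr_mngr_rec.tablename ||
         ' ORDER BY ' || hisr_mngr_rec.datetimecolumnname || ' DESC)' ||
         ' where rownum = 1';

EXECUTE IMMEDIATE l_sql INTO l_max_date;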


But I saw no improvement.

Could you give me an idea of how to improve this behavior?

How can I speed up the execution of the job?

If you need more information, please ask.

Thank you!
HISR-FUTURE-body.sql
HISR-FUTURE-header.sql
archive-out-tkprof.txt
tableif.sql
Question by:carlino70
2 Comments
 
LVL 29

Accepted Solution

by:
MikeOM_DBA earned 1500 total points
ID: 39162903
As a start:
1) Change all these OPEN/FETCH/CLOSE to single sql statements to eliminate a lot of the parsing:
      /*
      || check for device number columns
      */
      OPEN checkdevs_cur(hisr_mngr_rec.FrequencyEntryNumber);
      FETCH checkdevs_cur INTO num_sets;
      CLOSE checkdevs_cur;

      -- To this:

      SELECT COUNT(DISTINCT DeviceSetFK)
        INTO num_sets
        FROM HISRDeviceRange
       WHERE FrequencyEntryFK = hisr_mngr_rec.FrequencyEntryNumber;

      -- Or this (create a function):

      num_sets := checkdevs_func(hisr_mngr_rec.FrequencyEntryNumber);


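A minimal sketch of such a function, using the HISRDeviceRange table and column names from the snippet above (the function name and signature are assumptions; it does not exist in the posted package):

CREATE OR REPLACE FUNCTION checkdevs_func (p_freq_entry IN NUMBER)
   RETURN NUMBER
IS
   l_num_sets NUMBER;
BEGIN
   -- Static SQL inside PL/SQL: parsed once and reused from the cursor cache
   SELECT COUNT(DISTINCT DeviceSetFK)
     INTO l_num_sets
     FROM HISRDeviceRange
    WHERE FrequencyEntryFK = p_freq_entry;

   RETURN l_num_sets;
END checkdevs_func;
/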

2) For the queries with MAX() -- lines 511 and 693 -- create indexes on those columns in all the tables:
CREATE INDEX {DateTimeColumnName}_IDX
    ON {TableName}({DateTimeColumnName} DESC);
    -- ...etc...

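With such an index in place, a MAX() lookup on the datetime column can typically be satisfied from the index alone rather than by a full table scan. A hedged example against the TESTIF/utctime test table mentioned above:

-- With an index on utctime, Oracle can resolve this with an
-- INDEX FULL SCAN (MIN/MAX): it reads one end of the index and stops.
SELECT MAX(utctime)
  FROM testif;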

   
:p
 

Author Comment

by:carlino70
ID: 39175199
Mike, I created the indexes on the utctime column and the behavior is much better.

Also, I changed the search to use rownum = 1.

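(Presumably the lookup now takes roughly this shape once the dynamic column and table names are substituted; with the new index on utctime, and assuming the column has no NULLs, Oracle can read the index in descending order and stop at the first row:)

SELECT utctime
  FROM (SELECT utctime FROM testif ORDER BY utctime DESC)
 WHERE rownum = 1;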


Thanks again!
