Solved

Slow job execution with a package: parse count = execute count

Posted on 2013-05-13
320 Views
Last Modified: 2013-05-17
Hi folks, I'm experiencing extremely slow execution of a package (see the attached HISR_FUTURE.pks and HISR_FUTURE.pkb).

The package inserts or updates rows in a particular table, one for each pointnumber it detects, with data every five minutes and a retention age of 180 days.

Inserting only 2 points takes 42 seconds, on a test table with no data.

The real problem arises when there are many pointnumbers: with, say, 50 pointnumbers and a table holding several million records, it takes over 20 hours to complete.

It does not throw any errors. The tables are not partitioned.

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production




Tracing the session that runs the job, I see that the parse count is equal to the execute count (see line 2455 of file.out):

INSERT INTO TESTIF(UTCTIME,POINTNUMBER)
   VALUES
   ( :past_date_found, :device_1 )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   228098      2.65       2.69          0          0          0           0
Execute 228098     16.30      17.51         48     234385    1736931      228098
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   456196     18.95      20.20         48     234385    1736931      228098



These are almost equal to the overall totals (see line 2727):

 OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   228335      2.67       2.71          0          4          0           0
Execute 228415     16.36      17.79         50     234575    1737155      228129
Fetch      148      0.00       0.02          1        424          0          74
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   456898     19.04      20.54         51     235003    1737155      228203

Misses in library cache during parse: 19
Misses in library cache during execute: 5

   51  user  SQL statements in session.
  123  internal SQL statements in session.
  174  SQL statements in session.
   12  statements EXPLAINed in this session.
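
For context: a parse count equal to the execute count is the classic signature of a statement being re-parsed on every execution, e.g. dynamic SQL built and run row by row inside a loop. A minimal sketch of that pattern next to a bulk-bind alternative that parses once; the loop and sample values here are hypothetical, only the TESTIF columns come from the trace above:

DECLARE
  TYPE t_dates  IS TABLE OF DATE;
  TYPE t_points IS TABLE OF NUMBER;
  l_dates  t_dates  := t_dates(SYSDATE - 1, SYSDATE - 2);
  l_points t_points := t_points(101, 102);
BEGIN
  -- Pattern seen in the trace: dynamic SQL executed once per row,
  -- so every execute is preceded by a parse call.
  FOR i IN 1 .. l_dates.COUNT LOOP
    EXECUTE IMMEDIATE
      'INSERT INTO testif (utctime, pointnumber) VALUES (:1, :2)'
      USING l_dates(i), l_points(i);
  END LOOP;

  -- Alternative: a static statement with FORALL is parsed once and
  -- executed as a single bulk call, cutting parse and call overhead.
  FORALL i IN 1 .. l_dates.COUNT
    INSERT INTO testif (utctime, pointnumber)
    VALUES (l_dates(i), l_points(i));
END;
/

In the package itself, the equivalent change is to replace per-row EXECUTE IMMEDIATE calls with static SQL or BULK COLLECT/FORALL, so the INSERT is parsed once per run rather than 228,098 times.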




I replaced the way it looks up the maximum as follows:

- line 511:

'SELECT ' || hisr_mngr_rec.datetimecolumnname || '
   FROM (SELECT * FROM ' || hisr_mngr_rec.tablename || '
         ORDER BY ' || hisr_mngr_rec.datetimecolumnname || ' DESC)
  WHERE rownum = 1';


- line 693:

'SELECT ' || hisr_mngr_rec.datetimecolumnname || '
   FROM (SELECT * FROM ' || hisr_mngr_rec.tablename || '
         ORDER BY ' || hisr_mngr_rec.datetimecolumnname || ' DESC)
  WHERE rownum = 1 AND ';



But I saw no change.
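
For what it's worth, swapping MAX() for the ORDER BY/ROWNUM form changes little on its own: without an index on the datetime column, either version has to read the entire table. A minimal sketch of the two equivalent lookups, using a hypothetical table hisr_test with a utctime column:

-- Both forms return the latest timestamp; without an index on
-- utctime, each one must scan the whole table:
SELECT MAX(utctime) FROM hisr_test;

SELECT utctime
  FROM (SELECT utctime FROM hisr_test ORDER BY utctime DESC)
 WHERE ROWNUM = 1;

With an index on the column, the MAX() form can be served by Oracle's INDEX FULL SCAN (MIN/MAX) access path, touching only a handful of blocks instead of the full table.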

Could you give me an idea of how to improve this behavior?

How can I speed up the execution of the job?

If you need more information, please ask.

Thank you!
HISR-FUTURE-body.sql
HISR-FUTURE-header.sql
archive-out-tkprof.txt
tableif.sql
Question by:carlino70
2 Comments
 
LVL 29

Accepted Solution

by:
MikeOM_DBA earned 500 total points
ID: 39162903
As a start:
1) Change all these OPEN/FETCH/CLOSE blocks to single SQL statements to eliminate a lot of the parsing:
      /*
      || check for device number columns
      */
      OPEN checkdevs_cur(hisr_mngr_rec.FrequencyEntryNumber);
      FETCH checkdevs_cur INTO num_sets;
      CLOSE checkdevs_cur;
      
      -- To this:
      
      SELECT COUNT(DISTINCT DeviceSetFK)
        INTO num_sets
        FROM HISRDeviceRange
       WHERE FrequencyEntryFK = hisr_mngr_rec.FrequencyEntryNumber;
       
      -- Or this (create function):
      
      num_sets := checkdevs_func(hisr_mngr_rec.FrequencyEntryNumber);
      

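A minimal sketch of what that helper function might look like; the name and the query come from the suggestion above, the rest of the body is an assumption:

CREATE OR REPLACE FUNCTION checkdevs_func (
  p_freq_entry IN HISRDeviceRange.FrequencyEntryFK%TYPE
) RETURN NUMBER
IS
  l_num_sets NUMBER;
BEGIN
  -- Same query as the cursor, wrapped once so every caller reuses
  -- a single parsed statement.
  SELECT COUNT(DISTINCT DeviceSetFK)
    INTO l_num_sets
    FROM HISRDeviceRange
   WHERE FrequencyEntryFK = p_freq_entry;
  RETURN l_num_sets;
END checkdevs_func;
/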

2) For the queries with MAX() -- lines 511 and 693 -- create indexes on those columns in all the tables:
CREATE INDEX {DateTimeColumnName}_IDX
    ON {TableName}({DateTimeColumnName} DESC)
    . . .  E t c . . .
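
As a concrete, hypothetical instance of that template, using the utctime column the author mentions in the follow-up below (the table and index names are assumptions):

-- Hypothetical concrete instance of the template above:
CREATE INDEX testif_utctime_idx
    ON testif (utctime DESC);

With such an index in place, the ORDER BY ... DESC / rownum = 1 lookup can be answered from the first index entries instead of sorting the whole table.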


   
:p
 

Author Comment

by:carlino70
ID: 39175199
Mike, I created the indexes on the utctime column and the behavior is much better.

Also, I changed the lookup to use rownum = 1.



Thanks again!
