Solved

Simple SQL but large execution times

Posted on 2011-02-24
935 Views
Last Modified: 2012-05-11
Looking for some help cutting down execution times on these SQL statements:

Below are the steps in my DB package. They are simple, but Step 2 and Step 3 have long execution times.

Step 1:
SELECT <primary_key> BULK COLLECT INTO < some_array>
FROM PARENT_TABLE
WHERE <input_parameters>
ORDER BY <input_parameter>

** Execution time is fast (milliseconds) since there are indexes on the WHERE/ORDER BY columns **

**PARENT_TABLE has nearly 1 B rows**

This is dynamic SQL built from a combination of 10 input parameters to the DB package.

Step 2:
SELECT <50 parent_columns> BULK COLLECT INTO < some_array>
FROM PARENT_TABLE
WHERE <primary_key from step1>

** Execution time is up to 35 seconds – very slow **

** I tried selecting all 50 columns in Step 1 itself, but that took even longer because the sort then has to carry all 50 columns – execution time up to 40 seconds **

I get about 3000 IDs from Step 1. What is the best way to pass 3000 IDs to the Step 2 SQL? Are there any optimizations I could use?
I could only think of:
1. Gather the output of Step 1 into an Oracle collection via BULK COLLECT, then loop through the collection with a FOR loop, issue individual SQL statements, and gather the results.

2. Gather the output of Step 1 into an Oracle collection via BULK COLLECT, cast the collection as a table, and use an IN clause (a direct-join variation is sketched just below), like:
SELECT <50 columns> BULK COLLECT INTO <some_array>
FROM PARENT_TABLE
WHERE primary_key in
(SELECT column_value FROM THE
(SELECT CAST(oracle_array() as oracle_array) FROM DUAL))


Step 3:
SELECT <child1_columns>
FROM CHILD1_TABLE
WHERE <1200 ID’s from step2>

SELECT <child2_columns>
FROM CHILD2_TABLE
WHERE <800 ID’s from step2>

** Execution time is up to 20 seconds – very slow **

** The WHERE column is indexed **

**CHILD1_TABLE has 80 M rows and CHILD2_TABLE has 140 M rows**

From the output of Step 2, there are indicator columns on the parent table that tell me which IDs have child records.
For example, from the 3000 IDs I can filter down to 1200 and 800 IDs respectively and pass only those to Step 3.
In Step 3 I tried the same approaches as options 1 and 2 of Step 2 – and got the same slow execution times.



The DBA did all the tracing, tkprof'ing, etc. We found that only the I/O numbers were big.

I know the above description is a little abstract, but I wanted to lay out the design approach. Please let me know if I should take a different approach/design.

Question by: NewPrgmr

15 Comments
 
Expert Comment by: MrNed
ID: 34978221
I think you are better off trying to do everything in one go within Oracle. That reduces the overhead of lots of separate queries and lets Oracle optimize the whole thing at once. I'm not sure what kind of flexibility you have, but various thoughts:

1. Combining Steps 1 and 2 should be simple (see the sketch after this list). I don't understand your comment about selecting all 50 columns in Step 1 taking 40 seconds because of the sort. What are the ORDER BY columns? Maybe post an explain plan.
2. 20 seconds for Step 3 sounds slow on any hardware. Try creating an index containing every selected column plus the WHERE ID column so the table lookup can be skipped entirely (see the CREATE INDEX sketch below). That should at least halve the number of I/Os required.
3. At 100M+ rows you are probably getting into territory where partitioning would help. Of course, that is an extra cost, and you would need to change your schema and maintenance to take advantage of it.
4. I'm guessing the IDs selected in Step 1 by the input parameters are scattered throughout the table. Is there any way you could cluster them together, via a clustered table or even by periodically recreating the table with a fixed row order, so that each disk I/O grabs many rows of interest instead of only one?
5. Maybe you have very bad row chaining in this table causing excessive IO.
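
For point 1, roughly what I have in mind (untested, using the placeholder names from your post) – keep the cheap ORDER BY plus stop-key on the narrow key list, then join back for the 50 columns in the same statement:

SELECT p.<50 parent_columns>
  FROM PARENT_TABLE p
  JOIN (SELECT <primary_key>
          FROM (SELECT <primary_key>
                  FROM PARENT_TABLE
                 WHERE <input_parameters>
                 ORDER BY <input_parameter>)
         WHERE ROWNUM <= :max_rows) k      -- this inner part is your fast Step 1
    ON p.<primary_key> = k.<primary_key>
 ORDER BY p.<input_parameter>              -- cheap re-sort of only a few thousand wide rows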

If you have the data from your DBA handy, it might help to post it. Ideally I look at the wait-time breakdown by event and the difference in v$sesstat before and after running each query. There are tools that can automate gathering this data for you.
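
For point 2, the kind of index I mean (names are placeholders – put the ID column first, then every column the Step 3 query selects, so the query can be answered from the index alone):

CREATE INDEX child1_cover_ix
    ON CHILD1_TABLE (<where_id_column>, <child1_columns>);
-- the range scan on <where_id_column> then finds every selected column
-- in the index leaf blocks, so no table access by rowid is needed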

What Oracle version, OS+version, HW specs?

Expert Comment by: Mark Geerlings
ID: 34983649
I agree with MrNed that doing this in three separate steps may make the process slower in Oracle than doing the work in one (more complex) step.

After gathering the data in your current step 3, what needs to happen with these results?

In addition to the other details that MrNed asked for, can you also tell us (the query after this list shows one way to check the memory settings):
1. How much RAM does the server have?
2. What is your SGA_MAX_SIZE?
3. What is your BUFFER_CACHE_SIZE?
4. Will this query (or process) be used by only one person (or automated program) at a time, or must your database support multiple concurrent runs of it?
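
For questions 2 and 3, something like this should show the current settings (in 10g the buffer cache is the DB_CACHE_SIZE component, possibly auto-managed under SGA_TARGET):

SELECT name, value
  FROM v$parameter
 WHERE name IN ('sga_max_size', 'sga_target',
                'db_cache_size', 'pga_aggregate_target');
-- a db_cache_size of 0 means it is being auto-tuned within sga_target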

Author Comment by: NewPrgmr
ID: 34984428
@MrNed and @Markgeer - below are tkprofs/details and some explanation:

My Step 1:

SELECT TRANS_SRGT_ID
  FROM (SELECT TRANS_SRGT_ID
          FROM transaction
         WHERE src_sys_acct_nbr IN
               (SELECT column_value
                  from THE (SELECT CAST(:ACCT_ARRAY AS ACCT_LIST) FROM DUAL))
           AND RECDATE BETWEEN :RECDATE_STRT_IN AND :RECDATE_END_IN
           AND SRC_SYS_ID = 1
         Order by src_sys_trans_nbr ASC)
 WHERE ROWNUM < 2002

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.03       0.42         50         69          0        2001
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.03       0.42         50         69          0        2001

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   COUNT (STOPKEY)
      0    VIEW
      0     NESTED LOOPS (SEMI)
      0      TABLE ACCESS   MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF 
                 'TRANSACTION' (TABLE) PARTITION:ROW LOCATION
      0       INDEX   MODE: ANALYZED (FULL SCAN) OF 
                  'TRANSACTION_GLBL_UQ01' (INDEX (UNIQUE))
      0      VIEW OF 'VW_NSO_1' (VIEW)
      0       FILTER
      0        COLLECTION ITERATOR (PICKLER FETCH)
      0         FAST DUAL


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        50        0.05          0.40



** I get a bunch of account numbers from Java and pass them as an array to the DB package; that array is used in the SQL. **
** I do the ORDER BY and return a subset of the result because the user paginates through the result set... so on the next page request I fetch the next batch of TRANS_SRGT_ID values. **
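
In other words, the paging conceptually works like this (simplified sketch; :PAGINGKEY_IN here stands for the last src_sys_trans_nbr returned on the previous page):

SELECT TRANS_SRGT_ID
  FROM (SELECT TRANS_SRGT_ID
          FROM transaction
         WHERE src_sys_acct_nbr IN
               (SELECT column_value
                  FROM THE (SELECT CAST(:ACCT_ARRAY AS ACCT_LIST) FROM DUAL))
           AND RECDATE BETWEEN :RECDATE_STRT_IN AND :RECDATE_END_IN
           AND SRC_SYS_ID = 1
           AND src_sys_trans_nbr > :PAGINGKEY_IN   -- resume after the last key of the previous page
         ORDER BY src_sys_trans_nbr ASC)
 WHERE ROWNUM < 2002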

My Step 2:

SELECT < 50 columns >
  FROM TRANSACTION
 WHERE TRANS_SRGT_ID IN
       ((SELECT COLUMN_VALUE
          FROM THE (SELECT CAST(:B1 AS NUMBER_LIST) FROM DUAL)))

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0         15          0           0
Fetch        1      1.27      32.42       2727       8006          0        2001
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      1.29      32.44       2727       8021          0        2001

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   NESTED LOOPS
      0    VIEW OF 'VW_NSO_1' (VIEW)
      0     HASH (UNIQUE)
      0      FILTER
      0       COLLECTION ITERATOR (PICKLER FETCH)
      0        FAST DUAL
      0    TABLE ACCESS   MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF 
               'TRANSACTION' (TABLE) PARTITION:ROW LOCATION
      0     INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 
                'TRANSACTION_GLBL_PK' (INDEX (UNIQUE))


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      2727        0.35         31.62



My Step 3:

SELECT *
  FROM TRANSACTION_SMA 
 WHERE TRANS_SRGT_ID IN
       ((SELECT COLUMN_VALUE
          FROM THE (SELECT CAST(:B1 AS NUMBER_LIST) FROM DUAL)))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      1.67      22.68       1789     100967          0        1711
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      1.67      22.68       1789     100967          0        1711

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   NESTED LOOPS
      0    VIEW OF 'VW_NSO_1' (VIEW)
      0     HASH (UNIQUE)
      0      COLLECTION ITERATOR (PICKLER FETCH)
      0       FAST DUAL
      0    PARTITION RANGE (ALL) PARTITION: START=1 STOP=42
      0     INDEX   MODE: ANALYZED (RANGE SCAN) OF 
                'TRANSACTION_SMA_LCL_ALL' (INDEX) PARTITION: START=1 STOP=42


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      1789        0.31         21.47



** In the Step 3 SQL, the child table has only about 7 columns, so yes, I can list all the column names instead of SELECT *. Since that requires a code change, I left it as SELECT * for the results presented here. **

======*********======*********======*********======*********======*********======*********

1. I know it sounds odd that I could not combine Step 1 and Step 2, but from sysdate printouts I noticed that the combination actually took more time. I can get a tkprof for that combination (I need to change code and get hold of the DBA for the tkprof, which takes some time).

2. The DBA added an index for Step 3 covering the selected columns plus the WHERE column. By default the SQL did not use that index, so we forced it with a hint, but there was no major improvement.

3. All tables are partitioned by RECDATE on the parent (and the same on the children). For Steps 2 and 3 we tried adding the RECDATE predicate, the same as Step 1 - again no major gains.

4. Yes, you are correct: the IDs from Step 1 are scattered across different partitions - hence the delay. I'm a Java guy, not a DB expert, but I've put the cluster idea forward to the DBAs.

5. I passed your comment about bad row chaining on to the DBA - I'll let you know how that goes.

6. @Markgeer - I tried a more complex process too and then simplified it into these 3 steps: before, I did Step 1 and then combined Step 2 + Step 3 via UNION ALL - that didn't give any spectacular results. Can you suggest a different process? Some kind of join?

7. At the end I combine the results from Step 2 and Step 3 and pass them to a Java web service.
In fact, multiple users will be using our service, sending various requests - I parse those requests (Java web service), hit the DB package, and send the results back to them.

Oracle Version: 10.2.0.4
OS+version: Linux on System Z (Linux 2.6.16.60-0.42.4-default #1 SMP Fri Aug 14 14:33:26 UTC 2009 s390x s390x s390x GNU/Linux)
HW Specs: I don't have that info
Server RAM: 9254500k total
Sga_Max_Size: big integer 900M
buffer_cache_size is managed by sga_target in 10g, so all caches are automatically managed by sga_target ... it is 900M

Let me know if my posted data is not sufficient. The DBAs have been hacking away at this issue for about 4 months, so I can get you any info from them.

Expert Comment by: MrNed
ID: 34985429
The disk wait times seem a bit excessive to me in Steps 2 and 3. If you run it twice in a row for the same data, is it much faster the second time or about the same?

In Step 2 there are 2727 waits on single-block I/O reads to retrieve columns for 2001 rows. That indicates to me that row chaining/migration is involved (because it's having to read multiple blocks per row). It might be unavoidable, or maybe it can be fixed by rebuilding the table.
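
One way to check for it (assuming the standard CHAINED_ROWS table from utlchain.sql exists; on a table this size the ANALYZE will take a while):

-- populate CHAINED_ROWS with the rowids of any chained/migrated rows
ANALYZE TABLE transaction LIST CHAINED ROWS INTO chained_rows;

SELECT COUNT(*)
  FROM chained_rows
 WHERE table_name = 'TRANSACTION';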

Expert Comment by: Mark Geerlings
ID: 34985597
Is that server RAM really about 9Gb?  Is this a 32-bit O/S?

Is the SGA_MAX_SIZE really just 900m (less than 1Gb)?  That looks very small, even if this is a 32-bit system.

Is the server dedicated to running Oracle and does it have just one Oracle database instance running?

Author Comment by: NewPrgmr
ID: 34985691
@MrNed -
1. Yes, the results are very fast if I re-run the process for the same data.
The DBA bounced the database (to clear the cache), but the results were still fast... indicating that the disk storage system also has some kind of cache for the data blocks.
2. I'll ask the DBA whether we can rebuild the table. (I can get some answers to you on Monday, as it's the weekend.)

To explain further, the Step 1 output contains 2001 unique TRANS_SRGT_ID values (the primary key, and a foreign key in the child tables). I pass those 2001 unique TRANS_SRGT_IDs to Step 2 to extract the details.
So I don't understand what you mean by "because it's having to read multiple blocks per row" - but I'll pass your comments to the DBA.

@Markgeer -
I passed your questions on to the DBA and pasted his responses below. I would assume the DBA gave me correct answers :)
1. I can check whether the server is a 32-bit O/S.
2. Yes, the DBA indicated that SGA_MAX_SIZE was small. Since we are dealing with such huge tables, just as a ballpark figure, what would you suggest? About 2 GB?
3. Yes, the server is dedicated to running Oracle, but it has 4 other DB instances also running. I do remember coordinating a test with the DBA and the disk storage team: I kicked off a test run, the DBA traced the package, and the disk guys traced on their system. The DBA saw the same execution times, and the disk guys mentioned that ours was the only process running at that time (I believe this indicates that our execution times were bad even without interference).


I understand that I'm dealing with huge tables with 24 partitions - 2 years of data partitioned monthly on RECDATE. So are these expected times for Step 2 and Step 3 on an Oracle DB? Our end goal is to finish all the steps in under 15 seconds.
Another note: the parent table has 200 columns and 1 B rows of data, with an average row length of 400 bytes. Even if the Step 2 time is expected, I can't understand the times on Step 3, as that table has 140 M rows and just 7 columns.

Expert Comment by: Mark Geerlings
ID: 34985796
If the server is a 32-bit system, and if it really has about 9Gb of RAM and also has to run four other Oracle databases, then maybe an SGA_MAX_SIZE of 900Mb is the best you can do for this instance. But that will force lots of disk I/O when you process queries that have to read millions of rows.

By comparison, a 64-bit server with 32Gb of RAM and an SGA_MAX_SIZE of maybe 10Gb for this instance could maybe perform 10x faster.
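
If the RAM were available, the change itself is small – just a sketch, and the sizes would have to be balanced against the other instances on the box:

-- takes effect only after the instance is restarted
ALTER SYSTEM SET sga_max_size = 10G SCOPE = SPFILE;
ALTER SYSTEM SET sga_target   = 10G SCOPE = SPFILE;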

Author Comment by: NewPrgmr
ID: 34986327
@Markgeer - thanks for the input on SGA_MAX_SIZE. I'll pass those numbers to the DBA team. From an app-developer perspective, I would also like your input on the code process itself.
Besides combining Step 1 and Step 2, can you recommend any other changes? As I mentioned before, I did Step 1 and then combined Step 2 and Step 3 (there are 4 child tables) via UNION ALL.
Is there any other, fancier way?

Expert Comment by: Mark Geerlings
ID: 34988278
You have to be careful of memory usage and test the results on your server when you use "bulk collect". Yes, under some conditions "bulk collect" can be much faster than processing individual rows in a cursor loop. But this comes at a price: more RAM is used. If your server has enough physical RAM (and your SGA_MAX_SIZE and/or PGA_AGGREGATE_TARGET is large enough) to handle the amount used by the "bulk collect" processes, you can see much faster performance. But if these "bulk collect" processes force your server to swap memory out to the swap file, your performance may be worse than with cursor loops and no "bulk collect".

Are you monitoring your server's swap utilization?  Have you tested your procedure both ways (with and without the "bulk collect")?
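
If memory does turn out to be the constraint, a middle ground is to fetch in batches with a LIMIT clause – a minimal sketch, with the cursor standing in for your real Step 1 query:

DECLARE
  CURSOR c IS
    SELECT trans_srgt_id
      FROM transaction
     WHERE src_sys_id = 1;                        -- stand-in for the real Step 1 predicates
  TYPE t_ids IS TABLE OF transaction.trans_srgt_id%TYPE;
  v_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO v_ids LIMIT 500;    -- bounds PGA use per batch
    EXIT WHEN v_ids.COUNT = 0;
    -- process this batch of IDs here
  END LOOP;
  CLOSE c;
END;
/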

Author Comment by: NewPrgmr
ID: 34989778
Your earlier comment, "64-bit server with 32Gb of RAM and an SGA_MAX_SIZE of maybe 10Gb...could maybe perform 10x faster" - would that be the ideal setup for my kind of tables and bulk-collect processing? I can recommend those numbers to management and see how it goes.

I was under the assumption that the big I/O numbers meant the fetch from the disk system was spending its time going through the various partitions to get my results in Step 2 and Step 3. So with or without a cursor, it would be the same fetch - right?

I started with a cursor and then moved to the bulk-collect logic. But that was in the DEV environment (which has only 1 month of data). My current issue is in the QA environment, which has 9 months of data.
I can try it and see how it performs. I'll ask the DBAs to check on server swap utilization.

Accepted Solution by: Mark Geerlings (earned 250 points)
ID: 34990095
The estimate I gave you earlier was just my opinion. I do have 20 years of Oracle experience, which has included managing Oracle databases on both 32-bit and 64-bit systems, but I've never had the opportunity to test the same database and application on both, so that estimate is a bit of a guess. And we don't know a lot about your database or application. Before you make a recommendation or a purchase based on my estimate (which is based on incomplete information), I would recommend that you get some advice from others as well.

Yes, with either a cursor loop or a bulk collect, Oracle will have to read all of the records that match the "where" clause. Depending on how many records that is, on where they sit in the table relative to other records (scattered or adjacent to each other, etc.), on how big the BUFFER_CACHE is, how big the SGA_MAX_SIZE is, how much RAM is available, and so on, a "bulk collect" may make this process faster or slower. We don't have enough information to determine that for you.

Assisted Solution by: MrNed (earned 250 points)
ID: 34993355
Sorry, disregard my comment on multiple blocks - I read it wrong, and there's not enough information here to know for sure. Still, the average disk wait times work out to about 11-12 ms (e.g. 31.62 s / 2727 reads in Step 2 is roughly 11.6 ms per read), which isn't bad but not great. What kind of disks are you using? Since it runs faster on subsequent runs, disk I/O looks like the biggest cause of the slowdown.

The solution is to reduce the number of disk reads required :) Your queries are pretty simple, and not much can be done to improve them other than trying to merge them all together so the disk I/Os are only required once.
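
By "merge them together" I mean something along these lines – one statement driven by the ID collection, outer-joined to a child table so the parent blocks are only visited once (names taken from your trace, untested, and it assumes TRANS_SRGT_ID appears at most once per child table; otherwise keep the children separate):

SELECT t.<50 transaction columns>,
       sma.<transaction_sma columns>
  FROM TABLE(CAST(:B1 AS NUMBER_LIST)) ids
  JOIN cth_admin.transaction t
    ON t.trans_srgt_id = ids.column_value
  LEFT JOIN cth_admin.transaction_sma sma          -- repeat for the other child tables
    ON sma.trans_srgt_id = t.trans_srgt_id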

You are asking it to retrieve 2000 random values that are scattered all over the disk, so the chance of them being cached the first time it runs is fairly small, and you will rarely get multiple rows of interest in the same block. So comparing row counts, column widths, etc. isn't much use in understanding the problem - the overhead of a single disk read dwarfs any other consideration. I feel it is still far too slow for what you're doing, but I'm also used to blazingly fast disks :) If you could post the full raw trace file, maybe we can see something else. Ask the DBA to perform a level 12 trace (binds + waits).
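
The trace itself can be switched on in the session that runs the package with something like:

-- 10046 level 12 = waits + binds
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the package call ...
ALTER SESSION SET EVENTS '10046 trace name context off';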

Author Comment by: NewPrgmr
ID: 34999443
** My subscription ends today - it would be great if I could get any input on the trace below. Either way, I'll accept your suggestions as multiple solutions and close the thread this evening. Thank you very much for the input. **

I inquired: the DBAs have explored clustering and decided against it.
I'll check with the DBA on server swap utilization.
The DBA informed me that there is no row chaining.
You have both suggested combining the steps, but the tests so far show that Step 1 alone is always fast, combining Step 1 + Step 2 is slow, and combining Step 2 + Step 3 via UNION ALL is very slow. Can you give me hints on any other SQL approach? I'll try them later on.
 
Below is the complete tkprof output - I think this is your level 12 trace. Step 3 includes the SQLs against the child tables.

TKPROF: Release 10.2.0.4.0 - Production on Wed Feb 23 09:44:11 2011

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Trace file: cth3ite_ora_11481.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing 
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

declare
  -- Local variables here
  v_acct_array ACCT_LIST := ACCT_LIST('188053953');
  v_RECDATE_STRT_IN DATE := TO_DATE('01/01/2009','MM/DD/YYYY');
  v_RECDATE_END_IN DATE := TO_DATE('07/30/2010','MM/DD/YYYY');
  v_query_type_in VARCHAR2(3) := 'H';
  --v_CUSIP_ARRAY ACCT_LIST := ACCT_LIST('784924789','086233202');
  --v_SYMBOL_ARRAY ACCT_LIST := ACCT_LIST('CTH','ECM');
  --v_CUSIP_ARRAY ACCT_LIST := ACCT_LIST();
  --v_SYMBOL_ARRAY ACCT_LIST := ACCT_LIST();
  v_SYMBOL_ARRAY ACCT_LIST := ACCT_LIST();
  v_ORDER_BY_IN NUMBER := 99;
  --v_TRANTYPE_IN ACCT_LIST := ACCT_LIST('JN','RD','CD','WI');
  v_TRANTYPE_IN ACCT_LIST := ACCT_LIST();
  v_TRANTYPE_VAR_IN VARCHAR2(3) := '';
    --v_TRANTYPE_VAR_IN VARCHAR2(3);
  v_rc CTH_USER.CTH_PC_ACCTID.ALL_CURSOR;
  v_SRCSYS_IN NUMBER := 1;
  v_ACCTSUBTYPE_IN VARCHAR2(3);-- := '1';
  v_PAGINGKEY_IN NUMBER := 0;-- := '3445669999';
  v_MAXTRANS_IN NUMBER := 2000;
  v_PAGING_FLAG VARCHAR2(5):= 'true';
  v_BOS_TRANID  VARCHAR2(11);--:= 'true';
  v_IGNRSUPPR_IN VARCHAR2(5);-- := 'true';
  v_IGNREXTRNSUPPR_IN VARCHAR2(5);
  v_BOOKSRCRDSFLAG_IN VARCHAR2(5);-- := '1';
  v_TRANTYPECODE_IN VARCHAR2(5);-- := '3';
  v_PRINCIPAL_IN VARCHAR2(15) := 'AOMSYS';
begin
  -- Test statements here
  --v_rc ALL_CURSOR := ALL_CURSOR();
  /*CTH_PC_ACCTID_BULK.CTH_SP_UNION_ACCT(acct_array => v_acct_array,
                             RECDATE_STRT_IN => v_RECDATE_STRT_IN,
                             query_type_in => v_query_type_in,
                             RECDATE_END_IN => v_RECDATE_END_IN,
                             CUSIP_ARRAY => v_CUSIP_ARRAY,
                             SYMBOL_ARRAY => v_SYMBOL_ARRAY,
                             rc => v_rc);*/
    --EXECUTE IMMEDIATE 'ALTER SESSION SET SQL_TRACE =  FALSE';
    CTH_PC_40ACCT_TRANID.CTH_SP_UNION_ACCT(acct_array => v_acct_array,
                             RECDATE_STRT_IN => v_RECDATE_STRT_IN,
                             QUERY_TYPE_IN => v_query_type_in,
                             RECDATE_END_IN => v_RECDATE_END_IN,
                             --CUSIP_ARRAY => v_CUSIP_ARRAY,
                             SYMBOL_ARRAY => v_SYMBOL_ARRAY,
                             ORDER_BY_IN => v_ORDER_BY_IN,
                             TRANTYPE_ARRAY => v_TRANTYPE_IN,
                             SRCSYS_IN => v_SRCSYS_IN,
                             ACCTSUBTYPE_IN => v_ACCTSUBTYPE_IN,
                             PAGINGKEY_IN => v_PAGINGKEY_IN,
                             MAXTRANS_IN => v_MAXTRANS_IN,
                             BOSPAGING_FLAG => v_PAGING_FLAG,
                             BOS_TRANID => v_BOS_TRANID,
                             IGNRSUPPR_IN => v_IGNRSUPPR_IN,
                             IGNREXTRNSUPPR_IN => v_IGNREXTRNSUPPR_IN,
                             BOOKSRCRDSFLAG_IN => v_BOOKSRCRDSFLAG_IN,
                             TRANTYPECODE_IN => v_TRANTYPECODE_IN,
                             PRINCIPAL_IN => v_PRINCIPAL_IN,
                             rc => v_rc);
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0         24          0           0
Execute      1      0.00       0.02          0         47          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.03          0         71          0           1

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
********************************************************************************

SELECT TRANS_SRGT_ID
        FROM (SELECT TRANS_SRGT_ID
          FROM cth_admin.transaction
         WHERE src_sys_acct_nbr IN (SELECT column_value from THE(SELECT CAST(:ACCT_ARRAY AS ACCT_LIST) FROM DUAL))
           AND RECDATE BETWEEN :RECDATE_STRT_IN AND :RECDATE_END_IN  AND SRC_SYS_ID = 1 Order by src_sys_trans_nbr ASC  )  WHERE ROWNUM <   2002

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.02          0         10          0           0
Fetch        1      0.02       0.45         50         69          0        2001
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.05       0.48         50         79          0        2001

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   COUNT (STOPKEY)
      0    VIEW
      0     NESTED LOOPS (SEMI)
      0      TABLE ACCESS   MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF 
                 'TRANSACTION' (TABLE) PARTITION:ROW LOCATION
      0       INDEX   MODE: ANALYZED (FULL SCAN) OF 
                  'TRANSACTION_GLBL_UQ01' (INDEX (UNIQUE))
      0      VIEW OF 'VW_NSO_1' (VIEW)
      0       FILTER
      0        COLLECTION ITERATOR (PICKLER FETCH)
      0         FAST DUAL


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        50        0.03          0.44
********************************************************************************

SELECT RECDATE, SRC_SYS_ACCT_NBR, SRC_SYS_TRANS_NBR, ACCTSUBTYPE, NOTE1, 
  NOTE2, CUSIP, TRANTYPE, TRANSUBTYPE, MNYAMT, QTY, CHECKNUM, MAILTYPE, 
  BUYSELL, PRICE, PRINCIPALAMT, OPTOC, COMMISSION, SECFEE, XFERFEE, 
  ACCRUEDINT, SYMBOL, CANCELCORRECTCODE, ADDITIONALFEE, AVGPRICEIND, ORDERKEY,
   SESSIONID, ORDERDATE, STRFEE, NAV, CDSC, RFEE, REPID, ORIGSTLMTDATE, 
  DIVCUSIP, DIVPAYDATE, DIVIDENTTYPE, TRANS_SRGT_ID, TRANSACTION_APPLIED_IND, 
  TRANSACTION_INSURANCE_IND, TRANSACTION_SMA_IND, 
  TRANSACTION_STREET_TRADE_IND 
FROM
 TRANSACTION WHERE TRANS_SRGT_ID IN ((SELECT COLUMN_VALUE FROM THE (SELECT 
  CAST(:B3 AS NUMBER_LIST) FROM DUAL))) AND RECDATE BETWEEN :B2 AND :B1 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0         15          0           0
Fetch        1      1.27      32.42       2727       8006          0        2001
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      1.29      32.44       2727       8021          0        2001

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   NESTED LOOPS
      0    VIEW OF 'VW_NSO_1' (VIEW)
      0     HASH (UNIQUE)
      0      FILTER
      0       COLLECTION ITERATOR (PICKLER FETCH)
      0        FAST DUAL
      0    TABLE ACCESS   MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF 
               'TRANSACTION' (TABLE) PARTITION:ROW LOCATION
      0     INDEX   MODE: ANALYZED (UNIQUE SCAN) OF 
                'TRANSACTION_GLBL_PK' (INDEX (UNIQUE))


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      2727        0.35         31.62
********************************************************************************

SELECT * 
FROM
 CTH_ADMIN.TRANSACTION_APPLIED WHERE TRANS_SRGT_ID IN ((SELECT COLUMN_VALUE 
  FROM THE (SELECT CAST(:B1 AS NUMBER_LIST) FROM DUAL)))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          3          0           0
Fetch        1      0.01       0.39         31         40          0          12
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.02       0.39         31         43          0          12

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   TABLE ACCESS   MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF 
              'TRANSACTION_APPLIED' (TABLE) PARTITION:ROW LOCATION
      0    NESTED LOOPS
      0     VIEW OF 'VW_NSO_1' (VIEW)
      0      HASH (UNIQUE)
      0       COLLECTION ITERATOR (PICKLER FETCH)
      0        FAST DUAL
      0     INDEX   MODE: ANALYZED (RANGE SCAN) OF 
                'TRANSACTION_APPLIED_GLBL_PK' (INDEX (UNIQUE))


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                        31        0.02          0.38
********************************************************************************

SELECT * 
FROM
 CTH_ADMIN.TRANSACTION_SMA WHERE TRANS_SRGT_ID IN ((SELECT COLUMN_VALUE FROM 
  THE (SELECT CAST(:B1 AS NUMBER_LIST) FROM DUAL)))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      1.18      29.79       3150       6601          0        1927
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      1.19      29.80       3150       6601          0        1927

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   TABLE ACCESS   MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF 
              'TRANSACTION_SMA' (TABLE) PARTITION:ROW LOCATION
      0    NESTED LOOPS
      0     VIEW OF 'VW_NSO_1' (VIEW)
      0      HASH (UNIQUE)
      0       COLLECTION ITERATOR (PICKLER FETCH)
      0        FAST DUAL
      0     INDEX   MODE: ANALYZED (RANGE SCAN) OF 
                'TRANSACTION_SMA_GLBL_PK' (INDEX (UNIQUE))


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      3150        0.36         29.11
********************************************************************************

SELECT * 
FROM
 CTH_ADMIN.TRANSACTION_STREET_TRADE WHERE TRANS_SRGT_ID IN ((SELECT 
  COLUMN_VALUE FROM THE (SELECT CAST(:B1 AS NUMBER_LIST) FROM DUAL)))


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      1.32      36.11       3215       8118          0        1791
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      1.33      36.12       3215       8118          0        1791

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64  (CTH_USER)   (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   MODE: FIRST_ROWS
      0   TABLE ACCESS   MODE: ANALYZED (BY GLOBAL INDEX ROWID) OF 
              'TRANSACTION_STREET_TRADE' (TABLE) PARTITION:ROW LOCATION
      0    NESTED LOOPS
      0     VIEW OF 'VW_NSO_1' (VIEW)
      0      HASH (UNIQUE)
      0       COLLECTION ITERATOR (PICKLER FETCH)
      0        FAST DUAL
      0     INDEX   MODE: ANALYZED (RANGE SCAN) OF 
                'TRANSACTION_STRT_TRD_GLBL_PK' (INDEX (UNIQUE))


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      3215        0.37         35.35



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0         24          0           0
Execute      1      0.00       0.02          0         47          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.03          0         71          0           1

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message from client                     1       37.74         37.74
  SQL*Net more data from client                   1        0.00          0.00
  SQL*Net message to client                       1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      588      0.07       0.06          0          0          0           0
Execute   1157      0.22       0.25          0         28          0           0
Fetch     2009      3.97      99.41       9176      27320          0        9871
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     3754      4.27      99.72       9176      27348          0        9871

Misses in library cache during parse: 42
Misses in library cache during execute: 39

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                      9176        0.37         97.03

    6  user  SQL statements in session.
 1152  internal SQL statements in session.
 1158  SQL statements in session.
    5  statements EXPLAINed in this session.
********************************************************************************
Trace file: cth3ite_ora_11481.trc
Trace file compatibility: 10.01.00
Sort options: default

       1  session in tracefile.
       6  user  SQL statements in trace file.
    1152  internal SQL statements in trace file.
    1158  SQL statements in trace file.
      43  unique SQL statements in trace file.
       5  SQL statements EXPLAINed using schema:
           CTH_USER.prof$plan_table
             Default table was used.
             Table was created.
             Table was dropped.
   35169  lines in trace file.
      99  elapsed seconds in trace file.


Author Comment by: NewPrgmr
ID: 34999849
Also, I just found out that our hardware is 64-bit... not 32-bit as we assumed earlier.

Author Closing Comment by: NewPrgmr
ID: 35004009
The issue is still open, but I had to close the thread because my subscription ends today.