Solved

Tuning Insert.

Posted on 2013-01-31
433 Views
Last Modified: 2013-03-15
I want to insert data in append mode to tune my trigger.
The package deletes from tableA; a trigger then inserts each deleted row into a history table.
The insert into the tableA_hist history table is taking 2-3 hours.
0
Comment
Question by:gram77
22 Comments
 

Author Comment

by:gram77
Comment Utility
create or replace package body myPKG
as
begin

delete
from tableA
where idate = '21-Jan-13';

end;

create or replace
TRIGGER myTrig
AFTER UPDATE OR DELETE OR INSERT
ON tableA
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
declare
begin
   if deleting
   then
      insert /*+ append */ into tableA_hist
         (col1,
          col2,
          col3,
          col4,
          col5)
      values
         (:old.col1,
          :old.col2,
          :old.col3,
          :old.col4,
          :old.col5);
   end if;
end;

ORA-04088: error during execution of trigger 'PBSDDEV.USR_OVRD_ELMNT_SPEC_AIUR'
ORA-06512: at "PBSDDEV.MRGN_OVRD_POSTLOAD", line 1336
ORA-06512: at line 9
12838. 00000 -  "cannot read/modify an object after modifying it in parallel"
0
 
LVL 73

Expert Comment

by:sdstuber
Comment Utility
I've mentioned this before: you really should post your code in the question itself, not in a reply to yourself. Doing that hides the question by making it look like someone has already responded.
0
 

Author Comment

by:gram77
Comment Utility
Will take care
0
 
LVL 76

Expert Comment

by:arnold
Comment Utility
Depending on the requirement it might be advantageous to populate the history table with a trigger on insert/update of tableA.
Presumably the history is for archival purposes rather than a whole data set.
0
 
LVL 73

Expert Comment

by:sdstuber
Comment Utility
take off the append hint to avoid the 12838 error
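For context - this is my reading of the error, not something stated in the thread - ORA-12838 fires because a direct-path (/*+ append */) insert locks the loaded segment until commit, and a FOR EACH ROW trigger runs the insert again for the next deleted row before any commit is possible. A conventional-path insert in the trigger body avoids it:

```sql
-- Conventional-path insert: no /*+ append */ hint, so tableA_hist is not
-- direct-path locked between per-row trigger firings and ORA-12838 goes away.
insert into tableA_hist (col1, col2, col3, col4, col5)
values (:old.col1, :old.col2, :old.col3, :old.col4, :old.col5);
```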
0
 

Author Comment

by:gram77
Comment Utility
I added the append hint to tune the insert. How else do I tune it?
0
 
LVL 73

Expert Comment

by:sdstuber
Comment Utility
your code is generating an error; "tuning" it isn't particularly important if it simply fails.
removing append should allow it to complete.

but, for tuning - don't do it row-by-row.
a set-based approach will be more complicated, but the insert itself is fine; the problem is executing it again and again, once per deleted row.

also, since the insert is part of a delete trigger, are you sure the problem is in the insert and not the delete?

what are the wait events?
0
 

Author Comment

by:gram77
Comment Utility
If I disable the trigger, the delete itself is fast. However, if I enable the trigger it is held up in the insert
0
 

Author Comment

by:gram77
Comment Utility
I have inserted into the history table before deleting, and commented out the trigger:

create or replace package body myPKG
as
begin

insert into tableA_hist
   (col1, col2)
select col1, col2
from tableA
where idate = '21-Jan-13';

delete
from tableA
where idate = '21-Jan-13';

end;

--disabled the trigger
create or replace
TRIGGER myTrig
AFTER UPDATE OR DELETE OR INSERT
ON tableA
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
declare
begin
   if deleting
   then
      insert /*+ append */ into tableA_hist
         (col1,
          col2,
          col3,
          col4,
          col5)
      values
         (:old.col1,
          :old.col2,
          :old.col3,
          :old.col4,
          :old.col5);
   end if;
end;
0
 
LVL 73

Expert Comment

by:sdstuber
Comment Utility
that should be a lot faster

you can put the append into the insert now that it's not part of the delete trigger


you might want to adjust the isolation level of your session if you're concerned about someone committing a transaction between the time the insert starts and the time the delete starts.
0
 

Author Comment

by:gram77
Comment Utility
We are using the dbms_lock package.
Is there any other way to define the isolation level?
0

 
LVL 73

Expert Comment

by:sdstuber
Comment Utility
alter session set isolation_level ....


dbms_lock doesn't do that
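A sketch of the full statement - hedging here: SERIALIZABLE is the standard Oracle isolation level that makes a multi-statement transaction read one consistent snapshot, but whether it suits this batch window is worth checking:

```sql
-- Issue before the transaction starts; the insert-into-history and the
-- subsequent delete will then both see the same snapshot of tableA.
alter session set isolation_level = serializable;
```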
0
 

Author Comment

by:gram77
Comment Utility
It is still taking the same time after I commented out the insert into the history table in the trigger and moved it out into the proc.
I want to try the insert in APPEND mode and change the history table to NOLOGGING.

alter table tableA_hist nologging;

create or replace package body myPKG
as
begin

insert /*+ APPEND */ into tableA_hist
   (col1, col2)
select col1, col2
from tableA
where idate = '21-Jan-13';

delete
from tableA
where idate = '21-Jan-13';

end;

Also, the log_mode in the QA environment is NOARCHIVELOG, and in production it is ARCHIVELOG.
Will this slow down the insert in the production environment?

Is there any other way I can tune the insert without choosing NOLOGGING, as NOLOGGING has its own pitfalls?
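For reference, the current log mode of each environment can be confirmed directly, which would verify the QA-vs-production difference mentioned above:

```sql
-- Requires SELECT privilege on v$database.
select log_mode from v$database;   -- ARCHIVELOG or NOARCHIVELOG
```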
0
 
LVL 73

Expert Comment

by:sdstuber
Comment Utility
yes, archivelog will be slower, but not much.

do put append back in.

your insert doesn't have any complexity to lend itself to tuning the syntax.
Do you have an index on idate so the select will run efficiently?

Are there indexes or triggers on tableA_hist?  If so, those will slow down inserts.
0
 

Author Comment

by:gram77
Comment Utility
The same proc in UAT, DEV, and other environments is not long running; the bottleneck is only in QA.
In UAT and the other environments where the proc finishes fast, the index was analyzed more recently than in QA - this may be one of the reasons.
There is clearly an environmental difference.

The indexes in all the environments are valid.

Also, I don't understand how an index rebuild or index analysis can tune an insert. An insert is not like a select, where a recently analyzed table is faster than one that isn't.

In my mind the only way the insert can benefit is by disabling the index and enabling it later, or by avoiding archiving - not by rebuilding or reanalyzing an index.
0
 

Author Comment

by:gram77
Comment Utility
I have analyzed the tables, used the append hint, and also created indexes on the large tables; still the insert takes 2 hours to complete.

The insert is trying to add 1 million rows at once. Is this an issue?

Will it help if I insert smaller chunks, say 100,000 rows at a time, and then commit?
0
 
LVL 73

Assisted Solution

by:sdstuber
sdstuber earned 500 total points
Comment Utility
analyzing the index on the table you're selecting from is to tune the select portion of the statement.

you are correct that the insert portion of the statement won't be helped; in fact, the presence of indexes on the table you are inserting into will make the insert slower.

doing ten inserts of 100K each should be slower: you'll do the exact same IO, but you'll pay the overhead of sql execution 10 times, plus you'll have to execute the select search 10 times.

if it's still slow, what are the wait events?
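A quick way to sample the wait events while the insert is running - a sketch, assuming you can query v$session from a second session and know the loading session's SID (the bind name is hypothetical):

```sql
-- :loading_sid is the SID of the session running the insert.
select event, state, seconds_in_wait
from   v$session
where  sid = :loading_sid;
```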
0
 

Author Comment

by:gram77
Comment Utility
would it help to use NOLOGGING on the history table?

I see Long Ops: Sort Output taking time.
0
 

Author Comment

by:gram77
Comment Utility
I can bulk collect into a PL/SQL collection with a single fetch, then insert into the history table 10 times, 100,000 rows each time, along with NOLOGGING on the history table and the index it uses.
Will this help?
0
 
LVL 73

Accepted Solution

by:
sdstuber earned 500 total points
Comment Utility
nologging can make the insert faster, make sure you take a backup afterward though

using bulk operations will simply be adding steps.


Rather than trying to modify the process on QA, if it runs well everywhere else, try to find what is different.

What are the wait events?  trace and tkprof the process.

since that takes a while, use that time to double-check all objects.
Something is different between your systems.  Find the differences
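One common way to get that trace - a sketch using the classic 10046 event (a DBA may prefer dbms_monitor instead; the identifier 'hist_load' is just an example):

```sql
alter session set tracefile_identifier = 'hist_load';
-- Level 8 = SQL trace plus wait events.
alter session set events '10046 trace name context forever, level 8';
-- ... run the package here ...
alter session set events '10046 trace name context off';
```

The resulting trace file (under the database's trace/dump directory) can then be summarized with tkprof, e.g. `tkprof <tracefile> report.txt sys=no`.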
0
 

Author Comment

by:gram77
Comment Utility
sdstuber
"nologging can make the insert faster, make sure you take a backup afterward though"
How do I ensure a backup?
0
 
LVL 73

Assisted Solution

by:sdstuber
sdstuber earned 500 total points
Comment Utility
ask your dba, it's not an action you can apply to the sql
0
