
Solved

Considerations for commit frequency in Oracle / PL/SQL

Posted on 2009-05-11
Medium Priority
1,245 Views
Last Modified: 2013-12-07
I have a procedure that processes anywhere from 1M to 4M records. Basically, it moves them from table A (staging) to table B (final production).

The commit frequency is 100 records!

I'd like to increase this to something big, like 10,000.

The code queries all unprocessed records, marks them "loaded" after every 100 records, then queries all remaining unprocessed records again, and so on.

So if I am processing 4,000,000 records, the cursor is being executed 40,000 times!

So by increasing it to 10,000, it seems like I can reduce the overhead a lot.

So am I looking at this correctly? Is there anything else I should look at?
Question by:Alaska Cowboy
18 Comments
 
LVL 1

Expert Comment

by:sundar62
ID: 24359611
Please check to see if the attached link answers your question.
http://forums.oracle.com/forums/thread.jspa?threadID=361756
 
LVL 74

Expert Comment

by:sdstuber
ID: 24360054
Yes, that's the right idea, but you may also want to consider just moving the data directly.

Or, consider skipping the step completely.

Why are you moving from A to B?

If A is just staging, why does it exist? Could you load directly into B from whatever source "stages" into A?

Can you post your code? If you "must" process from A to B, and "must" process it in PL/SQL, then the previous post contains useful information about doing bulk operations.

However, that's still the second-to-worst-case scenario (the worst being row-by-row); if you can avoid or minimize the PL/SQL processing and just do the move in SQL, that would be best.
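For reference, the bulk pattern that linked post describes looks roughly like this. It is a minimal sketch against hypothetical tables src(id NUMBER, payload VARCHAR2(100), status_cd VARCHAR2(1)) and dest(id NUMBER, payload VARCHAR2(100)), not the asker's actual schema:

DECLARE
  -- separate scalar collections sidestep the FORALL record-field restriction
  TYPE id_tab  IS TABLE OF src.id%TYPE;
  TYPE pay_tab IS TABLE OF src.payload%TYPE;
  v_ids  id_tab;
  v_pays pay_tab;
  CURSOR c IS
    SELECT id, payload FROM src WHERE status_cd IS NULL;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO v_ids, v_pays LIMIT 10000;  -- one fetch per 10,000 rows
    EXIT WHEN v_ids.COUNT = 0;

    FORALL i IN 1 .. v_ids.COUNT   -- one execution covers the whole batch
      INSERT INTO dest (id, payload) VALUES (v_ids(i), v_pays(i));

    FORALL i IN 1 .. v_ids.COUNT
      UPDATE src SET status_cd = 'L' WHERE id = v_ids(i);

    COMMIT;                        -- one commit per batch
  END LOOP;
  CLOSE c;
END;
/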

 
LVL 1

Author Comment

by:Alaska Cowboy
ID: 24364990
sundar62, thanks, but I am making as small a change as possible to the code, which inserts one record at a time. I may do bulk inserts in the future, but for now I am just considering changing the commit frequency from 100 to 10,000 and trying to gauge any impact.
 
LVL 1

Author Comment

by:Alaska Cowboy
ID: 24365038
sdstuber,

>>why are you moving from A to B?
- It's a legacy system, so I can't really change that.

>>the previous post contains useful information about doing the bulk operations.
- Yes, we have discussed that, but it is still around the corner. For now, I just want to change the commit frequency as a quick fix with potentially fair-to-moderate results.

So, yes, I am in the worst-case scenario, but I'm trying to improve it slightly with a quick fix.

This might buy me a little time, and then we will move to bulk inserts.

So . . . is there any impact I should consider in going from a commit every 100 records to a commit every 10,000?
 
LVL 74

Expert Comment

by:sdstuber
ID: 24365411
Please post the code.
 
LVL 74

Expert Comment

by:sdstuber
ID: 24365432
Decreasing the frequency of commits means you'll be consuming more rollback/undo in each transaction. If you're doing updates, you'll be locking more records and holding those locks longer.

Depending on the actions, you may consume other resources, like temp space, as well. It's hard to say without knowing what you are doing.
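For example, while the load runs, a second session with access to the dynamic performance views can watch the undo footprint of the active transactions; this is a sketch using the standard V$TRANSACTION/V$SESSION join:

-- undo consumed by each active transaction, in undo blocks and undo records
SELECT s.sid,
       s.username,
       t.used_ublk AS undo_blocks,
       t.used_urec AS undo_records
  FROM v$transaction t
  JOIN v$session     s ON s.taddr = t.addr;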
 
LVL 1

Author Comment

by:Alaska Cowboy
ID: 24367180
>>decreasing the frequency of commits means you'll be consuming more rollback/undo in each transaction.
- Did you mean increasing? I want to increase the commit freq from 100 to 10,000.

code posted . . .

  FOR cust_rec IN cust_curs(min_seqno, max_seqno)
  LOOP
-- save identifying data for err_tbl, if necessary;
    curr_seqno := cust_rec.cust_seqno;
    curr_srce_sys_cd := cust_rec.cust_srce_sys_cd;
    curr_cust_id_nbr := cust_rec.cust_id_nbr;
 
    srce_tot_in := srce_tot_in + 1;
 
-- insert into CDR (stage_final) cust table;
    INSERT INTO stage_final.cust
        (cust_seqno ,cust_id_nbr ,cust_chngd_dt ,cust_srce_sys_cd ,subr_cust_seqno ,subr_cust_id_nbr ,subr_cust_srce_sys_cd ,medl_rec_nbr
        ,cust_home_phon_nbr ,cust_home_fax_nbr ,cust_home_emal_addr ,cust_orig_efft_from_dt ,cust_contus_cov_reas_cd ,cust_contus_cov_efft_dt
        ,cust_prfx_nm ,cust_1st_nm ,cust_mid_nm ,cust_last_nm ,cust_sufx_nm ,cust_sex_cd ,cust_ssn ,cust_etncy_cd ,cust_brth_dt ,cust_dth_dt
        ,cust_marl_stus_cd ,cust_caus_of_dth_cd ,cust_pl_of_dth_cd ,cust_lang_cd ,cust_home_phon_ext_nbr ,cust_home_fax_ext_nbr)
      VALUES
        (cust_rec.cust_seqno ,cust_rec.cust_id_nbr ,cust_rec.cust_chngd_dt ,cust_rec.cust_srce_sys_cd ,cust_rec.subr_cust_seqno
        ,cust_rec.subr_cust_id_nbr ,cust_rec.subr_cust_srce_sys_cd ,cust_rec.medl_rec_nbr ,cust_rec.cust_home_phon_nbr
        ,cust_rec.cust_home_fax_nbr ,cust_rec.cust_home_emal_addr ,cust_rec.cust_orig_efft_from_dt ,cust_rec.cust_contus_cov_reas_cd
        ,cust_rec.cust_contus_cov_efft_dt ,cust_rec.cust_prfx_nm ,cust_rec.cust_1st_nm ,cust_rec.cust_mid_nm ,cust_rec.cust_last_nm
        ,cust_rec.cust_sufx_nm ,cust_rec.cust_sex_cd ,cust_rec.cust_ssn ,cust_rec.cust_etncy_cd ,cust_rec.cust_brth_dt
        ,cust_rec.cust_dth_dt ,cust_rec.cust_marl_stus_cd ,cust_rec.cust_caus_of_dth_cd ,cust_rec.cust_pl_of_dth_cd
        ,cust_rec.cust_lang_cd ,cust_rec.cust_home_phon_ext_nbr ,cust_rec.cust_home_fax_ext_nbr);
 
-- update record;
    UPDATE stg.cust
      SET status_cd = 'L'
      WHERE cust_seqno = cust_rec.cust_seqno;
 
    srce_tot_out := srce_tot_out + 1;
 
-- check to commit, -- xxx
    rec_cnt := rec_cnt + 1;
    IF rec_cnt >= c_commit_limit THEN  -- commit limit is 100; I want to increase it to 10,000
      COMMIT;
      rec_cnt := 0;
    END IF;
 
  END LOOP;

 
LVL 1

Author Comment

by:Alaska Cowboy
ID: 24367234
For the loop, it runs this cursor each time . . .

CURSOR cust_curs(in_min_seqno stage1.cust.cust_seqno%TYPE,
                 in_max_seqno stage1.cust.cust_seqno%TYPE) IS
    SELECT *
      FROM stage1.cust
      WHERE cust_seqno BETWEEN in_min_seqno AND in_max_seqno
        AND status_cd IS NULL    -- so each time this cursor is run, the record set is smaller as records are updated (100 at a time)
      ORDER BY memb_seqno;


 
LVL 74

Accepted Solution

by:
sdstuber earned 500 total points
ID: 24367433
Increasing the limit decreases the frequency of commits.
So, different words, but we meant the same thing.

However, do you really need to do this in a loop?
Could you just do one insert and one update, with a commit at the end?
INSERT INTO stage_final.cust(
                                 cust_seqno,
                                 cust_id_nbr,
                                 cust_chngd_dt,
                                 cust_srce_sys_cd,
                                 subr_cust_seqno,
                                 subr_cust_id_nbr,
                                 subr_cust_srce_sys_cd,
                                 medl_rec_nbr,
                                 cust_home_phon_nbr,
                                 cust_home_fax_nbr,
                                 cust_home_emal_addr,
                                 cust_orig_efft_from_dt,
                                 cust_contus_cov_reas_cd,
                                 cust_contus_cov_efft_dt,
                                 cust_prfx_nm,
                                 cust_1st_nm,
                                 cust_mid_nm,
                                 cust_last_nm,
                                 cust_sufx_nm,
                                 cust_sex_cd,
                                 cust_ssn,
                                 cust_etncy_cd,
                                 cust_brth_dt,
                                 cust_dth_dt,
                                 cust_marl_stus_cd,
                                 cust_caus_of_dth_cd,
                                 cust_pl_of_dth_cd,
                                 cust_lang_cd,
                                 cust_home_phon_ext_nbr,
                                 cust_home_fax_ext_nbr
           )
    (SELECT cust_seqno,
            cust_id_nbr,
            cust_chngd_dt,
            cust_srce_sys_cd,
            subr_cust_seqno,
            subr_cust_id_nbr,
            subr_cust_srce_sys_cd,
            medl_rec_nbr,
            cust_home_phon_nbr,
            cust_home_fax_nbr,
            cust_home_emal_addr,
            cust_orig_efft_from_dt,
            cust_contus_cov_reas_cd,
            cust_contus_cov_efft_dt,
            cust_prfx_nm,
            cust_1st_nm,
            cust_mid_nm,
            cust_last_nm,
            cust_sufx_nm,
            cust_sex_cd,
            cust_ssn,
            cust_etncy_cd,
            cust_brth_dt,
            cust_dth_dt,
            cust_marl_stus_cd,
            cust_caus_of_dth_cd,
            cust_pl_of_dth_cd,
            cust_lang_cd,
            cust_home_phon_ext_nbr,
            cust_home_fax_ext_nbr
       FROM stage1.cust
      WHERE cust_seqno BETWEEN in_min_seqno AND in_max_seqno AND status_cd IS NULL);
 
UPDATE stg.cust
      SET status_cd = 'L'
      WHERE cust_seqno BETWEEN in_min_seqno AND in_max_seqno 
        AND status_cd IS NULL;
 
commit;
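(A side note beyond this post: if the batch window allows it, the same INSERT ... SELECT can be made a direct-path insert by adding the APPEND hint. It loads above the high-water mark and generates minimal undo for the table data, but it takes an exclusive lock on the target table and the new rows are not queryable until commit. Only the first line changes:

INSERT /*+ APPEND */ INTO stage_final.cust(   -- the rest of the statement is unchanged
)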

 
LVL 1

Author Comment

by:Alaska Cowboy
ID: 24367994
Yes, I see what you are saying.

That's a really good point about a massive insert. But what about trapping errors? With the individual inserts, we trap errors and write error records; see the snippet.

  EXCEPTION
    WHEN dup_val_on_index THEN
      UTL_FILE.PUT_LINE(out_file, 'Duplicate record in CDR mem_memb for ODS SEQNO = ' || to_char(curr_seqno));
      
      UPDATE ods.mem_memb
        SET ods_stus_cd = 'D'
        WHERE memb_seqno = curr_seqno;
 
    WHEN no_parent_key THEN
      UTL_FILE.PUT_LINE(out_file, 'NO PARENT KEY ERROR IN CDR ' || tab_nam ||' for ODS SEQNO = ' || to_char(curr_seqno));
      UTL_FILE.PUT_LINE(out_file, sqlerrm);
      
      UPDATE ods.mem_memb
        SET ods_stus_cd = 'I'
        WHERE memb_seqno = curr_seqno;
 
    WHEN old_snapshot THEN
      UTL_FILE.PUT_LINE(out_file, 'SNAPSHOT TOO OLD ERROR IN ODS ' || tab_nam ||' FOR ODS SEQNO = ' || to_char(curr_seqno));
      UTL_FILE.PUT_LINE(out_file, sqlerrm);
      ROLLBACK;
 
    WHEN others THEN
      UTL_FILE.PUT_LINE(out_file, 'ERROR IN PROCEDURE MM_COPY FOR ODS SEQNO = ' || to_char(curr_seqno));
      UTL_FILE.PUT_LINE(out_file, sqlerrm);
      
      insert_err_tbl(curr_srce_sys_cd, curr_memb_id_nbr, curr_seqno, 'MEM_MEMB', sqlcode, '109');
      UPDATE ods.mem_memb
        SET ods_stus_cd = 'E'
        WHERE memb_seqno = curr_seqno;
 
      io_err := TRUE;

 
LVL 74

Assisted Solution

by:sdstuber
sdstuber earned 500 total points
ID: 24370309
Use automatic error logging to capture the errors.


First, create the error logs for each schema's cust table:


EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('STAGE_FINAL.CUST', 'CUST_ERRORS', 'STAGE_FINAL');

EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('STAGE1.CUST', 'CUST_ERRORS', 'STAGE1');

Then add the error logging clause to each SQL statement.


Then you can either accept the errors, commit, and log the exceptions; or roll back and log the exceptions; or whatever you want.



INSERT INTO stage_final.cust(
                                 cust_seqno,
                                 cust_id_nbr,
                                 cust_chngd_dt,
                                 cust_srce_sys_cd,
                                 subr_cust_seqno,
                                 subr_cust_id_nbr,
                                 subr_cust_srce_sys_cd,
                                 medl_rec_nbr,
                                 cust_home_phon_nbr,
                                 cust_home_fax_nbr,
                                 cust_home_emal_addr,
                                 cust_orig_efft_from_dt,
                                 cust_contus_cov_reas_cd,
                                 cust_contus_cov_efft_dt,
                                 cust_prfx_nm,
                                 cust_1st_nm,
                                 cust_mid_nm,
                                 cust_last_nm,
                                 cust_sufx_nm,
                                 cust_sex_cd,
                                 cust_ssn,
                                 cust_etncy_cd,
                                 cust_brth_dt,
                                 cust_dth_dt,
                                 cust_marl_stus_cd,
                                 cust_caus_of_dth_cd,
                                 cust_pl_of_dth_cd,
                                 cust_lang_cd,
                                 cust_home_phon_ext_nbr,
                                 cust_home_fax_ext_nbr
           )
    (SELECT cust_seqno,
            cust_id_nbr,
            cust_chngd_dt,
            cust_srce_sys_cd,
            subr_cust_seqno,
            subr_cust_id_nbr,
            subr_cust_srce_sys_cd,
            medl_rec_nbr,
            cust_home_phon_nbr,
            cust_home_fax_nbr,
            cust_home_emal_addr,
            cust_orig_efft_from_dt,
            cust_contus_cov_reas_cd,
            cust_contus_cov_efft_dt,
            cust_prfx_nm,
            cust_1st_nm,
            cust_mid_nm,
            cust_last_nm,
            cust_sufx_nm,
            cust_sex_cd,
            cust_ssn,
            cust_etncy_cd,
            cust_brth_dt,
            cust_dth_dt,
            cust_marl_stus_cd,
            cust_caus_of_dth_cd,
            cust_pl_of_dth_cd,
            cust_lang_cd,
            cust_home_phon_ext_nbr,
            cust_home_fax_ext_nbr
       FROM stage1.cust
      WHERE cust_seqno BETWEEN in_min_seqno AND in_max_seqno AND status_cd IS NULL)
LOG ERRORS INTO stage_final.cust_errors REJECT LIMIT UNLIMITED; -- without REJECT LIMIT UNLIMITED, the statement still aborts on the first error
 
UPDATE stg.cust
      SET status_cd = 'L'
      WHERE cust_seqno BETWEEN in_min_seqno AND in_max_seqno 
        AND status_cd IS NULL
LOG ERRORS INTO stage1.cust_errors REJECT LIMIT UNLIMITED;

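For completeness (this query is not in the original post): every table created by DBMS_ERRLOG gets ORA_ERR_NUMBER$, ORA_ERR_MESG$, ORA_ERR_ROWID$, ORA_ERR_OPTYP$, and ORA_ERR_TAG$ control columns, plus copies of the base table's columns, so the rejects can be reviewed after the load:

-- inspect what was rejected during the load
SELECT ora_err_number$, ora_err_mesg$, cust_seqno
  FROM stage_final.cust_errors;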
 
LVL 1

Author Comment

by:Alaska Cowboy
ID: 24497708
sdstuber,

(sorry for the delay)

Good idea on capturing the errors while using a massive insert statement. It's something that I will have to work out in the future.


 
LVL 74

Expert Comment

by:sdstuber
ID: 24497727
Yes, automatic error logging is a fantastic feature that can give you a lot of versatility in how you do your data processing and exception handling.
 
LVL 1

Author Comment

by:Alaska Cowboy
ID: 24497760
Regarding my original question about increasing the commit size from 100 to 10,000, sdstuber said:

>>decreasing the frequency of commits means you'll be consuming more rollback/undo in each transaction.

I didn't make the logic clear, but the reason I want to decrease the number of commits is that this will decrease the number of queries that occur.

With 4,000,000 rows and committing every 100 records, that's 40,000 times the cursor runs! So I was thinking a better trade-off would be committing every 10,000 records, so the cursor only runs 400 times, and the increase in overhead would be offset by the decrease in I/O from running the cursor. What do you think?
 
LVL 74

Assisted Solution

by:sdstuber
sdstuber earned 500 total points
ID: 24498220
Do you mean this cursor?

FOR cust_rec IN cust_curs(min_seqno, max_seqno)
  LOOP
...
END LOOP;


Adding commits inside that loop does NOT make the cursor get re-executed.

The cursor will run one time only, regardless of the number of commits.
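As a sketch against a hypothetical table t(id NUMBER, processed VARCHAR2(1)), this loop opens its cursor exactly once; each COMMIT ends the current transaction while the loop keeps fetching from the same open cursor:

BEGIN
  FOR r IN (SELECT id FROM t WHERE processed IS NULL ORDER BY id)
  LOOP
    UPDATE t SET processed = 'Y' WHERE id = r.id;
    COMMIT;  -- does not close, re-open, or re-execute the driving query
  END LOOP;
END;
/

The flip side of fetching across commits is the ORA-01555 "snapshot too old" risk, which is what the old_snapshot handler in the earlier snippet already guards against.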
 
LVL 74

Expert Comment

by:sdstuber
ID: 24720255
Based on the author's comment "back to my original question":

ID 24498220 answers that question; the cursor will only be queried once.

I've also provided an alternate solution to looping (ID 24367433)
and an option for error logging (ID 24370309).

So I suggest splitting points between those three posts.
