Inserting records into a table makes UNDOTBS1 very big!

Hello Experts!

I've got two tables in Oracle 10g (10.1):

TableA
16,594,824 records

TableB
0 records, same structure as TableA

I don't have much free space on my disk, so I need to keep tablespace UNDOTBS1 as small as possible.

When I insert the 16 million rows of TableA into TableB like this:

INSERT INTO TableB (select * from TableA);

then UNDOTBS1.DBF gets huge, around 7 GB.

I set UNDO_RETENTION like this:

ALTER SYSTEM SET UNDO_RETENTION = 5;

I thought that by doing this, UNDOTBS1.DBF wouldn't grow so big.

My questions are:

1. Is the size of UNDOTBS1.DBF affected by whether I run "ALTER SYSTEM SET UNDO_RETENTION = 5" or "ALTER SYSTEM SET UNDO_RETENTION = 900"?

2. Is there any way to keep UNDOTBS1 and UNDOTBS1.DBF from growing so much? I know how to shrink UNDOTBS1.DBF after all transactions are committed, but I don't know how to prevent the growth in the first place!
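
For a one-shot copy like this, a direct-path insert sidesteps most of the undo in the first place, because the rows are written above the table's high-water mark. A minimal sketch, assuming TableB has no enabled triggers or foreign-key constraints (either of those silently turns it back into a conventional, undo-heavy insert):

-- Direct-path insert: generates almost no undo for the table data itself
-- (indexes on TableB, if any, still generate undo as usual).
INSERT /*+ APPEND */ INTO TableB
SELECT * FROM TableA;
COMMIT;  -- required before this session can query TableB again (ORA-12838 otherwise)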
schwertner commented:
Setting
ALTER SYSTEM SET UNDO_RETENTION = 5;
keeps committed undo data for only 5 seconds (the unit is seconds, not minutes), so you should expect the undo space to become reusable about 5 seconds after the transaction ends.
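
If you want to watch that happen, the state of the undo extents can be checked from the DBA views, assuming you have access to them:

-- Undo space by state: ACTIVE = open transactions, UNEXPIRED = committed
-- but still inside UNDO_RETENTION, EXPIRED = free to be overwritten.
SELECT status, ROUND(SUM(bytes) / 1024 / 1024) AS mb
FROM   dba_undo_extents
GROUP  BY status;

Note that EXPIRED extents are reused inside the file; UNDOTBS1.DBF itself never shrinks back on its own.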
 
miyahira (Author) commented:
> you should expect the undo space to become reusable about 5 seconds after the transaction ends.

It didn't get any smaller. On the contrary, UNDOTBS1.DBF kept growing, up to 7 GB, until "INSERT INTO TableB (select * from TableA);" finished.
 
schwertner commented:
Yes, because all of that growth happens inside one open transaction. The database must keep every block of undo until you issue COMMIT or ROLLBACK, since until then it may still need it to undo the insert. Write ROLLBACK; (or COMMIT;) and the space becomes reusable.
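
While the INSERT is still uncommitted, you can see that pending undo from another session; a quick check using the standard dynamic views:

-- Undo blocks held by each open transaction; this stays nonzero
-- (and keeps growing) until the session commits or rolls back.
SELECT s.sid, s.username, t.used_ublk
FROM   v$transaction t
JOIN   v$session s ON s.taddr = t.addr;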

 
schwertner commented:
So the retention time is counted from the COMMIT onwards. Before the COMMIT a ROLLBACK is still possible, so none of the undo entries can be freed.
 
miyahira (Author) commented:
OK, I got it!

Would you mind suggesting how to COMMIT periodically while running this command:

INSERT INTO TableB (select * from TableA);

Should it be done with a CURSOR in a stored procedure? I wouldn't like to load 16 million records into memory.

Many thanks.
 
miyahira (Author) commented:
I mean: how do I COMMIT every so often (in batches) while running this command:
INSERT INTO TableB (select * from TableA);
 
dbmullen commented:
A few options:

1) Use the SQL*Plus COPY command and adjust COPYCOMMIT and ARRAYSIZE until you no longer have UNDO issues; see the snippet below.

2) Why insert the data twice at all?
   drop table TableB;
   create or replace view TableB as select * from TableA;

set arraysize 100
set copycommit 50
truncate table TableB;
COPY FROM username/password@source_db TO username/password@target_db INSERT TableB USING SELECT * FROM TableA;
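
With ARRAYSIZE 100 and COPYCOMMIT 50, SQL*Plus commits after every 50 batches of 100 rows, i.e. roughly every 5,000 rows, so only a few thousand rows' worth of undo is active at any moment. (COPYCOMMIT 0 would commit only once, at the very end, which is exactly the behaviour causing the problem here.) The 100/50 values are just a starting point; tune them as dbmullen says.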

 
schwertner commented:
The SELECT statement does not load all the records into memory at once; rows are fetched as they are needed, so do not worry about that. dbmullen's suggestion is OK! You can also use a stored procedure with a cursor inside. That has some advantages, such as the possibility to edit the records on the fly, and it gives you very good control over the flow.
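A minimal sketch of that stored-procedure approach, assuming a batch size of 10,000 rows is acceptable; BULK COLLECT ... LIMIT keeps only one batch in memory at a time, and the COMMIT after each batch keeps the amount of active undo bounded:

DECLARE
    CURSOR c_src IS
        SELECT * FROM TableA;
    TYPE t_rows IS TABLE OF TableA%ROWTYPE;
    l_rows t_rows;
BEGIN
    OPEN c_src;
    LOOP
        -- fetch at most 10,000 rows into memory per pass (assumed batch size)
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
            INSERT INTO TableB VALUES l_rows(i);
        COMMIT;  -- lets the undo for this batch expire and be reused
    END LOOP;
    CLOSE c_src;
END;
/

One caveat: once you commit mid-copy, the still-open cursor on TableA depends on old undo for read consistency, so a very low UNDO_RETENTION can make it fail with ORA-01555 (snapshot too old). A retention of a few hundred seconds is a safer companion to this pattern than UNDO_RETENTION = 5.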