hc2342uhxx3vw36x96hq asked:
ORA-01555 caused by SQL statement below "SELECT .... FROM ...."

Please suppose that the cursor read_data_to_copy works on OVER 2 million records.
Often, but not always, I get the following error:
ORA-01555 caused by SQL statement below
SELECT alpha, beta, gamma FROM source_table_1;
Please help me optimize the procedure in order to avoid that Oracle error.
PROCEDURE replicate_table
IS
   auxiliary_variable      NUMBER;
   count_dest_table        NUMBER;
 
   CURSOR read_data_to_copy
   IS
      SELECT alpha, beta, gamma
        FROM source_table_1;
 
   read_data_to_copy_rec   read_data_to_copy%ROWTYPE;
BEGIN
   count_dest_table := 0;
 
   FOR read_data_to_copy_rec IN read_data_to_copy
   LOOP
      BEGIN
         auxiliary_variable := NULL;
 
         -- look up the matching row in source_table_2 (x to zero or one)
         SELECT alpha
           INTO auxiliary_variable
           FROM source_table_2
          WHERE epsilon = read_data_to_copy_rec.alpha;
 
         INSERT INTO destination_table
              VALUES (auxiliary_variable, read_data_to_copy_rec.beta,
                      read_data_to_copy_rec.gamma);
 
         count_dest_table := count_dest_table + 1;
         COMMIT;   -- commits while the cursor is still open (fetch across commit)
      EXCEPTION
         WHEN OTHERS
         THEN
            -- any error (e.g. NO_DATA_FOUND) rolls back the row silently
            ROLLBACK;
      END;
   END LOOP;
END replicate_table;


Guy Hengel [angelIII / a3]:
why not make it into something like this:
INSERT /*+APPEND*/ INTO destination_table
SELECT t2.alpha, t1.beta, t1.gamma
FROM source_table_1 t1
LEFT JOIN source_table_2 t2
  ON t2.epsilon = t1.alpha
; 
or, if the join to source_table_2 is x-to-many:

INSERT /*+APPEND*/ INTO destination_table
SELECT (select t2.alpha from source_table_2 t2 where t2.epsilon = t1.alpha and rownum <= 1 )
  , t1.beta, t1.gamma
FROM source_table_1 t1
;
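
(Note on the hint: /*+APPEND*/ requests a direct-path insert, which generates minimal undo for the inserted table data; the one catch is that the session must COMMIT before it can query destination_table again.)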


ASKER:
The join to source_table_2 is x-to-zero-or-one.

I also need to TRACK all the x-to-zero cases in a log table, but that is another story.
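(For that other story, a minimal one-pass sketch, assuming a hypothetical unmatched_log(alpha, beta) table; the NOT EXISTS marks the x-to-zero cases:)

INSERT INTO unmatched_log (alpha, beta)
SELECT t1.alpha, t1.beta
  FROM source_table_1 t1
 WHERE NOT EXISTS (SELECT 1
                     FROM source_table_2 t2
                    WHERE t2.epsilon = t1.alpha);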
Your rollback segment could be too small to hold your data. Try altering it with this command:

ALTER ROLLBACK SEGMENT "XXX" STORAGE (OPTIMAL NNNK);
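(To see what you currently have, a diagnostic sketch, assuming manual undo management and DBA access:)

SELECT segment_name, tablespace_name, status
  FROM dba_rollback_segs;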
Please suppose that I have an aggressive DBA who won't change any rollback segment :-)
Quote:

"... Please suppose that the cursor read_data_to_copy work on OVER 2 million records.
Often, but not always, I obtain the following error: ..."

this is one of the symptoms of a rollback segment that is too small for the operation.
If your DBA doesn't want to change the rollback segment, then you need to limit your query to a certain number of records at a time.
Also, tell your DBA that you are not asking him a favour; resolving rollback segment errors is his job.

I have seen those kinds of DBAs in my experience as well... some just do not have the guts to touch anything in production for lack of experience, and some are just not interested in doing what the application teams need.
I need suggestions on how to reduce rollback segment utilization without changing the rollback segment size.
Well, I would then try the LEFT JOIN syntax suggested.
Of course, you might be missing an index, because the FIRST reason for "rollback segment too small" is usually NOT the rollback segment being too small, but the transaction taking too much time...
Roughly: the rollback segment cannot hold all the data you need for read consistency, because all the other concurrent transactions have "eaten up" the rollback segments.

In my databases I sometimes see this after 30-60 minutes... and a query that takes that long is too long, and must be changed.
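For example, if source_table_2.epsilon is not indexed, each per-row lookup in the original loop is a full table scan, which stretches the transaction; a sketch (the index name is illustrative):

CREATE INDEX source_table_2_epsilon_ix ON source_table_2 (epsilon);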
I am thinking of opening the cursor many times, splitting source_table_1 by ROWID.
For example, I first process records 1 to 100000, then records 100001 to 200000, and so on, in order to close the cursor more frequently and reduce the space occupied in the rollback segment. With about 4000000 records, I am thinking of opening the same cursor about 40 times, fetching 100000 records per iteration.
Do you think it is a good idea?
Could you help me rewrite my procedure, splitting source_table_1 by ROWID and opening the same cursor many times with a fixed number of fetched rows?
The main problem is that you need some way to identify the records to be fetched on each iteration.
In short: do you have some "key" that can help identify which rows have already been processed?
Columns ALPHA and BETA compose the PRIMARY KEY.
But I am thinking of proceeding by ROWID; do you think that's right?
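(A minimal sketch of the batch idea using the (alpha, beta) primary key instead of ROWID, so each batch reopens the cursor where the previous one stopped; the batch size, names and restart predicate are illustrative, not the certified solution below:)

PROCEDURE replicate_table_batched
IS
   c_batch_size   CONSTANT PLS_INTEGER := 100000;
   last_alpha     source_table_1.alpha%TYPE;
   last_beta      source_table_1.beta%TYPE;
   rows_in_batch  PLS_INTEGER;
   aux            source_table_2.alpha%TYPE;
BEGIN
   LOOP
      rows_in_batch := 0;

      -- reopen the cursor for every batch, restarting after the last key;
      -- the cursor stays open for at most c_batch_size fetches
      FOR r IN (SELECT alpha, beta, gamma
                  FROM source_table_1
                 WHERE last_alpha IS NULL
                    OR alpha > last_alpha
                    OR (alpha = last_alpha AND beta > last_beta)
                 ORDER BY alpha, beta)
      LOOP
         aux := NULL;
         BEGIN
            SELECT alpha
              INTO aux
              FROM source_table_2
             WHERE epsilon = r.alpha;
         EXCEPTION
            WHEN NO_DATA_FOUND THEN
               NULL;   -- x-to-zero case: aux stays NULL
         END;

         INSERT INTO destination_table
              VALUES (aux, r.beta, r.gamma);

         last_alpha    := r.alpha;
         last_beta     := r.beta;
         rows_in_batch := rows_in_batch + 1;
         EXIT WHEN rows_in_batch >= c_batch_size;   -- closes the cursor
      END LOOP;

      COMMIT;   -- no cursor is open here, so no fetch across commit
      EXIT WHEN rows_in_batch < c_batch_size;       -- last batch was partial
   END LOOP;
END replicate_table_batched;

Each reopened cursor then only needs read consistency for its own batch, which is what keeps the undo demand small.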
ASKER CERTIFIED SOLUTION by Guy Hengel [angelIII / a3]
If alpha and beta are covered by a "two-column" index, will a WHERE condition on alpha || beta use that composite index?
>If alpha and beta are covered by a "two-column" index, will a WHERE condition on alpha || beta use that composite index?
No, unfortunately not.
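(To illustrate, with assumed predicates: the concatenation turns the columns into an expression the composite index does not cover, while separate predicates can use it:)

-- cannot use a composite index on (alpha, beta):
SELECT gamma FROM source_table_1 WHERE alpha || beta = :v;

-- can use the composite index:
SELECT gamma FROM source_table_1 WHERE alpha = :a AND beta = :b;

(Only a function-based index on alpha || beta would serve the first form.)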
Thanks ;-)