I am working on a partition exchange process, and the exchange step is taking a very long time.
I have an archive table HOD_A on database A. This table is list-partitioned on column period_id, and each partition has around 10-12 million records. The table is in logging mode with compressed partitions, and it has about 5 global indexes; all of them are normal (B-tree) indexes, and one of them is compressed. There are no local indexes on the table.
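For context, the table's shape is roughly as follows (the column names other than period_id are placeholders, not the real ones):

CREATE TABLE HOD_A (
  period_id  NUMBER,
  ...  -- remaining columns omitted
)
PARTITION BY LIST (period_id) (
  PARTITION P10 VALUES (10) COMPRESS,
  ...  -- one partition per period_id
);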
My actual table HOD resides on database B and is non-partitioned.
Now, I need to get all the records from HOD@B where period_id=10 into the HOD_A table on database A. This data will go into partition P10.
1. Created a table called TEMP_HOD_A with the same structure as HOD_A. This table is compressed, in NOLOGGING mode, and has the same indexes.
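Roughly like this (a sketch; the index DDL is omitted, and the actual storage clauses may differ):

CREATE TABLE TEMP_HOD_A COMPRESS NOLOGGING
AS SELECT * FROM HOD_A WHERE 1 = 0;
-- then create the same 5 indexes on TEMP_HOD_A as on HOD_A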
2. Inserted records into TEMP_HOD_A in batches using BULK COLLECT with a direct-path insert hint:
TYPE ARRAY_T IS TABLE OF TEMP_HOD_A%ROWTYPE;
HOD_DATA ARRAY_T;
CURSOR C IS SELECT * FROM HOD@B WHERE PERIOD_ID = 10;
...
OPEN C;
LOOP
  FETCH C BULK COLLECT INTO HOD_DATA LIMIT 20000;
  EXIT WHEN HOD_DATA.COUNT = 0;  -- check the collection, not C%NOTFOUND, so the last partial batch is not lost
  FORALL I IN 1 .. HOD_DATA.COUNT
    INSERT /*+ APPEND_VALUES */ INTO TEMP_HOD_A VALUES HOD_DATA(I);  -- APPEND_VALUES (11gR2+) applies to FORALL; plain APPEND is ignored here
  COMMIT;  -- a direct-path insert must be committed before the next insert into the same table
END LOOP;
CLOSE C;
3. Gathered statistics for the temp table:
DBMS_STATS.GATHER_TABLE_STATS('<schema>', 'TEMP_HOD_A', CASCADE => TRUE, GRANULARITY => 'ALL');
4. Exchanged the partition (this step took 27 hours for 8,700,000 records):
ALTER TABLE HOD_A EXCHANGE PARTITION P10 WITH TABLE TEMP_HOD_A EXCLUDING INDEXES WITHOUT VALIDATION UPDATE GLOBAL INDEXES;
Please help me optimize this process. Previously this was done using Data Pump exports; I am now looking into automating the process.
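For the automation, I am considering wrapping the steps above in a procedure along these lines (load_period is my own placeholder name; this is an untested sketch that assumes partition names follow the P<period_id> pattern):

CREATE OR REPLACE PROCEDURE load_period(p_period_id IN NUMBER) IS
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE TEMP_HOD_A';
  -- bulk-load from HOD@B filtered on p_period_id, as in step 2
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'TEMP_HOD_A', CASCADE => TRUE, GRANULARITY => 'ALL');
  EXECUTE IMMEDIATE
    'ALTER TABLE HOD_A EXCHANGE PARTITION P' || p_period_id ||
    ' WITH TABLE TEMP_HOD_A EXCLUDING INDEXES WITHOUT VALIDATION UPDATE GLOBAL INDEXES';
END;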
Thanks in advance.