• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 637

Oracle Partition Exchange

Hi All,
I am working on a partition exchange, and it is taking a lot of time.

I have an archive table HOD_A on database A. This table is list-partitioned by the column period_id, and each partition has around 10-12 million records. The table is in LOGGING mode, has compressed partitions, and has about 5 global indexes; all of them are normal B-tree indexes, and one is compressed. There are no local indexes on the table.
My actual table HOD resides on database B and is non-partitioned.
Now I need to get all the records from HOD@B where period_id = 10 into the HOD_A@A table. This data will go into partition P10.
My process:
1. Created a table called TEMP_HOD_A with the same structure as HOD_A. This table is compressed, in NOLOGGING mode, and has the same indexes.
2. Inserted records into TEMP_HOD_A with the APPEND hint using bulk collect.
Code:
DECLARE
  -- %ROWTYPE of the local archive table (HOD itself is remote)
  TYPE hod_tab IS TABLE OF HOD_A%ROWTYPE;
  hod_data hod_tab;
  CURSOR c IS SELECT * FROM HOD@B WHERE period_id = 10;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO hod_data LIMIT 20000;
    -- APPEND is ignored for INSERT ... VALUES; on 11.2+ use APPEND_VALUES
    -- to get direct-path loading from a FORALL.
    FORALL i IN 1 .. hod_data.COUNT
      INSERT /*+ APPEND_VALUES */ INTO TEMP_HOD_A VALUES hod_data(i);
    COMMIT;  -- required after each direct-path insert before the next one
    EXIT WHEN c%NOTFOUND;
  END LOOP;
  CLOSE c;
END;
/
3. Gathered statistics for the temp table:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname     => '<schema>',
    tabname     => 'TEMP_HOD_A',
    cascade     => TRUE,
    granularity => 'ALL');
END;
/
4. Exchanged the partition (this step took 27 hours for 8,700,000 records):
ALTER TABLE HOD_A EXCHANGE PARTITION P10 WITH TABLE TEMP_HOD_A
  EXCLUDING INDEXES WITHOUT VALIDATION UPDATE GLOBAL INDEXES;
Please help me optimize the process. Previously this was done using Data Pump exports; I am looking into automating the process.
Thanks in advance.
Asked by: vishali_vishu
1 Solution
 
Walter Ritzel (Senior Software Engineer) commented:
This link talks about exchange partition:
http://www.akadia.com/services/ora_exchange_partition.html

Hope this helps.
 
ajexpert commented:
@wpcortes
I believe the author is well aware of partition exchange concepts. In fact, the author is using list partitioning rather than the range partitioning cited in your link.

It would be much better if we, the experts, guided the author and addressed the question more appropriately.
 
vishali_vishu (Author) commented:
@wpcortes
Thanks for posting the link. It gives basic info about partition exchange, but it doesn't go into details like performance, index handling, and compression.

 
mrjoltcola commented:
I suspect the bulk of the time is spent rebuilding the global indexes. You may need to look at whether all 5 global indexes are needed, and/or whether you can combine some of them into composite indexes. Remember, if two indexes have the same ordering of fields and share common leading columns, you can combine them.

Example:

index1(col1, col2, col3)
index2(col1)
index3(col2)

index2 is redundant with index1 (it indexes a leading prefix of index1's columns), so you can drop index2. index3 should be kept.
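To see which of the 5 global indexes overlap, one option is to list each index's columns in position order; a minimal sketch, assuming you own the table (otherwise query ALL_IND_COLUMNS) and are on 11.2+ where LISTAGG is available:

```sql
-- List every index on HOD_A with its columns in order; an index whose
-- column list is a leading prefix of another's is a drop candidate.
SELECT index_name,
       LISTAGG(column_name, ', ')
         WITHIN GROUP (ORDER BY column_position) AS cols
FROM   user_ind_columns
WHERE  table_name = 'HOD_A'
GROUP  BY index_name
ORDER  BY cols;
```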
 
vishali_vishu (Author) commented:
I suspected the same thing, that the global indexes were taking a long time, so I tried this a second time:

ALTER TABLE HOD_A EXCHANGE PARTITION P10 WITH TABLE TEMP_HOD_A WITHOUT VALIDATION;

This also took about the same amount of time.
 
mrjoltcola commented:
Your original statement included UPDATE GLOBAL INDEXES.

This latter statement omits it, but the catch is that, according to the Oracle documentation, if you have a primary key or unique constraint enabled, the exchange will still be validated. See here: http://download.oracle.com/docs/cd/B28359_01/server.111/b32024/part_admin.htm#i1107555

Specifically, the Oracle doc note (quote):

Note:
When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation will be performed as if WITH VALIDATION were specified in order to maintain the integrity of the constraints.
To avoid the overhead of this validation activity, issue the following statement for each constraint before doing the exchange partition operation:

ALTER TABLE table_name
DISABLE CONSTRAINT constraint_name KEEP INDEX
Then, enable the constraints after the exchange.

If you specify WITHOUT VALIDATION, then you have to make sure that the data to be exchanged belongs in the partition you exchange.

So it may be that your global indexes are still validating.
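Put together, the constraint-handling sequence the doc describes might look like the following sketch (the constraint name PK_HOD_A is hypothetical; look yours up in USER_CONSTRAINTS):

```sql
-- Disable the PK/unique constraint but keep its index, so the exchange
-- can stay a dictionary-only operation.
ALTER TABLE HOD_A DISABLE CONSTRAINT PK_HOD_A KEEP INDEX;

ALTER TABLE HOD_A EXCHANGE PARTITION P10 WITH TABLE TEMP_HOD_A
  EXCLUDING INDEXES WITHOUT VALIDATION;

-- Plain ENABLE re-validates every row; ENABLE NOVALIDATE skips that
-- if you already trust the exchanged data.
ALTER TABLE HOD_A ENABLE NOVALIDATE CONSTRAINT PK_HOD_A;
```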
 
vishali_vishu (Author) commented:

I removed all the indexes on the temp table, placed the database in noarchivelog mode, and did a partition exchange; it completed in 1 hour 25 minutes.

When I did the same thing after placing the database back in archivelog mode, the partition exchange has been running since 4:00 PM yesterday.

I am thinking the time consumed is due to archivelog mode.

 
mrjoltcola commented:
Good information.

When you placed it back in archivelog, did you recreate the indexes too? Are all of your indexes critical? Scrutinize them very closely to see if there is a possibility of dropping an index.

How large are your redo logs? You can get some improvement by creating larger redo logs.

Also consider if your log archive destination is on the same disk as data or redo, and if so, relocate it to a separate disk.
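Since redo logs cannot be resized in place, growing them means adding larger groups and dropping the old ones; a sketch, where the 512M size and group numbers 4-6 are illustrative assumptions, not values from this thread:

```sql
-- Add larger groups alongside the existing ones.
ALTER DATABASE ADD LOGFILE GROUP 4 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 6 SIZE 512M;

-- Switch until the old groups show STATUS = 'INACTIVE' in V$LOG,
-- then checkpoint and drop them.
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
ALTER DATABASE DROP LOGFILE GROUP 1;
ALTER DATABASE DROP LOGFILE GROUP 2;
ALTER DATABASE DROP LOGFILE GROUP 3;
```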
0
 
ajexpert commented:
I am not too sure, but can you try replacing the temp table with a Global Temporary Table and see if you get a further performance improvement?
 
vishali_vishu (Author) commented:
@mrjoltcola

I did not recreate the indexes when I placed the database back in archivelog mode. All the global indexes on the actual table are critical.
All the logs are created on a separate drive on the same disk (data files are located on the D drive and the archive log destination is on the E drive).

Redo log file size:

select l.bytes, g.member
from v$log l, v$logfile g
where l.group# = g.group#;

There are 3 files of 52,428,800 bytes (50 MB) each.

Additional Info:

The actual table has about 17 partitions, each with an average of 10-12 million records, and the size of the table is 30 GB. The table is used as a read-only table, and once a year 4 partitions are added to it.

Previously, data was added to these tables using Data Pump exports.
 
vishali_vishu (Author) commented:
Disabling all constraints (keeping the indexes) before the partition exchange worked. It completed in 1 hour 30 minutes for 10 million rows.
