Changing Primary key on a big table
Posted on 2006-07-18
Hi experts, I've been using mostly SQL Server for the last 5 years, so my Oracle skills are a little rusty. I need your advice on the following:
On an Oracle 9i data warehouse DB, a table was wrongly designed and I'm asked to change the primary key to avoid dupes.
Let's call it STAR12, and assume the current primary key is on
(ID_MONTH int, ID_DWH int) where ID_DWH is a sequence number that resets for each new ID_MONTH
I had to alter the table to add a new field ID_GRP_RISK
And I need to
- populate the new field (I've done this part)
- delete the records where the new field is null (done it too)
- set a not null constraint on the new field
- drop the current primary key
- delete the duplicate records
- create the new primary key on (ID_MONTH, ID_GRP_RISK)
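For reference, the remaining steps above might be sketched like this (constraint and table names are illustrative; check USER_CONSTRAINTS for the real name of the old PK):

```sql
-- Step: set a NOT NULL constraint on the new field.
ALTER TABLE star12 MODIFY (id_grp_risk NOT NULL);

-- Step: drop the current primary key; DROP INDEX also removes
-- the index that was backing the constraint.
ALTER TABLE star12 DROP PRIMARY KEY DROP INDEX;

-- ... delete the duplicate rows here ...

-- Step: create the new primary key.
ALTER TABLE star12
  ADD CONSTRAINT pk_star12 PRIMARY KEY (id_month, id_grp_risk);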
The requirements:
I've been told that applying the changes to the historical data is a requirement. Dropping the table and running the loading script to populate a brand new table for each and every month is not an option.
My current concerns:
I was wondering whether it would be better to keep the current index and just create a new unique index on (ID_MONTH, ID_GRP_RISK). Would you experts have better ideas or advice for handling this situation?
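Keeping the old PK and only adding a unique index is certainly doable; a sketch (index name illustrative):

```sql
-- A unique index alone enforces uniqueness, but it is not a
-- declared PRIMARY KEY: it does not imply NOT NULL and it will not
-- be recognized as the key by foreign keys or query tools.
CREATE UNIQUE INDEX ux_star12_month_grp
  ON star12 (id_month, id_grp_risk);
```

One caveat to weigh: if anything downstream (foreign keys, metadata-driven tools, loaders) relies on the table's declared primary key, the unique index alone will not satisfy it, so replacing the constraint itself may still be necessary.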
I was also wondering whether it would be more efficient to:
- create a non-unique index first, to speed up the deleting of the dupes (records that have the same ID_MONTH, ID_GRP_RISK)
- delete the dupes
- make it the primary key afterwards (is that even possible?)
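On that last question: yes, Oracle (since 9i) lets a constraint reuse an existing index via the USING INDEX clause, and the index may even be non-unique, because the constraint itself enforces uniqueness. A sketch of the three steps above, with illustrative names:

```sql
-- Non-unique index first, so finding and deleting the dupes
-- does not require repeated full table scans.
CREATE INDEX ix_star12_month_grp ON star12 (id_month, id_grp_risk);

-- ... delete the duplicates ...

-- Promote: the constraint reuses the existing index instead of
-- building a fresh one, saving an index build on ~20M rows.
ALTER TABLE star12
  ADD CONSTRAINT pk_star12 PRIMARY KEY (id_month, id_grp_risk)
  USING INDEX ix_star12_month_grp;
```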
The table has about 1 million records per ID_MONTH, with 20 months currently loaded.
To avoid filling the temp tablespaces and/or swapping forever, I loop over ID_MONTH with a cursor and put an intermediate COMMIT after each month is processed.
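A month-by-month dedupe with intermediate commits might look like the following PL/SQL block. Note the MIN(ROWID) rule keeps an arbitrary survivor in each duplicate group; adjust the subquery if a specific row must win:

```sql
BEGIN
  FOR m IN (SELECT DISTINCT id_month FROM star12 ORDER BY id_month) LOOP
    -- Within one month, delete every row that is not the
    -- first (lowest ROWID) of its (id_month, id_grp_risk) group.
    DELETE FROM star12 a
     WHERE a.id_month = m.id_month
       AND a.ROWID > (SELECT MIN(b.ROWID)
                        FROM star12 b
                       WHERE b.id_month    = a.id_month
                         AND b.id_grp_risk = a.id_grp_risk);
    COMMIT;  -- bound undo/rollback usage to one month's worth of deletes
  END LOOP;
END;
/
```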
Any ideas/sample code (more efficient syntax) welcome, in particular.
I will create another topic for the "delete dupes" part if need be.
Thanks for reading