I have an Oracle 10g database (we'll call it PITB) that is a reporting database. A change data capture solution keeps the data in PITB up to date. Data is passed to PITB from an Informix database running on a 64-bit AIX RS/6000 server.
On occasion, we need to run a full ETL from the Informix database to PITB, which takes 22 hours. This ETL time will only increase as the data continues to grow.
I've been trying to think of a way to speed up the ETL, and it occurred to me that it may help to turn off logging for the specific tables being ETL'd. The thought is that if no redo log switching or archiving is happening during the ETL, it will finish sooner. Logging would then be turned back on immediately after the ETL completes.
The commands I was thinking of using are:
Turn off logging: ALTER TABLE xxxxx NOLOGGING;
(run the ETL)
Turn on logging: ALTER TABLE xxxxx LOGGING;
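To make the plan concrete, here is a sketch of the full sequence for one table (the table name xxxxx and the staging_xxxxx source are placeholders). One thing I've read but am not sure of: NOLOGGING only suppresses redo for direct-path operations, so the load itself would need to use direct-path inserts (or SQL*Loader direct path) for this to matter:

```sql
-- Before the ETL: stop generating redo for direct-path loads into this table
ALTER TABLE xxxxx NOLOGGING;

-- The load itself would need to be a direct-path insert for NOLOGGING
-- to take effect; a conventional INSERT still generates full redo.
-- (staging_xxxxx is a hypothetical source for illustration.)
INSERT /*+ APPEND */ INTO xxxxx SELECT * FROM staging_xxxxx;
COMMIT;

-- After the ETL: restore normal logging
ALTER TABLE xxxxx LOGGING;
```

After turning logging back on, the affected datafiles would presumably need a fresh backup, since changes made under NOLOGGING cannot be recovered from the archived redo.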
The PITB database also captures data changes and streams them to another Oracle database. Additionally, the PITB database uses Data Guard to populate a DR copy of itself.
I realize that by turning off table logging, I would need to re-instantiate the downstream database as well as the DR database after the ETL completes and table logging is turned back on. Although not appealing to me, this is acceptable if I'm able to make significant time gains in the ETL process.
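One thing I plan to verify first, since PITB feeds both Streams and Data Guard: whether the database is in FORCE LOGGING mode, which (as I understand it) would make table-level NOLOGGING a no-op. A quick check would be:

```sql
-- If this returns YES, ALTER TABLE ... NOLOGGING is silently ignored
-- and redo is generated regardless of the table-level setting.
SELECT force_logging FROM v$database;
```

If it does return YES, I assume I'd have to disable force logging at the database level for the duration of the ETL, which raises the stakes considerably for the standby.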
I'm concerned that I may not be considering all the pros and cons of turning off logging for the ETL'd tables.
Can anyone shoot some holes in this idea and save me from a foolish act?
Can anyone offer alternatives to speeding up the ETL process?
The host server for the PITB database is a 64-bit AIX RS/6000.
The source database is Informix, also on a 64-bit AIX RS/6000.
The ETL process is handled by iWay Data Migrator running on a 32-bit Windows 2003 server.