surfsideinternet

asked on

Oracle 11g Flash Recovery and archive logs - understanding

We are playing/testing with an Oracle 11g database on an Amazon AWS large instance. We made the change to put archive logs under /u02/flash_recovery_area, and the DB does put them there.

They fill up very, very quickly, making the database worthless.

Example: a new database with 1 table (4 columns: char(4), date, number, number). Nothing else is happening with the database. We are importing data into the table using sqlldr. The CSV file is 950 MB and 35 million lines.

The flash area was originally 2 GB; then the process stopped because we didn't have room to rotate, so we increased the space to 3 GB and the process started again. Then the same thing happened, so we increased the area to 4 GB. Same thing again. Finally we ran RMAN and did a "delete archivelog all", which would do it: the space under flash_recovery_area/ would shrink to 34 KB before eventually maxing out again.
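(For reference, the cleanup was run from RMAN roughly like this; RMAN asks for confirmation before deleting:)

$ rman target /
RMAN> delete archivelog all;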

So the questions:

1) Why would a simple data import of < 1 GB result in the archive log area filling up with more than 9 GB of logs? A simple 1 GB data import took over a day because it would pause until we cleared the space.

2) How do we ensure a FULL flash_recovery_area never hinders database operations (and I am not talking about assigning more locations)?

3) Can we just turn off archiving? Is the only thing we lose "flashback recovery", so that if we had to restore a backup we would only have data up to the backup and nothing beyond? Some references say that archiving is needed if the database crashes (i.e. loses power) to ensure that it can restart properly (not to restore previous changes, but for consistency at the point it crashed).
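(For context, if we did turn off archiving, I assume it would be done along these lines as SYSDBA, with ARCHIVE LOG LIST confirming the mode afterwards; this is just a sketch, we have not actually run it:)

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> ALTER DATABASE OPEN;
SQL> ARCHIVE LOG LIST;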


ASKER CERTIFIED SOLUTION
mrjoltcola

surfsideinternet

ASKER

mrjoltcola: Thanks for the information. Can you explain why the 1 GB of data created 9 GB of archive logs? I thought Oracle logged changed blocks of data, or is Oracle creating a log of equivalent statements (i.e. "insert into table x values (a,b,c,d)" for each line)?

I don't understand why the 1 GB of data would expand to 9 GB of logs (we don't have any indexes, so all that is changing is literally the rows being added with the data) ...
SOLUTION
>>Can you explain why the 1 GB of data created 9 GB of archive logs?

The transaction log does include the information to rerun the transaction, so there is overhead. Part of the reason you see a 1-to-9 "ratio" is that your records are so small. There is probably a good bit of fixed overhead in each transaction log record, so with such a small record it seems like a large ratio; if your table had, say, 100 columns, you'd probably see a drastically smaller ratio.
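If you want to see the overhead for yourself, one rough way (a generic sketch, not specific to your setup) is to check the system-wide "redo size" statistic before and after the load and compare the two values; since sqlldr runs in its own session, the system-wide view is the simplest place to look:

-- total redo bytes generated since instance startup
SELECT name, value
  FROM v$sysstat
 WHERE name = 'redo size';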

To alleviate much of the redo / transaction log, use a direct path load with sqlldr, which skips much of the redo. Use "direct=true".

http://www.orafaq.com/wiki/SQL*Loader_FAQ

Also, the load process will be drastically faster. I would prefer this method over disabling archiving. Sorry I did not think to mention it before.
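For illustration, a minimal direct path setup might look like the following. The control file name, table name, column names and date mask are made up to match the 4-column table you described, so adjust them (and the credentials and INFILE path) to your actual layout:

-- load.ctl (illustrative only)
LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE my_table
FIELDS TERMINATED BY ','
( code      CHAR(4),
  load_date DATE "YYYY-MM-DD",
  val1,
  val2
)

Then invoke SQL*Loader with the direct path option:

sqlldr userid=user/password control=load.ctl direct=true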