Solved

Oracle 11g Flash Recovery Area and Archive Logs - understanding

Posted on 2009-12-16
Medium Priority
1,208 Views
Last Modified: 2014-11-12
We are playing/testing with Oracle 11g on an Amazon AWS large instance. We made the change to put archive logs in /u02/flash_recovery_area, and the DB does put them there.

They fill up very, very quickly, making the database unusable.

Example: a new database with one table (4 columns: char(4), date, number, number). Nothing else is happening with the database. We are importing data into the table using sqlldr. The CSV file is 950 MB and 35 million lines.

The flash recovery area was originally 2 GB; then the process stopped because we didn't have room to rotate, so we increased the space to 3 GB and the process started again. Then the same thing happened, so we increased the area to 4 GB. Same thing. Finally we ran RMAN and did "delete archivelog all", which would do it: the space under flash_recovery_area/ would shrink to 34 KB before eventually maxing out again.

So the questions:

1) Why would a simple data import of < 1 GB result in the archive logs filling up more than 9 GB? A simple 1 GB data import took over a day because it would pause until we cleared the space.

2) How do we ensure a FULL flash_recovery_area never hinders database operations (and I am not talking about assigning more locations)?

3) Can we just turn off archiving? Is the only thing we lose "flashback recovery", so that if we had to restore a backup we would only have data up to the backup, not beyond? Some references say archiving is needed if the database crashes (i.e. loses power) to ensure that it can restart properly (not restoring previous changes, but for consistency at the point it crashed).


Question by:surfsideinternet
4 Comments

LVL 40
Accepted Solution

by:
mrjoltcola earned 800 total points
ID: 26067698
A 4GB flash recovery area is woefully small, considering you have a table that will be > 1GB. The purpose of the flash recovery area is to hold backups, and other recovery related files. Typically, unless you are running a data warehouse with secondary storage, your FRA is where your RMAN backup will go, so the FRA should be 2-3x the size of your total datafile footprint, plus large enough for a few days worth of archive logs. I actually size mine to hold 4-5 full backups plus 30 days worth of archive logs. For a 10-20 GB database, I allocate 200GB+ for FRA.
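To see how full the FRA actually is, and what is consuming the space, you can query the standard 11g recovery-area views; a sketch:

```sql
-- Overall FRA quota, usage, and how much is reclaimable
SELECT name,
       space_limit       / 1024 / 1024 AS limit_mb,
       space_used        / 1024 / 1024 AS used_mb,
       space_reclaimable / 1024 / 1024 AS reclaimable_mb
FROM   v$recovery_file_dest;

-- Breakdown by file type (archived logs, backup pieces, flashback logs, ...)
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$flash_recovery_area_usage;
```

A high percent_space_reclaimable means RMAN can age files out automatically; a high percent_space_used with nothing reclaimable is what stalls the database.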

You can turn off archiving during data loads, then re-enable it after the loads are done.
You need to shut down the instance, startup mount, then run "alter database noarchivelog;" from SQL*Plus.
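As a sketch, the round trip in a SQL*Plus session (connected AS SYSDBA) looks like:

```sql
-- Disable archiving before the load (requires a clean shutdown)
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- ... run the load ...

-- Re-enable archiving afterwards, then take a full backup immediately
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```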

If your database has only one table, or isn't going to hold data of a nature that cannot be easily reloaded, you don't need archiving at all. But otherwise, if your database contains critical data, you should not disable archiving, except as mentioned in (2) for data loads. After the load, always reenable it, and perform a full backup. Without archiving, you cannot perform hot backups, so you'll be limited to either cold backups or exports. Recovery of a database in noarchivelog requires you to recover back to the last good backup. No point in time recovery or full recovery is possible.

I consider your storage inadequate for an archiving Oracle database. I think it is simply a misunderstanding of how much storage you should allocate for your Oracle database. If 4GB is all you have for archiving / FRA, don't even consider running Oracle on the box, or plan to run without archiving and accept the risks that entails.


>>Some references say archiving is needed if the database crashes (i.e. loses power) to ensure that it can restart properly (not restoring previous changes, but for consistency when it crashed).

Oracle will not use the archive logs during crash recovery, it uses the online redo logs.


Author Comment

by:surfsideinternet
ID: 26067768
mrjoltcola: Thanks for the information. Can you explain why the 1 GB of data created 9 GB of archive logs? I thought Oracle logged changed blocks of data; or is Oracle creating a log of equivalent statements (i.e. insert into table x values (a,b,c,d)) for each line?

I don't understand why the 1 GB of data would expand to 9 GB of logs (we don't have any indexes, so all that is changing is literally rows being added with the data) ...
LVL 3

Assisted Solution

by:logictank
logictank earned 200 total points
ID: 26067795
Most points have been covered, but for the following:

>>2) How do we ensure a FULL flash_recovery_area never hinders database operations (and I am not talking about assigning more locations).

Set a retention policy with RMAN ("configure retention policy") so files in the FRA become eligible for deletion. Also set the init parameter DB_RECOVERY_FILE_DEST_SIZE 5-10% below your total physical storage size, so files in the FRA get cleaned up before you hit the physical storage limit.
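As a sketch, the two settings look like this (the recovery window and the 9G figure are just examples; size them against your own volume):

```
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;

SQL> -- e.g. ~90% of a 10 GB volume
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 9G SCOPE=BOTH;
```

With a retention policy in place, Oracle can delete obsolete files from the FRA on its own when space pressure builds, instead of hanging the database.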
LVL 40

Expert Comment

by:mrjoltcola
ID: 26068632
>>Can you explain why the 1gb of data created 9 gb of archivelogs?

The transaction log does include the information to rerun the transaction, so there is overhead. Part of the reason you see a 1 to 9 "ratio" is that your records are so small. There is probably a good bit of the transaction log that is fixed overhead, so with such a small record, it seems like a large ratio, but if your table had say 100 columns, you'd probably see a drastically smaller ratio.
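You can measure this ratio yourself: the "redo size" session statistic reports how many bytes of redo your session has generated. Snapshot it before and after a load and take the delta (standard v$ views):

```sql
-- Bytes of redo generated by the current session so far
SELECT n.name, s.value AS redo_bytes
FROM   v$mystat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';
```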

To avoid generating most of the redo / transaction log, use a direct-path load with sqlldr. Use "direct=true".
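A sketch of the invocation (the user, control file, and data file names here are hypothetical placeholders):

```shell
# direct=true switches sqlldr from conventional inserts to a direct-path load
sqlldr userid=scott/tiger control=load_table.ctl data=table.csv direct=true
```

Note that a direct-path load into a NOLOGGING table generates minimal redo, but data loaded that way is not recoverable from the archive logs, so take a backup after the load.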

http://www.orafaq.com/wiki/SQL*Loader_FAQ

The load process will also be drastically faster. I would prefer this method over disabling archiving. Sorry I did not think to mention it before.

