xoxomos
asked on
Unable To Create Archive Log File
Almost 300 GB is available at each of the destinations /archlog1 and /archlog2:
netapp03:/bbproddb1_archlog1   300G   32G  269G  11%  /archlog1
netapp04:/bbproddb1_rman       1.2T  549G  578G  49%  /rman
netapp03:/bbproddb1_archlog2   300G   32G  269G  11%  /archlog2
but still:
Unable to create archive log file '/archlog2/BB60/1_317237_786457154.arc'
ARC3: Error 19504 Creating archive log file to '/archlog2/BB60/1_317237_786457154.arc'
Sat Sep 19 05:08:20 2015
Unable to create archive log file '/archlog1/BB60/1_317234_786457154.arc'
Sat Sep 19 05:08:20 2015
Unable to create archive log file '/archlog1/BB60/1_317236_786457154.arc'
ARCb: Error 19504 Creating archive log file to '/archlog1/BB60/1_317234_786457154.arc'
ARC1: Error 19504 Creating archive log file to '/archlog1/BB60/1_317236_786457154.arc'
Sat Sep 19 05:08:20 2015
Unable to create archive log file '/archlog1/BB60/1_317235_786457154.arc'
ARC0: Error 19504 Creating archive log file to '/archlog1/BB60/1_317235_786457154.arc'
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance BB60 - Archival Error
ORA-16038: log 3 sequence# 317234 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 3 thread 1: '/redo1/BB60/redo03a.log'
ORA-00312: online log 3 thread 1: '/redo2/BB60/redo03b.log'
Sat Sep 19 05:08:20 2015
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance BB60 - Archival Error
Any ideas why?
Those messages appear to be in the alert log. There should be at least one trace file for the archiver which should have additional information.
It also appears that you are writing to networked storage; did you lose connectivity?
remove archived redo logs that are no longer needed
or
increase the value of the db_recovery_file_dest_size spfile parameter.
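As a sketch only (the size value and the retention decision are illustrative assumptions, not recommendations for this system), the two options could look like:

```sql
-- Option 1 (from the RMAN prompt): remove archived logs that the
-- configured retention policy no longer needs
DELETE NOPROMPT OBSOLETE;

-- Option 2 (from SQL*Plus, as SYSDBA): raise the recovery-area quota;
-- 1000G is an illustrative value, not a recommendation
ALTER SYSTEM SET db_recovery_file_dest_size = 1000G SCOPE=BOTH;
```

Check your retention and disaster-recovery requirements before deleting anything.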
ASKER
Thanks. There were several hundred gigabytes available at log_archive_dest_1 and log_archive_dest_2. How would removing archived redo logs change the situation?
Ah yes, I'll look for the trace file. Right now it does appear the network is at least part of the problem.
>>How would removing archived redo logs change the situation?
You need to "properly" remove them, not just delete them from the file system.
It would help because it would tell Oracle that the space in use is less than db_recovery_file_dest_size.
Space on the file system doesn't matter. What matters is how much space Oracle thinks it is using, and whether that is below db_recovery_file_dest_size.
This keeps Oracle from being able to take over and fill up a file system. Think of it as a "quota".
ASKER
By "properly" do you mean a 'DELETE archivelog all completed before .....?
Is there a way to tell how much Oracle 'thinks' is available ?
>>By "properly" do you mean a 'DELETE archivelog all completed before .....?
That looks like an RMAN command. If so, yes, "properly" means using RMAN if you are using RMAN as a backup.
If you are using something else for a backup, then you need to use tools for it.
As for that specific command you posted, maybe? There are MANY ways to delete archived redo logs using RMAN. I cannot tell you the correct command to use because I have no way of knowing your disaster recovery requirements and/or backup strategy.
The command you posted might delete some files that you shouldn't. The DBA should know the backup strategy and what should and should not be able to be deleted. Check with them.
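For illustration only (the seven-day window is an assumption; verify it against your own backup and retention strategy before running anything), a time-based variant of that command looks like:

```sql
-- From the RMAN prompt: delete archived logs whose archiving
-- completed more than 7 days ago (window is illustrative)
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';
```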
>>Is there a way to tell how much Oracle 'thinks' is available ?
Maybe but I don't know how. I've never needed to figure it out. From the error message you posted, Oracle knows there isn't enough to create one more archived redo log.
Is there any reason you don't just increase db_recovery_file_dest_size? You say there is plenty of disk space available. Just tell Oracle it can use more and everything should take off and start running again.
I should also STRESS the importance of archived redo logs as part of a backup strategy. If you are missing just ONE of them that is needed for recovery, you CANNOT completely recover the database. You cannot skip over one.
Therefore I consider them the most important files to have. DO NOT just delete them to free up space. You need to understand when you can delete them to your comfort level. I keep several copies of each on several different tapes.
Before you ask: No, I cannot tell you what you need to do. Every DBA and database has a different disaster recovery plan/method they follow. What works for one will likely be different from another.
ASKER
NAME                         TYPE         VALUE
---------------------------- ------------ ------
db_recovery_file_dest        string       /rman
db_recovery_file_dest_size   big integer  900G
There's over 500G of free space on /rman. The backup has a delete command that I believe takes care of archived logs:
(database include current controlfile);
delete noprompt obsolete;
That is why I asked if there was a way to tell how much Oracle thinks it has available. Since you mentioned it, I'm very inclined to believe one possibility MAY be a mismatch between what is actually available and what Oracle believes is available.
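For reference, one hedged way to check that accounting is to query the standard Oracle dynamic performance views for the recovery area (the division is just unit conversion to GB):

```sql
-- Quota vs. what Oracle believes is used and reclaimable
SELECT name,
       space_limit/1024/1024/1024       AS limit_gb,
       space_used/1024/1024/1024        AS used_gb,
       space_reclaimable/1024/1024/1024 AS reclaimable_gb,
       number_of_files
FROM   v$recovery_file_dest;

-- The same accounting broken down by file type
-- (archived logs, backup pieces, control files, ...)
SELECT file_type, percent_space_used, percent_space_reclaimable
FROM   v$recovery_area_usage;
```

If used_gb is near limit_gb while the file system still shows free space, Oracle's quota, not the disk, is the constraint.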
ASKER CERTIFIED SOLUTION
ASKER
Thanks. I'll take a look at that view you mentioned should this condition occur in the future.
The Oracle support conclusion, which I would not have dared give management on my own. :-)
"
So it seems some problem in underlying IO sub-system was preventing archive destinations to be available for archiving which cause the errors.
Note: We can see you are using VMWare platform. Oracle does not yet certify it for any oracle product since there are still server resources utilization problem between them. "
Worded much better than my just saying there is plenty of room to write to, so either NetApp or VMware must be having problems finding it. :-)