Solved

Delete lines older than a specific date from a Unix file.

Posted on 2013-11-04
14
735 Views
Last Modified: 2013-12-02
The alert log file gets appended to all the time. I want to do automatic maintenance of the alert log file by going through the log and deleting lines OLDER than 30, 60 or 90 days while keeping the same file name. I do not want to pipe it to a new name.
Question by:KamalAgnihotri
14 Comments
 
LVL 76

Expert Comment

by:slightwv (Netminder)
ID: 39621767
You probably 'can' come up with a script to do this, but why not just move it or delete it every 30, 60 or 90 days?

Oracle will create a new one automatically if it doesn't exist.
0
 
LVL 34

Expert Comment

by:johnsone
ID: 39621788
We used to do it with a cron job.  Every week the job would rename the current alert log with a date stamp.  Then we would go through and remove the older ones manually, though you could make that part of the cron job as well if you wanted.  We just wanted to be sure that if there was an issue we kept the logs until it was resolved, and you can't really script around that.
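
Roughly along these lines (the paths, names and retention here are only placeholders, not what we actually ran):

cd /u01/app/oracle/admin/ORCL/bdump || exit 1
# stamp the current alert log with today's date; Oracle recreates alert_ORCL.log on its next write
mv alert_ORCL.log alert_ORCL.log.$(date +%Y%m%d)
# the optional cleanup part -- remove stamped copies older than 90 days
find . -name 'alert_ORCL.log.*' -mtime +90 -exec rm -f {} \;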
0
 

Author Comment

by:KamalAgnihotri
ID: 39621797
You are correct. But when you have over 100 databases, you want to automate as much as possible. This is also a generic question: a log file that is continuously appended to keeps growing until it eventually pushes disk space past the %-full threshold.

What script or command can be put in a cron job, run daily, that will delete the lines older than 30 days? The good thing is that every line in the log file begins with a time stamp like this:

6/25/13 07:04:11 PM EDT [INFO] [EPAgent] Using Introscope installation at: /oracle/epagent9/.
6/25/13 07:04:11 PM EDT [INFO] [EPAgent] CA Wily Introscope(R) Version 9.1
6/25/13 07:04:11 PM EDT [INFO] [EPAgent] Copyright (c) 2012 CA. All Rights Reserved.
6/25/13 07:04:11 PM EDT [INFO] [EPAgent] Introscope(R) is a registered trademark of CA.
6/25/13 07:04:11 PM EDT [INFO] [EPAgent] Starting Introscope EPAgent...
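
For illustration only, here is a rough sketch of the kind of daily cron command I have in mind ("logfile" stands for the actual log path; this assumes GNU awk and GNU date, assumes every line really does start with that M/D/YY HH:MM:SS AM/PM stamp, and it still rewrites the file while the application may be writing to it):

cutoff=$(date -d '30 days ago' +%s)
awk -v cutoff="$cutoff" '
{
    split($1, d, "/")                    # month / day / two-digit year
    split($2, t, ":")                    # hour : minute : second
    h = t[1]
    if ($3 == "PM" && h < 12) h += 12    # convert 12-hour clock to 24-hour
    if ($3 == "AM" && h == 12) h = 0
    ts = mktime(sprintf("20%02d %02d %02d %02d %02d %02d", d[3], d[1], d[2], h, t[2], t[3]))
    if (ts == -1 || ts >= cutoff) print  # keep unparseable lines and anything newer than the cutoff
}' logfile > logfile.tmp && cat logfile.tmp > logfile && rm -f logfile.tmp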
0
 
LVL 76

Expert Comment

by:slightwv (Netminder)
ID: 39621808
Why do you not want to just move it or delete it?  It is so much easier than writing a script to edit it in place.
0
 

Author Comment

by:KamalAgnihotri
ID: 39621870
slightwv,

There are several ways of doing a task. What you said is one way, which is correct. I want to find another way. Any suggestion on "another way" would be greatly appreciated.
0
 
LVL 76

Expert Comment

by:slightwv (䄆 Netminder)
ID: 39621883
Sorry but I don't have a script around to do it the hard way.
0
 
LVL 34

Expert Comment

by:johnsone
ID: 39621891
I don't have one that will delete only specific lines either.  Also, be aware that messages in the alert log can span multiple lines, and not every line has a date stamp.  That makes the task you are trying to do much more difficult.
0
 
LVL 19

Accepted Solution

by:
simon3270 earned 75 total points
ID: 39623906
Editing active files is fraught with problems, particularly if new data arrives while the file is being changed.  Almost all "in-place" editors actually create a new file and rename it to have the same name as the old one.  If a process is writing to the old file and keeps it open during your edit, it will continue to write to that old file, even though it has apparently been deleted. As soon as the process closes the file, everything it wrote after the edit is lost.

You would be better off using something like the "logrotate" program available on many Linux/UNIX systems.  This automates the rotation of log files, optionally compressing old logs, and deleting old logs so that only a specified number are kept.  When it moves a file to a new name (to mark it as "old"), it can optionally run a command which tells the processes writing that log file to close their current file handle and reopen the log file with the original name (often by running "kill -1" on the process).
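
For the alert log itself, a minimal logrotate entry might look something like this (the path and retention below are only examples):

/u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log {
monthly
rotate 3
compress
copytruncate
}

Using "copytruncate" here sidesteps the stale-file-handle problem described above, because the original file is truncated in place rather than renamed.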
0
 
LVL 35

Assisted Solution

by:Mark Geerlings
Mark Geerlings earned 75 total points
ID: 39624198
I think the two best ways to solve this problem (at least for Oracle "alert.log" files) are:
1. Use a simple shell script to rename the alert.log file each night, just before midnight, to a file name that includes the month and day.
2. Use the UNIX logrotate mechanism to rename the alert.log file.

In either case, the database will create a new "alert.log" file with the standard name the next time it tries to write to the file.

The same problem exists with the listener.log file for Oracle's TNS listener, but that one is more complex to rename because of the problem that simon3270 mentioned.  I usually script a three-step process for these: "lsnrctl [name] stop", rename the log file to include the current month and day, then "lsnrctl [name] start" to create a new log file with the standard name.
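
Something like this, where the listener name and log path are only examples:

lsnrctl stop LISTENER
mv $ORACLE_HOME/network/log/listener.log \
   $ORACLE_HOME/network/log/listener.log.$(date +%m%d)
lsnrctl start LISTENER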
0
 
LVL 19

Expert Comment

by:simon3270
ID: 39624283
For the listener you can also use the copytruncate option for logrotate.  This blog entry shows how.  The actual control file used in this example is:
$ cat /etc/logrotate.d/oracle-listener
/u01/app/oracle/product/10.2.0/db_1/network/log/listener.log {
weekly
copytruncate
rotate 4
compress
}


"weekly" is how often the rotation is done (you could have other periods, such as "daily", or specify a maximum size).
"copytruncate" copies the old log file to a new one, then empties the original one - when the listener next writes a log entry, it writes to the start of the file.
"rotate 4" keeps 4 old files - if rotation creates a new "old" file while there are already 4 "old2 files,  the oldest is deleted.
"compress" gzips the "old" files to save space.  an alternative is "delaycompress" which leaves the most recent "old" file alone (to make searches easier, for example), but compresses any older "old" files.
0
 
LVL 34

Expert Comment

by:johnsone
ID: 39624345
Actually, with the listener, you don't want to do a stop, rename, start.  That affects incoming connections while the listener is down.  It is only down for a short period of time, but it is down.

I use this method:

lsnrctl [name] set log_status off
mv listener.log listener.log.$(date +%Y%m%d)
lsnrctl [name] set log_status on

You may miss a couple of messages, but you will not affect incoming connections.
0
 
LVL 35

Expert Comment

by:Mark Geerlings
ID: 39624354
Thank you, simon3270 and johnsone.  Those are two options for the listener.log file that I wasn't aware of before.  I plan to test both of them in non-PROD systems here today.
0
 

Author Comment

by:KamalAgnihotri
ID: 39627460
I like the logrotate idea and I am going to try that. On Sun Solaris the equivalent of logrotate is logadm. I will keep this question open until I have the script created and tested.
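
Something along these lines is what I have in mind, untested so far (the path and retention are placeholders, and the flags should be checked against the logadm man page for your Solaris release):

# add an /etc/logadm.conf entry: keep 4 old copies, rotate weekly, gzip all old copies,
# and copy-and-truncate so the alert log keeps the same name
logadm -w oracle_alert -C 4 -p 1w -z 0 -c /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log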
0
