denvermc
asked on
Log File Management
Platform: SPARC 20
OS: Solaris 2.5.1
Shell: csh
I run a program with the following command to capture standard out and standard error:
program >>&! file.log &
Eventually, file.log gets big and I want to trim it down to the last 500 lines, or so. How can this be done without stopping the program?
tail -500 file.log > other_file; cat other_file > file.log
depending on your shell (and/or OS), you may also try:
(tail -500 file.log)|cat >file.log
ASKER
My experiments have shown that ahoffmann's suggestions will not work because the file is open, as yaiyai suggests. The approach I have hit upon is to use |& rather than >>&!, to pipe it through another shell script which handles log file management.
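A minimal sketch of that pipe-through approach, written in modern POSIX sh for clarity (the script name and the 1000/500 thresholds are illustrative, not from the thread):

```shell
#!/bin/sh
# logkeeper.sh - hypothetical sketch of the pipe approach.
# In csh, start it as:  program |& logkeeper.sh file.log &
# (|& sends both stdout and stderr down the pipe)
LOG=$1
MAX=1000    # trim once the log grows past this many lines
KEEP=500    # lines to retain after a trim

while IFS= read -r line; do
    echo "$line" >> "$LOG"
    if [ "$(wc -l < "$LOG")" -gt "$MAX" ]; then
        tail -$KEEP "$LOG" > "$LOG.tmp"
        mv "$LOG.tmp" "$LOG"    # safe: only this script writes the log
    fi
done
```

Because this script is the only process writing the log, the trim cannot race the program; the trade-off is one extra process per logged program.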
I guess ahoffmann's (tail -500 file.log)|cat >file.log
suggestion should work even if the file is open for read/write.
jmohan, my suggestion may work, if
1. the file is not opened using exclusive locks
2. the shell allows this special construct
Not really possible the way you want it. You would have to stop the process, tail 500 lines to the file.log, then restart the process.
My suggestion is
tail -500 file.log > archive.log
cp /dev/null file.log
Copying null to a log will zero it out without blowing off the processes that are writing to it. It lets you start the log over without having to stop and restart the process doing the logging.
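Combining the two steps, a sketch that keeps the last 500 lines without touching the writer (file names are illustrative):

```shell
#!/bin/sh
# Trim file.log to its last 500 lines in place.
# The writer's file descriptor stays open and valid throughout.
LOG=file.log
TMP=$LOG.tail

tail -500 "$LOG" > "$TMP"   # save the tail before truncating
cp /dev/null "$LOG"         # zero the file; does not disturb the writer
cat "$TMP" >> "$LOG"        # restore the saved lines
rm -f "$TMP"
```

Note that this relies on the program having opened the log in append mode (as csh's >> does); a writer holding a plain offset would resume at its old position and pad the file with null bytes.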
> cp /dev/null file.log
is equivalent (not exactly the same) as
echo "" >file.log
but still has the restrictions described before (locks, etc.)
ASKER
Stopping the program is not an option. Further testing shows that ahoffmann's answer (tail -500 test.log)|cat >&! test.log works on Solaris with csh. However, sometimes it causes a core dump of the tail command. An examination of the core dump indicates that the tail command fails with:
tail: cannot determine length of %s
have you tried logrotate?
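For context: logrotate is a third-party tool here, not part of stock Solaris 2.5.1, but where it is available its copytruncate directive solves exactly this problem, rotating the log without restarting the writer. A hypothetical entry (path and sizes illustrative):

```
# Hypothetical logrotate config entry
/path/to/file.log {
    size 1M
    rotate 4
    copytruncate    # copy the log aside, then truncate it in place,
                    # so the writing process keeps its open descriptor
}
```

Unlike the tail-based approaches, this rotates the whole file rather than keeping the last 500 lines.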
> tail: cannot determine length of %s
ok, this may occur if tail reads while your process writes to the file. AFAIK there is no way around this other than copying the file first (probably not what you want).
Otherwise write a script which checks the result of the tail command, like (not ready for use):
(tail -500 file.log)|cat >file.log
if ($status != 0) then
    echo truncation failed with error: $status
endif
if (`wc file.log|awk '{print $1}'` < 500) then
    echo tail failed
endif
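The "copy the file first" workaround can be sketched like this: tail reads a frozen snapshot, so it can no longer race the writer and dump core (names illustrative):

```shell
#!/bin/sh
# Avoid the tail/writer race by snapshotting the log first.
LOG=file.log

cp "$LOG" "$LOG.snap"             # freeze a consistent copy
tail -500 "$LOG.snap" > "$LOG"    # rewrite the live log from the snapshot
rm -f "$LOG.snap"
```

The cost is that any lines the program writes between the snapshot and the rewrite are lost, which may be why ahoffmann calls it "probably not what you want".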
ASKER CERTIFIED SOLUTION
You can tail -500 file.log > another_file