santoshi kannan

asked on

automating the deletion of log files if the mount point directory shows 90%

Hi all,

Kindly help me with writing a script: if the mount point /mnt reaches 90% usage, it should delete the WebLogic log files in the directory.

I am able to get the mount point usage percentage through a command, but I am struggling to write the script.

Please help me with the shell script.
ASKER CERTIFIED SOLUTION
woolmilkporc
How much space do you have available?
Use logrotate to manage the amount of data consumed. If you have a specific need for data from the logs, you can use the logrotate process to manage the amount of space consumed while at the same time processing the logs to extract the data you want, storing it in a DB for quicker review or to generate alerts ....

look at /etc/logrotate.d for configuration examples.
man logrotate
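For illustration, a minimal logrotate configuration along these lines could go into /etc/logrotate.d/ (the glob pattern below is an example, not taken from your system, so adjust it to your actual log directories):

```
/path/to/weblogic/servers/*/logs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```

You can validate a configuration without touching any files by running logrotate in debug mode: logrotate -d /etc/logrotate.d/yourconfig.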
Why not configure WebLogic to handle it by setting how many files it should retain?
http://docs.oracle.com/cd/E13222_01/wls/docs91/ConsoleHelp/taskhelp/logging/RotateLogFiles.html
The mechanism is already built-in with the application.

You're not saying which version you have/use.
santoshi kannan

ASKER

Hi woolmilkporc,

Thanks for helping me out with the script. I am facing one more issue: we have 3 servers whose log files need to be cleared under /mnt, such as below.
/mnt/wls1034/mw/up/domain/cdomain/servers/ps1/logs
/mnt/wls1034/mw/up/domain/cdomain/servers/ps2/logs

I have grouped these log file locations into a file.cfg, so I could pass this file to the script and then delete the files.

Also, I am getting an error from the rm command:
+ xargs rm
rm: missing operand
Kindly help.
Hi Arnold,

I am currently using WebLogic 11g on the server, and we have enabled the logging properties on each server. In rare cases we face the disk utilisation issue, and that is the reason we are going for automation with a script to delete the log files. Thanks for helping.
Hi,

the "missing operand" error from rm might be due to the fact that there was nothing to delete, so xargs ran rm without arguments.
To avoid this in the future, add the "-r" flag ("--no-run-if-empty") to xargs.

And here is the extended script:

PCT=$(df -P /mnt | awk '!/Filesystem/ {sub("%","",$5); print $5}')
if [[ $PCT -gt 90 ]]; then
  # Read one log directory per line from file.cfg
  while IFS= read -r DIR; do
    find "$DIR" -name "*.log" -mtime +3 -print0 | xargs -r -0 rm
  done < file.cfg
fi
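As a quick sanity check of the df parsing, the awk extraction can be exercised against a canned "df -P" line (the sample numbers below are invented purely for the test), so it can be verified on any machine:

```shell
# Feed awk a canned "df -P" output instead of a live filesystem; the
# numbers are made up and exist only to exercise the parsing.
SAMPLE='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 10000000 9200000 800000 92% /mnt'

PCT=$(printf '%s\n' "$SAMPLE" | awk '!/Filesystem/ {sub("%","",$5); print $5}')
echo "$PCT"   # prints 92
```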
Look at the current settings that you have for the logs: how far back do you need the logs to go? If you need data from the logs, look at log-crunching tools that will process the logs, extracting the data you want when they are rotated, so that you can rotate the logs daily/weekly while retaining 3 or 4 versions (or more, if that is your configuration).

The issue is either the logs or something else; deleting logs as you are contemplating would suggest that something else could be contributing to the space consumption, such that there might not be any logs to delete when the space is exhausted.

/mnt is a mount point for a temporary mount of partitions. Is it being used for an external USB-type drive onto which you are writing your logs?
Hi Wool,

there are two issues:
1. xargs rm is not working, so I replaced it with -delete.
2. file.cfg is not getting read; in the end it throws "directory or file not found", so I had to replace it with:
if [[ $PCT -gt 90 ]]; then
  find <dirpath>/file.out* -type f -mtime +3 -delete
  find <dirpath1>/file.log* -type f -mtime +3 -delete
  find <dirpath2>/file.out* -type f -mtime +3 -delete
fi

After I made some changes, I am able to run the script by scheduling it.
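For scheduling, a crontab entry along these lines could run the cleanup hourly (the script path /opt/scripts/clean_logs.sh is a placeholder for wherever the script was saved):

```
0 * * * * /opt/scripts/clean_logs.sh >> /var/log/clean_logs.log 2>&1
```

Add it with crontab -e; stdout and stderr are appended to a log file so failed runs can be reviewed later.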
Hi Arnold,

As mentioned before, I have made changes to the logging properties. A few times the logs still grow too large, during the testing phase or during lots of builds when we change the logging properties.
During those times, the script has to run as part of the automation.

Santoshi
1. Did you use the "-r" flag of xargs? It will suppress the error message when nothing is found.
Anyway, if "-delete" does the trick, that's fine. With a huge number of "find" results it will take longer than "xargs -r rm", that's all.
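To see the two deletion variants side by side, here is a self-contained sketch using throwaway files in a temp directory (GNU touch's -d option is assumed, to back-date a file):

```shell
# Create a scratch directory with one old and one fresh log file.
tmp=$(mktemp -d)
touch -d '10 days ago' "$tmp/old.log"   # older than 3 days: should be removed
touch "$tmp/new.log"                    # fresh: should survive

# Variant 1: let find unlink its matches itself.
find "$tmp" -name '*.log' -mtime +3 -delete

# Variant 2: hand the matches to rm; -r skips the rm run when nothing
# matched, and -print0/-0 keep filenames containing spaces intact.
find "$tmp" -name '*.log' -mtime +3 -print0 | xargs -r -0 rm

ls "$tmp"   # only new.log remains
```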

2. Sorry, I don't really understand this part. Wouldn't it have been sufficient to use

while IFS= read -r DIR; do
...
done < /path/to/file.cfg

when file.cfg isn't in the current directory? I thought this was self-evident.

I'm anyway glad to hear that you have a working solution now. Good luck!
I am really happy that the given solution worked fine.