Waterstone

asked on

Something is eating disk space

Hello,  
 
New to Linux.  Something is eating disk space at about 0.82 GB an hour.  This will continue until the disk is at 100%.  A reboot frees all of the phantom disk usage and we are back to 18% used.

I check the disk space using Webmin.  It is the disk mounted as
/ (Root filesystem)       Linux Native Filesystem (ext3)       Partition labelled /1

We are running:
CentOS Linux 5.5
Webmin version 1.510
MySQL version 5.1.52
Apache version 2.2.17
PHP - don't know the version
Zimbra email server

I've tried restarting MySQL, Apache, VNC & Zimbra.  No change.
Rebooting is the only thing that will help.

I know you cannot diagnose the issue without access to the server, but has anyone run into this issue?  Any ideas of where to look?  

Thanks

arnold

Look in /var/tmp and /tmp; those are the two usual locations for temporary storage. Also check /var/log.
To find where the growth is, do:
cd /
ls | while read a; do
du -ks "$a"
done
This will narrow the path down to where the growth is.
You can then repeat the above to further narrow down where the growth is seen:
cd to the directory where the increase is seen
and repeat the above.
Post df -k so we can see the mount points that you have.
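A slightly more compact version of the same narrowing, assuming the GNU du and sort that ship with CentOS (the -x flag keeps du on the root filesystem so other mounts don't muddy the numbers):

du -kx --max-depth=1 / 2>/dev/null | sort -n      # size of each top-level directory in KB, largest last
du -kx --max-depth=1 /var 2>/dev/null | sort -n   # then drill into whichever directory is growing
df -k                                             # the mount-point listing requested above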


ASKER CERTIFIED SOLUTION
omarfarid
A slight correction: you omitted the reference file in the find command, which should be
find / -newer /tmp/myfile -exec ls -l {} \;

The problem I see with the above is that it will scan the entire system, while my suggestion first narrows down where the growth is seen, and then you can look to see what is writing/creating those entries.
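For completeness, the reference file has to exist before the growth happens, so the sequence would look something like this (/tmp/myfile is just the placeholder name used above):

touch /tmp/myfile                                        # create the timestamp reference now
# ...wait while the usage climbs, e.g. an hour...
find / -newer /tmp/myfile -type f -exec ls -l {} \; 2>/dev/null
# -type f (added here) skips directories and device nodes; expect some noise from /proc and /var/log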
SOLUTION
Another approach:

You can run:
lsof  | gawk '{if (($4 ~ /[0-9]+w/) && (  $5 ~ /REG/ ) ) print $0}'

This will tell you which files are being written at this moment.
ls -l `lsof  | gawk '{if (($4 ~ /[0-9]+w/) && (  $5 ~ /REG/ ) ) print $0}' |gawk '{print $NF}' | sort -u` | sort -n -k5

The above command lists the sizes of all the files currently being written and sorts them by size. Check the files at the bottom of the list.
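To see which of those files is actually growing, one option is to snapshot the sizes a few minutes apart and compare (/tmp/writers, /tmp/sizes.1 and /tmp/sizes.2 are just scratch filenames):

lsof | gawk '{if (($4 ~ /[0-9]+w/) && ($5 ~ /REG/)) print $NF}' | sort -u > /tmp/writers
ls -l `cat /tmp/writers` > /tmp/sizes.1 2>/dev/null     # snapshot of sizes now
sleep 300                                               # wait five minutes
ls -l `cat /tmp/writers` > /tmp/sizes.2 2>/dev/null     # snapshot of sizes later
diff /tmp/sizes.1 /tmp/sizes.2                          # files whose size changed are the ones growing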
It's unusual that the space is restored after a simple reboot; most files would still exist after the reboot, even those in /var/log or /tmp. Try stopping services (not restarting them) and checking after each one to see if the space is restored.

Sometimes installations of databases like MySQL include an optimize step as part of the startup that might recover some disk space, but even that's a stretch unless you have an app that's doing a lot of insert/delete traffic between restarts. 0.82 GB/hour would imply a lot of traffic to the database.
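A rough way to do that elimination on this box, checking usage after each stop (the service names are guesses based on what the asker listed; check chkconfig --list for the real ones):

df -k /                                     # baseline usage
service httpd stop;     df -k /
service mysqld stop;    df -k /
service vncserver stop; df -k /
su - zimbra -c 'zmcontrol stop'; df -k /
# if used space drops (or stops climbing) after one of these, that service owns the growth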
I've had a problem like this before and it was a real pain to figure out.  To tell the truth I'm not sure how we figured out what file it was.

In our situation somebody created a script that was executed at boot time to delete a file.  The problem is that the file was a log file that was in use.  So you can't ls it any more, but it continues to grow and grow.  When you reboot (or actually stop the service that was writing to the log file), the file is then really deleted and the space is freed up.

Again, I know we narrowed it down to the directory, but I can't remember how we found the exact file.
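For what it's worth, lsof can show that situation directly: a file that has been unlinked but is still held open has a link count of 0, so something like the following (generic, not specific to this server) would have found it:

lsof +L1                  # open files with link count < 1, i.e. deleted but still open
lsof | grep -i deleted    # roughly the same idea; the SIZE/OFF column shows how big the invisible file is
# stopping or restarting the owning process is what actually releases the space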
The lsof method that was referenced in post http:#35111628 by jackiechen858 is one.
The other is to look at all the services that start.
Another is to narrow the list down to the directory where the space is being taken up and then run lsof +d <directoryName>; this will tell you what process has which files open.
i.e. if the space you are missing is in /var/tmp
lsof +d /var/tmp will display all the processes that have an open filehandle into that directory
Then you could look at which file is not showing up in /var/tmp from the list.
I've seen issues similar to this, but the space did not come back on reboot, i.e. the reference to the filename was cleared from the filesystem, but the cleanup was incomplete and the space was not freed.
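A quick cross-check for that "deleted but still open" case is to compare what df reports with what du can actually see; a large gap between the two is space held by unlinked files:

df -k /                  # what the kernel says is in use on the root filesystem
du -skx / 2>/dev/null    # what the visible files add up to (-x stays on this one filesystem)
# if df shows several GB more than du, the difference is held open by some process (see lsof +L1 above)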
It's certainly possible an app is creating and extending a file quickly, then deleting it when it exits in response to the signal it receives when the system is shutting down. Stopping services one at a time, followed by the remaining processes, might identify the culprit.

You could also install a tool like iotop to see what process is doing the most I/O. Process accounting can also help with similar stats.
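Note that iotop is not in the CentOS 5 base repositories and needs per-process I/O accounting in the kernel, so this is only a sketch assuming EPEL is configured and the kernel cooperates:

yum install iotop        # from EPEL, if that repository is set up
iotop -o -b -n 5         # -o: only processes doing I/O, -b: batch (non-interactive), -n: 5 iterations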
It sounds like /tmp is filling up, which is why the reboot "fixes" the problem: /tmp is cleared on bootup. This is likely the case if /tmp is not a separate filesystem.

You could create a separate file system for /tmp and mount this as /tmp.

If you need assistance you need to respond to some of the questions already posted.
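On the separate-/tmp suggestion: if there is no spare partition, a capped tmpfs is one way to do it (the 1g size is just an example); a runaway writer then fills /tmp and gets "no space left" errors instead of filling the root filesystem:

mount -t tmpfs -o size=1g,mode=1777 tmpfs /tmp
# to make it permanent, add a line like this to /etc/fstab:
# tmpfs   /tmp   tmpfs   size=1g,mode=1777   0 0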
Waterstone

ASKER

Sorry I let this question dangle.  Been fighting fires.  We found that the culprit was a VNC error file that was filling up because of a syntax error in the VNC config file.  Fixed that and we are good.

Thanks for all the replies.
Welcome