Something is eating disk space

New to Linux.  Something is eating disk space at about 0.82 GB an hour, and it will continue until the disk is at 100%.  A reboot frees all of the phantom disk usage and we are back to 18% used.

I check the disk space using Webmin.  It is the disk mounted as
/ (Root filesystem)       Linux Native Filesystem (ext3)       Partition labelled /1

We are running:
CentOS Linux 5.5
Webmin version 1.510
MySQL version 5.1.52
Apache version 2.2.17
PHP - don't know the version
Zimbra email server

I've tried restarting MySQL, Apache, VNC & Zimbra.  No change.
Rebooting is the only thing that will help.

I know you cannot diagnose the issue without access to the server, but has anyone run into this issue?  Any ideas of where to look?  


omarfarid Commented:
you can do the following

touch /tmp/myfile
sleep 30
find / -newer -exec ls -l {} \;

this should give you the files that have been modified since the timestamp file was created, i.e. the ones still being written

you may run

cd /
du -k | sort -rn | more

Look in /tmp and /var/tmp; those are the two usual locations for temporary storage.

cd /
ls | while read a; do du -ks "$a"; done

This will narrow the path down to where the growth is.
You can then repeat the above to further narrow down where the growth is seen:
cd to the directory where the increase is seen and repeat the above.
Post the output of df -k to see the mount points that you have.
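The narrowing loop above can be sketched in a scratch directory (the directory names here are invented for the demo; on the real box you would start from / and descend into the largest entry each time):

```shell
# Build a scratch tree with one deliberately large subdirectory
SCRATCH=$(mktemp -d)
mkdir "$SCRATCH/big" "$SCRATCH/small"
dd if=/dev/zero of="$SCRATCH/big/data" bs=1024 count=2048 2>/dev/null  # ~2 MB
echo "tiny" > "$SCRATCH/small/note"

# Per-entry usage in KB, largest last: descend into the biggest entry next
cd "$SCRATCH"
ls | while read a; do du -ks "$a"; done | sort -n

cd / && rm -rf "$SCRATCH"
```

Repeating this one level at a time walks you straight to the directory that is growing.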

A slight correction: you omitted the reference to the timestamp file in the find command, which should be
find / -newer /tmp/myfile -exec ls -l {} \;

The problem I see with the above is that it will scan the entire system, while my suggestion would first narrow down to where the growth is seen, and then you can look to see what is writing/creating those entries.
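One way to combine the two approaches is to scope the find to a suspect directory once du has pointed at it. A quick sketch (the scratch directory and 1-second wait are stand-ins so the demo finishes fast; on the real box you would use the directory du flagged and sleep 30 or longer):

```shell
WATCH_DIR=$(mktemp -d)     # on the real box: the directory du pointed at
STAMP=$(mktemp)
sleep 1                    # on the real box: sleep 30 or more
echo "log line" >> "$WATCH_DIR/suspect.log"   # simulates the growing file

# Only regular files modified after the stamp are listed
find "$WATCH_DIR" -type f -newer "$STAMP" -exec ls -l {} \;

rm -rf "$WATCH_DIR" "$STAMP"
```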

omarfarid Commented:
thanks for the correction :)

Yes, but this command will have to run a few times until it finds the file(s) that are growing.
Another approach:

you can run
lsof  | gawk '{if (($4 ~ /[0-9]+w/) && (  $5 ~ /REG/ ) ) print $0}'

This will tell you which files are being written at this moment.
ls -l `lsof  | gawk '{if (($4 ~ /[0-9]+w/) && (  $5 ~ /REG/ ) ) print $0}' |gawk '{print $NF}' | sort -u` | sort -n -k5

The above command will list all the files currently being written, with their sizes, sorted by size. You should check the file on the bottom line.
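The filter can be sanity-checked against canned lsof-style lines (the process names and paths below are invented; field 4 is the FD column, where a trailing `w` means open for writing, and field 5 is the TYPE column). awk behaves the same as gawk for this pattern:

```shell
# Two fake lsof lines: one file open for writing (FD "3w"), one read-only (FD "4r")
printf '%s\n' \
  'myapp  1234 root 3w REG 8,1 52428800 99 /var/log/grow.log' \
  'other  5678 root 4r REG 8,1     1024 12 /etc/hosts' |
awk '{ if (($4 ~ /[0-9]+w/) && ($5 ~ /REG/)) print $0 }'
# Only the line for the file open for writing comes through
```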
Hugh Fraser (Consultant) Commented:
It's unusual that the space is restored after a simple reboot; most files would still exist after the reboot, even those in /var/log or /tmp. Try stopping services (not restarting them) and checking after each one to see if the space is restored.

Sometimes installations of databases like MySQL include an optimize step as part of the startup that might recover some disk space, but even that's a stretch unless you have an app that's doing a lot of insert/delete traffic between restarts. 0.82 GB/hour would imply a lot of traffic to the database.
I've had a problem like this before and it was a real pain to figure out.  To tell the truth I'm not sure how we figured out what file it was.

In our situation somebody created a script that was executed at boot time to delete a file.  The problem is that the file was a log file that was in use.  So you can't ls it any more, but it continues to grow and grow.  When you reboot, and actually stop the service that was writing to the log file, the file is then really deleted and the space freed up.

Again, I know we narrowed it down to the directory, but I can't remember how we found the exact file.
The lsof method that was referenced in post http:#35111628 by jackiechen858 is one.
The other is to look at all the services that start.
Another is to narrow down the list to the directory where the space is being taken up and then run lsof +d <directoryName>; this will tell you what process has what files open.
i.e. if the space you are missing is in /var/tmp
lsof +d /var/tmp will display all the processes that have an open filehandle into that directory
Then you could look at which file is not showing up in /var/tmp from the list.
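The deleted-but-still-open situation described in the anecdote above can also be listed system-wide with lsof +L1 (files with link count below 1, i.e. unlinked but still held open), and it is easy to reproduce with /proc on Linux:

```shell
TMP=$(mktemp)
exec 3> "$TMP"          # hold the file open on descriptor 3
echo "still writing" >&3
rm "$TMP"               # the name is gone, but the space is NOT freed yet

# /proc still shows the descriptor pointing at the deleted file
readlink "/proc/$$/fd/3"   # the target ends in "(deleted)"
# lsof +L1 would list the same file system-wide (if lsof is installed)

exec 3>&-               # closing the descriptor finally frees the space
```

This is exactly why the space only comes back at reboot: the writer process has to exit (or close the descriptor) before the kernel releases the blocks.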
I've seen issues similar to this, but the space did not come back on reboot, i.e. the reference to the filename was being cleared from the filesystem, but the operation was incomplete and the space was not freed.
Hugh Fraser (Consultant) Commented:
It's certainly possible an app is creating and extending a file quickly, deleting it when it exits in response to the signal it receives when the system's shutting down. Stopping services one at a time, followed by remaining processes, might identify the culprit.

You could also install a tool like iotop to see what process is doing the most I/O. Process accounting can also help with similar stats.
It sounds like /tmp is filling up, which is why the reboot "fixes" the problem: /tmp is cleared on bootup. This is likely the case if /tmp is not a separate file system.

You could create a separate file system for /tmp and mount this as /tmp.
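A size-capped tmpfs is one way to do that; a sketch of the /etc/fstab line (the 1G cap is an assumption, size it to your RAM):

```
# /etc/fstab: mount a RAM-backed, size-capped filesystem on /tmp
tmpfs   /tmp   tmpfs   defaults,size=1G   0 0
```

With this in place a runaway writer fills /tmp and hits "No space left on device" instead of taking the root filesystem to 100%.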

If you need assistance you need to respond to some of the questions already posted.
Waterstone (Author) Commented:
Sorry I let this question dangle.  Been fighting fires.  We found the file was a VNC error file that was filling up because of a syntax error in the VNC config file.  Fixed that and we are good.  

Thanks for all the replies.