How to find the total number of open files


Today I got an escalation about one server stating the following:

We cannot execute *ANY* commands, and it looks like it will require a Linux server bounce.

orabi@mialsb02-t1$ top
ksh: top: /usr/bin/top: cannot execute [Too many open files in system]
dwilliams@mialsb02-t1$ ls -l
ksh: ls: /bin/ls: cannot execute [Too many open files in system]

After seeing this, I somehow managed to log in to the server and increased the "file-max" parameter by executing the following command:  echo "75536" > /proc/sys/fs/file-max
Earlier the file-max value was showing as 65536. After increasing the file-max parameter, I am able to execute commands and it has stopped giving the "Too many open files in system" error.
However, I want to know the current status of open files. I counted the 'lsof' output (lsof | wc -l), but it shows a value of less than 8000. I'm not sure that's the right value; if it were, I shouldn't have been getting the "Too many open files" error, since I would have so many left. Please let me know the correct way to check the number of open files on a Linux system and how I would find them.
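For reference, the kernel's own system-wide counters live under /proc/sys/fs; a quick sketch of checking them (the output values shown here are illustrative):

# allocated file handles, free-but-allocated handles, and the file-max limit
cat /proc/sys/fs/file-nr
8192    0       75536

# the same limit, via sysctl
sysctl fs.file-max

# to make an increase survive a reboot, add "fs.file-max = 75536"
# to /etc/sysctl.conf and reload with:
sysctl -p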

pjedmond Commented:
I suspect we may be looking at the problem from a slightly too simplistic perspective. The issue here is that Linux abstracts most things, including devices, sockets and pipes, as files.

lsof | wc -l will give you a count of the number of open files, but it is possible for a single file to be opened multiple times for reading, and each additional concurrent open uses up your file-max. Connections to ports can also eat into your file-max, although this requires sustained connections, as network ports are normally configured to time out after a specified period.

However, the good news is that lsof can help us identify where the problem might be, as it has options to analyse specific subcategories of files. Because there are so many ways this type of scenario can arise, from an attempted denial of service through to a rogue application, you will need to look at your system logs to get a better feel for which area the problem occurred in. You can then use lsof to get better visibility of the files associated with networks/data files/devices/pipes etc. Have a look at the lsof man page (man lsof), which gives you a number of ways the lsof options let you break down file activity, network-related and otherwise.
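For example, a few of the lsof options that slice things up by subcategory (the user name and PID below are hypothetical placeholders):

# all network (IP-related) open files
lsof -i
# only Unix domain sockets
lsof -U
# files held open by one user, e.g. a hypothetical "oracle" user
lsof -u oracle
# files held open by one process ID
lsof -p 1234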



The command is lsof (list open files):

lsof | wc -l

should give you a close enough number of open files

This way you can also see which process opens that many files.
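A rough sketch of that, assuming standard lsof output where the first column is the command name (the counts and names shown are illustrative):

lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head
# sample output:
#  2041 oracle
#   312 java
#    87 sshd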
sorry ... I didn't read all the way through ...

ashsysad (Author) Commented:
Hello, I did the same, but that value is far less than the file-max parameter (which was 65536). Then why did I get the "Too many open files" error? Any idea?

ulimit -n

This will give you the maximum number of files the current user can open (note: -n is the open-files limit; -u is the max user processes limit).

ulimit -n [number]

will set this limit to [number].
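A minimal sketch in the current shell (4096 is just an illustrative value):

# show the per-process open-file limit for this shell
ulimit -n
# raise it for this shell and its children
# (only root can raise it above the hard limit)
ulimit -n 4096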
When the issue was happening, were you able to run a ps command? What does the crontab look like? Are there a lot of processes set up to run in there? To find out why this happened, you need to start looking over your logs and see what might have been causing the error.
ashsysad (Author) Commented:
Hi Namol, is there a way I can find the total count of files opened on the system by all users?
The last time I had this (and it was a few years ago now), it was definitely ulimit related (someone mentioned this above).

Check your settings, e.g.:
# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 8189
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 8189
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

There are hard and soft limits; check the man page for details: `man ulimit`.
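For example, to see both values, and to make a higher limit permanent for a user (the user name and numbers are illustrative, and assume pam_limits is in use):

# soft limit on open files for the current shell
ulimit -Sn
# hard limit on open files
ulimit -Hn
# persistent per-user limits go in /etc/security/limits.conf, e.g.:
#   oracle  soft  nofile  4096
#   oracle  hard  nofile  8192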
ashsysad (Author) Commented:
Thanks for explaining it to me in detail. Now I can figure out where the issue is.
Question has a verified solution.
