  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 5849

Linux File Descriptors

Dear All,
I have a couple of questions regarding file descriptors (handles) and the ulimit command.
I already know about ulimit. To get a snapshot of the file descriptors on the system, I ran the following command:


[root@localhost mail]# cat /proc/sys/fs/file-nr
2520 0 206068
2520   = total allocated file descriptors since boot
0      = total free allocated file descriptors
206068 = maximum open file descriptors permitted
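For reference, the three fields can be pulled apart in the shell. A minimal sketch, using the sample line from the output above (on a live system you would read /proc/sys/fs/file-nr itself):

```shell
# Split the three fields of /proc/sys/fs/file-nr into named variables.
# The sample line is the output shown above; on a live system you would
# read the real file instead:  read -r line < /proc/sys/fs/file-nr
line="2520 0 206068"
set -- $line
allocated=$1; free=$2; max=$3

echo "allocated since boot: $allocated"
echo "free (unused):        $free"
echo "system-wide maximum:  $max"
```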

I took the descriptions above from http://support.zeus.com/zws/faqs/200...iledescriptors
What I do not understand is how these numbers relate: if the total number of descriptors allocated since boot is 2520 and the number available is 0, how can the maximum open file descriptors permitted be larger than that?
Also, if the first number describes the file descriptors of one process in particular, which process is it?

Another thing that confused me from the same article is the following phrase :

"In current (2.4+) Linux kernels, file descriptors are dynamically
created as necessary, but cannot be removed or reduced other than by
rebooting the server."

What does that mean? That on 2.4 kernels and above (I am using 2.6) file descriptors stay attached to the process until the system is rebooted? I think I have misunderstood this somewhere.

Can anyone shed some light on the issues mentioned above please ?
Asked by: thevpn.guru

1 Solution
 
fgande commented:
Linux kernels from 2.6 on always report 0 for the number of free allocated file descriptors, as this field is no longer used.

The total of allocated file descriptors is system-wide, not for one process in particular. If you want to see the open files for a particular process, use the "lsof" command. Keep in mind, however, that lsof also lists the library files tied to each process, so the number of open files it reports will exceed what you see in /proc/sys/fs/file-nr.

thevpn.guru (Author) commented:
How can I get the exact number of open file descriptors in this scenario, then?
 
fgande commented:
The file /proc/sys/fs/file-nr reports the exact number of file descriptors in use (first number) and the maximum number available (last number).
thevpn.guru (Author) commented:
But the maximum number is the global one; it does not say how many a particular process can have. In other words, all processes together can have MAX file descriptors, but how many does one particular process have right now, and what is the maximum a single process can have?
 
fgande commented:
To list the file descriptors in use by a particular process, find its process ID, then list the contents of the directory /proc/<process_id>/fd/. The number of files in this directory is the number of file descriptors in use by that process.

To find the number of open file descriptors one program can use, run this command: grep "#define __FD_SETSIZE" /usr/include/*.h /usr/include/*/*.h. This returns the compile-time option a program uses to determine the file descriptor set size. If you need to alter this setting, or want more ways to limit file descriptor usage, see http://bcr2.uwaterloo.ca/~brecht/servers/openfiles.html for more information on this matter.

I hope this helps :)
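The per-process counting step above can be sketched in the shell. A minimal sketch, using the current shell's own PID ($$) as a stand-in for a real process ID such as 12483:

```shell
# Count the open file descriptors of one process by counting the
# entries in /proc/<pid>/fd. Each entry is a symlink named after one
# open descriptor, so the entry count is the open-fd count.
pid=$$   # substitute the PID you are interested in, e.g. 12483
fd_count=$(ls /proc/"$pid"/fd | wc -l)
echo "process $pid has $fd_count open file descriptors"
```

Every process holds at least stdin, stdout, and stderr, so the count should be at least 3 for any live process you can inspect.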
 
thevpn.guru (Author) commented:
Alright, check this out:

Total Usage

[root@localhost mail]# cat /proc/sys/fs/file-nr
2640    0       206068


[root@localhost mail]# lsof | grep -c '12483'
1313

[root@localhost mail]# ls -l  /proc/12483/fd/ | grep -c ''
1301

The second and third commands clearly indicate the number of file handles associated with process 12483. In the case of lsof we get 12 more handles; do those refer to the libraries it has loaded? As for the ls -l, I counted the entries in the fd directory belonging to process 12483.
So can I say with confidence that the number of file handles used by the process and its threads is 1301, NOT COUNTING ANY LIBRARY FILES?
 
fgande commented:
Yes.

If you want to make sure, diff the output you get from listing the contents of the /proc/12483/fd/ directory (the files the symlinks point to) against the output from lsof. The files not listed in /proc/12483/fd/ but listed by lsof should then be the library files (or other objects, for that matter, such as remote connections or pipes).
 
fgande commented:
Sorry, a small correction to my last post: pipes are also listed in the /proc/12483/fd/ directory.
 
thevpn.guru (Author) commented:
The command you gave me to get the per-process file descriptor limit returns the following:

grep "#define __FD_SETSIZE" /usr/include/*.h /usr/include/*/*.h

/usr/include/linux/posix_types.h:#define __FD_SETSIZE   1024

How is it that the maximum is 1024 when my process has 1301 open? Could the default value be overridden in some config file (not in /etc/security/limits.conf)? The process runs as the root user.
 
fgande commented:
That depends on which include files the program was compiled with. If it did not use /usr/include/linux/posix_types.h but some other header file with a larger value for __FD_SETSIZE, it would use whatever limit that header defined. You could use strace on the program to figure out its maximum file descriptor size.
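Not mentioned in the thread, but worth noting as an alternative to strace: on 2.6.24 and later kernels, the resource limits actually in effect for a running process can be read directly from /proc/<pid>/limits. A minimal sketch, again using the current shell as the target process:

```shell
# /proc/<pid>/limits shows the rlimits in effect for a running process;
# the "Max open files" row is RLIMIT_NOFILE (soft and hard values),
# whatever ulimit or the program itself set at startup.
pid=$$   # substitute the PID of the process you are inspecting
grep "Max open files" /proc/"$pid"/limits
```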
 
thevpn.guru (Author) commented:

[root@localhost backups]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 32764
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
 
thevpn.guru (Author) commented:
[root@localhost backups]# grep "#define __FD_SETSIZE" /usr/include/*.h /usr/include/*/*.h
/usr/include/linux/posix_types.h:#define __FD_SETSIZE   1024

Hm... in this command I did search the subdirectories too; same result. Should I search the program's own header files, or strace it?
How would I do that?
 
fgande commented:
strace is a powerful but advanced debugging tool. It will produce huge walls of text, and the line you should be looking for will look something like:

getrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=1024}) = 0

I'm guessing the 1024 will be higher for your program, given its higher file descriptor limit. The program will likely get/set this limit when it starts, so I don't think simply attaching strace to a running process will work.

Guiding you through a complete strace debugging session of an unknown program is too complex a task for me without sitting next to you, but check out http://www.redhat.com/magazine/010aug05/features/strace/ for some pointers on how strace works.
 
thevpn.guru (Author) commented:
Hmm, I have checked it out; it is definitely worth a try, and strace is a nice tool. I will consider this question closed, as it is starting to drift beyond the scope of the initial question. As for your efforts: it has been great working with you, and your Linux knowledge is excellent. Hats off... Red Hats off, lol. You deserve more than the 500 points allocated, but I can't allocate more. Thanks!
 
fgande commented:
Thanks, and good luck with the strace :)
