File Descriptors

Hi!

Can a Unix system ever run out of file descriptors?

The reason I ask is that we have an application that relies heavily on sockets. We have a SQL server running with 500 connections, and a communications application we have written that, if all connections were in use, would need over 700 file descriptors. Our clients are threaded and make at least 3 connections to the SQL server and 3 connections to the comms program. We seem to have problems opening files and accepting sockets in various applications we run, and we are wondering whether we have hit one of the system's limits. We are running a Sun E4500 with 1 GB of memory and 4 x 400 MHz CPUs under Solaris 2.6.

I would be grateful for any advice.

Regards,

Marvin.
checkin asked:

jmcg commented:
A program can run out of file descriptors in two ways on UNIX. The per-process limit on file descriptors was originally 20. In the systems I'm familiar with, that limit now ranges between 64 and 4096, and some variants may have raised it to 64K or larger. The other way a process can fail to open a file is that the system-wide open file table is full. Modern variants of UNIX allocate these entries dynamically, so you can have as many open files across the system as the kernel has virtual memory to track them.
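
You can check what limits a process is actually running under with the standard getrlimit() interface; here is a minimal sketch (RLIMIT_NOFILE is the resource name on Solaris and most other variants):

/* Sketch: print this process's file descriptor limits. */
#include <stdio.h>
#include <sys/time.h>      /* some older systems want this first */
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    /* rlim_cur is the soft limit, rlim_max the hard limit */
    printf("soft fd limit: %ld\n", (long)rl.rlim_cur);
    printf("hard fd limit: %ld\n", (long)rl.rlim_max);
    return 0;
}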

Do you know what value of errno is associated with the problems you are having opening files and accepting sockets?
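
If not, wrapping the failing calls along these lines will tell you (a minimal sketch; checked_accept is just an illustrative name). EMFILE means you hit the per-process descriptor limit; ENFILE means the system-wide file table is full:

/* Sketch: report which limit an accept() failure hit. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int checked_accept(int listen_fd)
{
    /* NULL address arguments: we don't need the peer address here */
    int fd = accept(listen_fd, NULL, NULL);

    if (fd < 0) {
        if (errno == EMFILE)
            fprintf(stderr, "accept: per-process fd limit reached\n");
        else if (errno == ENFILE)
            fprintf(stderr, "accept: system-wide file table full\n");
        else
            fprintf(stderr, "accept: %s\n", strerror(errno));
    }
    return fd;
}

The same two errno values apply to open(2).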

I can't say anything definite about Solaris 2.6, though. There are enough experts around here that I expect one will be along soon with an answer for you.

sereda commented:
The answer is YES: Unix (Solaris 2.6 in particular) can certainly run out of file descriptors.
You can tune the kernel for a heavily loaded system by editing the /etc/system file and putting the following lines there:

* Soft limit on open files per process (default = 64)
set rlim_fd_cur = 4096

* Hard limit on open files per process (default = 1024)
set rlim_fd_max = 8192

There are also many other useful things you can tune there.

You need to restart the system for changes in /etc/system to take effect.
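
Note that rlim_fd_cur only sets the default soft limit. A process that needs more descriptors can also raise its own soft limit up to the hard limit at run time with the standard setrlimit() call; a minimal sketch (raise_fd_limit is just an illustrative name):

/* Sketch: raise this process's soft fd limit to the hard limit.
 * Only the soft limit changes, so no special privilege is needed
 * as long as we stay at or below the hard limit. */
#include <sys/time.h>
#include <sys/resource.h>

int raise_fd_limit(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
        return -1;
    rl.rlim_cur = rl.rlim_max;   /* soft = hard */
    return setrlimit(RLIMIT_NOFILE, &rl);
}

One caveat: raising the limit past 1024 can interact with select(), whose fd_set size defaults to 1024 (FD_SETSIZE), so test your applications after tuning.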


checkin (Author) commented:
You mention there are many other ways to tune the system using /etc/system. Where can I find this information?

Marvin.
sereda commented:
Look at

   man -s 4 system

and at the Sun www site (or the Sun developers site; I don't remember its URL).

If you need something concrete, ask.
checkin (Author) commented:
Thanks!