Can a Unix system ever run out of file descriptors?
The reason for asking is that we have an application that relies heavily on sockets. We have a SQL server running with 500 connections, and a communications application we have written that would use over 700 file descriptors if all of its connections were in use. Our clients that connect are threaded, with at least 3 connections to the SQL server and 3 connections to the comms program. We seem to be getting problems opening files and accepting sockets in various applications we run, and we are wondering if we have hit one of our system's limits. We are running a Sun E4500 with 1 GB of memory and 4 x 400 MHz CPUs, on Solaris 2.6.
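In case it helps frame the question, here is a small check we could run in each application: a minimal sketch that prints the per-process file descriptor limits via getrlimit(RLIMIT_NOFILE). This is an assumption on our part that the per-process limit is what we are hitting; system-wide tunables would be a separate check.

```c
/*
 * Sketch: print the soft and hard per-process file descriptor
 * limits for the current process using getrlimit().
 * Compile with, e.g.:  cc -o fdlimits fdlimits.c
 * Note: this only shows the per-process limit, not any
 * system-wide ceiling configured elsewhere.
 */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }

    printf("soft limit (current): %ld\n", (long)rl.rlim_cur);
    printf("hard limit (maximum): %ld\n", (long)rl.rlim_max);
    return 0;
}
```

If the soft limit reported here is well below the 700+ descriptors the comms application can need at peak, that would at least explain the failures in that process.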
I would be grateful for any advice.