  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 340

What's the recommended max # of open files in my situation?

I run a very busy web server hosting 4000+ domains, of which about 10% are very active. The tech specs:

1. RH 7.1 + Apache 1.3
2. Dual 1 GHz PIII
3. 4 GB ECC RAM
4. Dual 74 GB SCSI

May I know what's the recommended max # of open files (file-max) in MY SITUATION? In addition, should I increase the max # of open inodes (inode-max)? Please point me to the reference you used to formulate your conclusion. Please don't preach about Apache tuning or anything off topic. Thank you.



Asked by: topwiz

1 Solution
 
jlevieCommented:
The "recommended max # of open files" is whatever is necessary for your server to operate without encountering application failures as a result of too many open files. If you aren't having problem in that respect now the current number is adequate.

Fiddling with file-max or inode-max isn't something you do to improve performance. On the contrary, gratuitously increasing those values far above what the server really needs can hurt performance, because the kernel allocates more memory for the corresponding data structures.
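For reference, on a 2.4-era kernel such as Red Hat 7.1's, you can inspect the current state and, if you genuinely hit the limit, raise it like this (the value 65536 is purely an illustration, not a recommendation for your box):

# allocated handles, allocated-but-free handles, and the limit
cat /proc/sys/fs/file-nr
cat /proc/sys/fs/file-max

# raise the limit at runtime (example value only)
sysctl -w fs.file-max=65536
# add "fs.file-max = 65536" to /etc/sysctl.conf to make it persistent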
 
Alf666Commented:
For the inodes part, I suggest using another filesystem such as reiserfs, where you don't have to worry about inode limits.
You should be careful, though: avoid putting too many files in a single directory.
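If you stay on ext2, note that its inode count is fixed when the filesystem is created, so it's worth checking whether inodes (rather than disk space) are running low. A rough sketch; the device name below is illustrative only:

# show inode usage per filesystem
df -i

# ext2: lower bytes-per-inode at creation time if you need more inodes
# (illustrative device; this destroys any existing data on it)
mke2fs -i 4096 /dev/sdb1

# reiserfs allocates inodes dynamically, as suggested above
mkreiserfs /dev/sdb1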
 
DonConsolioCommented:
For our servers I wrote a Perl script that periodically checks open file counts (among other things) and sends a report when something looks wrong.

Basically it does this:

...
# /proc/sys/fs/file-nr (2.4 kernels): allocated handles,
# allocated-but-unused handles, and the file-max limit
open PSTAT, '/proc/sys/fs/file-nr' or warn $!;
my ($filealloc, $filefree, $filemax) = split(/\s+/, <PSTAT>);
close PSTAT;

# handles actually in use, as a percentage of file-max
my $PCTFULL = int(($filealloc - $filefree) * 100 / $filemax);
...
If PCTFULL reaches 95%, the script mails the admin mailbox so that file-max can be raised in the init scripts as soon as possible.
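For completeness, a minimal sketch of the mail half of such a script, assuming a local sendmail at /usr/sbin/sendmail; the address and wording are placeholders, not from the original script:

# alert the admin when usage crosses the threshold (sketch)
if ($PCTFULL >= 95) {
    open MAIL, '|/usr/sbin/sendmail -t' or warn $!;
    print MAIL "To: admin\@example.com\n";
    print MAIL "Subject: file handles at ${PCTFULL}% of file-max\n\n";
    print MAIL "Consider raising fs.file-max in the init scripts.\n";
    close MAIL;
}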

 
pjedmondCommented:
Just a thought, but I suspect the spec you have given us will not be the bottleneck. The bottleneck on this setup will most likely be the Ethernet connection. This applies if the majority of files being accessed are over about 5-10K.

The next area I'd expect the bottleneck to be, if you upgrade to a 10/100/1000 Ethernet card, is the SCSI subsystem.

Issues here depend on spindle speed, and hence the access/latency timings of the hard drives. A RAID array (if used) could also vastly increase access speed, especially a fairly high-performance setup with a caching RAID controller.

Therefore, the first thing I'd be monitoring is Ethernet bandwidth usage. If it gets above about 30% of the rated bandwidth, i.e. 30 Mb/s on a 100 Mb/s link, I would start to get concerned: above that level, collisions start to cause problems with service. However, I expect the bandwidth connecting this server to the Internet may be even more limiting than that.
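One rough way to watch this (a sketch only; the interface name eth0, the 10-second sample, and the 100 Mb link speed are assumptions, and the counter layout is the standard 2.4 /proc/net/dev format):

#!/usr/bin/perl -w
use strict;

# sum eth0 rx+tx byte counters from /proc/net/dev
# (tx bytes is the 9th counter after the interface name)
sub eth0_bytes {
    open NETDEV, '/proc/net/dev' or die $!;
    my $bytes = 0;
    while (<NETDEV>) {
        next unless /^\s*eth0:\s*(.*)/;
        my @f = split /\s+/, $1;
        $bytes = $f[0] + $f[8];
    }
    close NETDEV;
    return $bytes;
}

my $before = eth0_bytes();
sleep 10;
my $rate = (eth0_bytes() - $before) / 10;    # bytes/sec over the sample

# a 100 Mb/s link moves 12,500,000 bytes/sec; flag the 30% rule of thumb
my $pct = $rate / 12_500_000 * 100;
printf "eth0 at %.1f%% of a 100Mb link\n", $pct;
print "WARNING: above the 30% threshold\n" if $pct > 30;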

HTH:)
 
pjedmondCommented:
Now you know where to look for the first sign of any problems :)..... Use ratios to scale your bandwidth figure up to 30% of what's available, and I'd guess the other factors you've mentioned monitoring tools for will scale up roughly in proportion. Use that as the threshold for deciding when to start taking action :)
