Ideal max files/directories per directory

Hi guys,

I'm using Red Hat Enterprise and would like to know what the ideal maximum number of files/directories per directory is. Currently we have a directory with over 7000 directories in it. For various reasons we now have to restructure this directory, and an important factor in deciding how to do this will be the ideal number of files/directories per directory.

A few years ago I was doing a similar project but on a Sun Solaris system, and the sys admin there told me the ideal max inodes per directory was about 200.

If anyone knows what the figure would be for Red Hat Enterprise, your advice would be very much appreciated.

jlevie Commented:
It probably doesn't matter much that the access is mostly read, and the total data size shouldn't matter either, as far as directory size is concerned. What matters most is the number of file opens per unit of time. A good hash scheme reduces the time required to locate a file for reading and, as a bonus, reduces the directory overhead associated with file creation.
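
A minimal sketch of what such a hash scheme might look like (the root path, the use of MD5, and the two-character bucket depth are assumptions made for illustration, not anything specified in this thread):

```python
import hashlib
import os

ROOT = "/srv/data"  # hypothetical root of the hashed layout


def bucket_path(name: str) -> str:
    """Map a file name to a two-level hashed directory under ROOT."""
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    # First two hex chars -> 256 top-level buckets, next two -> 256
    # sub-buckets, so each leaf directory stays small even as the
    # total number of files grows.
    return os.path.join(ROOT, digest[:2], digest[2:4], name)


if __name__ == "__main__":
    # Locating a file then only means scanning one small bucket
    # instead of one huge directory.
    print(bucket_path("example-file.txt"))
```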

> - our sys admin has said the 7000 dirs in one dir may be responsible for crashing our backup software

I don't know what you are using for backups, but I've never had any problems with Solaris ufsdump or Legato Networker with far larger directories than that. I discourage people from creating really big directories, but sometimes they still do ('til I find out about it).
I don't know where the so-called sys admin got his information from, but there is no such limit.

I've seen filesystems perform well with a lot more files/directories than that in a single directory.
I don't know if there is an "ideal number" of nodes within a directory for Linux or Solaris, but 7000 in one directory seems like a lot. I suspect that the "ideal number" is largely determined by the way those nodes are used. An application that only accesses the directory at startup wouldn't suffer as much as one that is continuously accessing the directory. For the latter kind of app, breaking the single large directory into some sort of hashed structure would be a big help.
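
If you do go with a hashed layout, a one-off migration could look roughly like the sketch below. The paths (/data/flat, /data/hashed), the single level of 256 buckets, and the choice of MD5 are assumptions made up for the example, not anything from the thread:

```python
import hashlib
import os
import shutil

FLAT_DIR = "/data/flat"       # hypothetical: the existing directory with ~7000 entries
HASHED_ROOT = "/data/hashed"  # hypothetical: the new hashed tree


def migrate() -> None:
    """Move every entry of FLAT_DIR into a 256-bucket hashed layout."""
    for name in os.listdir(FLAT_DIR):
        # Two hex characters of the name's MD5 -> one of 256 buckets.
        bucket = hashlib.md5(name.encode("utf-8")).hexdigest()[:2]
        dest_dir = os.path.join(HASHED_ROOT, bucket)
        os.makedirs(dest_dir, exist_ok=True)
        shutil.move(os.path.join(FLAT_DIR, name), os.path.join(dest_dir, name))


if __name__ == "__main__":
    migrate()
```

With a single level of 256 buckets, 7000 entries averages out to roughly 27 per directory, comfortably below the ~200 rule of thumb quoted above; a second level only becomes worthwhile if the tree keeps growing.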
rangi500 (Author) Commented:
Thanks for your feedback guys.

pYrania, apparently there was an 'optimal' number of files per directory - perhaps a 'most efficient' number?

jlevie, yes we are planning to create a better structure, possibly hashed, but one of the main things we need to know to make that decision is how many directories to put in each directory. If 7000 sounds like a lot, what sounds like "not a lot" - 1000? 100?

Some facts that may or may not be relevant:

   - the folders are read from much more than they are written to
   - the folders make up about 500GB of data
   - our sys admin has said the 7000 dirs in one dir may be responsible for crashing our backup software

