more than 32,000 dirs/files possible?

hi all,

Does someone know if there is a way to have more than 32xxx (signed int) sub-directories in a dir?

freesource commented:
This is a very interesting question.  

Assuming you are talking about the ext2 filesystem, there could be 32,000 two-byte files in a directory if there were enough available inodes. Run dumpe2fs on the partition you want to use and check how many free inodes there are.
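If you just want the inode counts without running dumpe2fs against the raw device, the POSIX statvfs() call reports the same totals and needs no special privileges. A minimal Python sketch (checking the root filesystem here purely as an example):

```python
import os

# os.statvfs() wraps statvfs(3): f_files is the total number of
# inodes on the filesystem, f_ffree the number still free.
st = os.statvfs("/")
print(f"inodes: {st.f_ffree} free of {st.f_files} total")
```

Note that dumpe2fs additionally shows per-block-group detail and on-disk structures, which statvfs() cannot give you.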

Because of the design of the ext2 filesystem, the smallest amount of space a directory can occupy on a standard Linux system is 1 KB. You could create symlinks instead, but their number would still be bounded by a signed long.

You could hack ext2 with libext2fs, or create a psychotic version of ext2 by hacking it directly. Grab yourself the ext2ed package (an ext2 editor) and take a tour around the filesystem.

Good reading can be found with the ext2ed package:

Here you can find "Analysis of the Ext2fs structure":

Here is lots of good reading plus the article "Design and Implementation of the Second Extended Filesystem."

Why would you want to do that?
Alas jlevie, I get told off for answering questions with questions in techy forums... TUT TUT TUT. But I still usually ask (as you have): WHY on earth do you want to do THAT?

Nick ;-)

And now for something completely different... :)

I began writing this comment stating that more than 32xxx sub-directories in a directory aren't possible, for a very simple reason. Every inode (the part that stores information about the file, such as owner, permissions, etc.) has a link count. Each sub-directory you create has an entry named "..", which refers to its parent directory, so the link count of the parent directory gets incremented by one for each sub-directory it contains.

I verified that mkdir complains if you try to create more than 32xxx sub-dirs, and thought that the link count was probably a signed int, but it is *unsigned* (I checked the man pages and the kernel source). Actually, that makes sense, as negative link counts are, uhmm, not really necessary :). Nevertheless, more directories than a signed int allows are not accepted. Perhaps for backward compatibility with older, less capable filesystems?
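The ".." bookkeeping described above can be observed from user space: a directory's link count is 2 (for "." and its own entry in the parent) plus one per immediate sub-directory. A small Python sketch, assuming a filesystem that maintains traditional directory link counts (ext2 does; some modern filesystems such as btrfs report 1 instead):

```python
import os
import tempfile

def expected_nlink(path):
    # 2 for "." and the directory's entry in its parent,
    # plus one ".." back-link per immediate sub-directory.
    subdirs = [e for e in os.scandir(path) if e.is_dir(follow_symlinks=False)]
    return 2 + len(subdirs)

with tempfile.TemporaryDirectory() as d:
    for i in range(5):
        os.mkdir(os.path.join(d, f"sub{i}"))
    print(expected_nlink(d))    # 7
    print(os.stat(d).st_nlink)  # also 7 on ext2-style filesystems
```

On ext2 the kernel refuses the mkdir that would push this count past the on-disk limit, which is exactly the EMLINK error the comment above describes.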

On the other hand, access to a directory with that many entries would be very slow. It's better to grow your directory tree not in width but in depth, as the terminfo database does. Look under /usr/share/terminfo (/usr/lib/terminfo?). There are about 35 sub-directories with one-letter (or one-digit) names, each containing the files that begin with that character. Access is faster that way and just as easily managed.
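The terminfo-style layout is easy to emulate yourself: key each entry by its first character. A minimal sketch (the helper names here are made up for illustration):

```python
import os

def fanout_path(root, name):
    # terminfo-style fan-out: "xterm" lives under root/x/xterm, so no
    # single directory holds more than a fraction of the entries.
    return os.path.join(root, name[0], name)

def fanout_create(root, name):
    # Create an empty file at its fanned-out location,
    # making the one-character bucket directory on demand.
    path = fanout_path(root, name)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    open(path, "w").close()
    return path

print(fanout_path("/usr/share/terminfo", "xterm"))  # /usr/share/terminfo/x/xterm
```

With a roughly uniform first-character distribution, each bucket holds only about 1/35th of the entries, keeping every directory scan short.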
Well I can think of several reasons why he wouldn't want to have that many nodes in one dir, and I can't think of any reason why it would be mandatory. When I asked why he wanted to do it, it was with the intent of suggesting a better way to accomplish his goal.

Actually the link for "Analysis of the Ext2fs structure" is this:
I got more than 110,000 files in my mail queue after a spammer attack. Directories are limited by the number of inodes in the filesystem.
Sounds to me like you don't need more inodes, instead you need more protection for your mail server. If they ran it out of inodes, more would have just taken a bit longer.

Was it "spam" or a DoS attack? That many messages sounds more like a DoS than a spammer.