techvillage

asked on

What is the maximum number of files you can have in a directory/volume?

Hello - The maximum number of files you can have in an NTFS volume is 4,294,967,295

What is the maximum for Linux?

Many thanks,

Richard
xDamox

Hi,

That purely depends on the size of the files.
> That purely depends on the size of the files.
Not at all.

You may have as many files in a directory as you want. But note that the more files in a directory, the slower it operates.
Also, you will not exceed the inode limit put on the filesystem; to check the limit: df -i
techvillage

ASKER

Thanks chaps,

xDamox - My opening line, "The maximum number of files you can have in an NTFS volume is 4,294,967,295", was to demonstrate that I would like to know the theoretical limit on the number of files in a directory that Linux can handle. I'm a Windows SA, and what I know about Linux I could write in the space of a postage stamp.

Ravenpl - The Linux file server I have FTP access to is at a managed hosting company. I have SSH access, and so ran the command df -i, but I don't understand the results, which I paste below:

Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/sda7             513024   84052  428972   17% /
/dev/sdb1            58490880 10911300 47579580   19% /homepages
/dev/null            2399040    4188 2394852    1% /tmp

Many thx.

On the root filesystem you can create a total of 513024 files/directories/etc. There are already 84052 created. You can create 428972 more files (even all in the same directory), but you will not get over the limit of 513024 for the whole filesystem.
The limit is set at filesystem creation time and can't be tuned later.
Also, for reiserfs/reiser4 there is no such limit; however, it's ext3 that is the standard.
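
(A sketch of how that inode limit gets chosen at creation time, assuming ext3 and a hypothetical device /dev/sdb1 - note that mke2fs destroys whatever is already on the device:)

  # one inode per 4 KiB of space instead of the default 8 KiB
  mke2fs -j -i 4096 /dev/sdb1
  # or request an absolute number of inodes directly
  mke2fs -j -N 20000000 /dev/sdb1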
Depends on the kind of filesystem.

A few weeks ago we ran up against its limit on a webserver with ext3 that creates static HTML files from a publishing CMS.
32,000 files in a directory if it is an ext2 or ext3 filesystem.
ravenpl, sorry, but that's bollocks.

The number of files is directly related to the number of inodes, which is decided at the time of format. Obviously if you have huge files you eat up the disc space way before you can reach the maximum number of files, but you get my point theory-wise. NTFS is frankly in the dark ages in comparison to modern *nix filesystems. Whilst yes, ext filesystems will suffer performance drops with large numbers of files in the same directory (large as in LARGE), Reiser, purely by its design, does not have the same problem. That being said, NTFS will start coughing and wheezing long before Ext even starts breaking a sweat. Ext is not "standard", it's just the most popular; there's a difference.

If you require a filesystem which exceeds Ext/Reiser etc., think about Sun's ZFS or Novell's NSS. In my work environment we have a number of TB+ NSS volumes containing countless MILLIONS of small files created by an army of admin staff over a 15yr+ period, yet the filesystem response remains sharp as a razor.
The ext2 inode specification allows for over 100 trillion files to reside in a single directory; however, because of the current linked-list directory implementation, only about 10-15 thousand files can realistically be stored in a single directory.
ext3: http://lwn.net/Articles/187321/
http://www.forensics.nl/filesystems
http://en.wikipedia.org/wiki/Ext3
The maximum number of inodes (and hence the maximum number of files and directories) is set when the file system is created. If V is the volume size in bytes, then the default number of inodes is given by V/2^13, and the minimum by V/2^23. The default was deemed sufficient for most applications.
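
(Worked through against the df -i output above: /homepages shows 58,490,880 inodes, and at the default of one inode per 2^13 = 8192 bytes that corresponds to a volume of roughly 58,490,880 x 8192 ≈ 479 GB - assuming the filesystem was created with the defaults.)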
alextoft: stop parroting!
I haven't said anything other than you (OK, ext3 is not a standard - hence RH allows you to choose between ext2 and ext3?).
The Q was about the limit; remember that one may create files with zero size - what's the limit then?
I already pointed out that the larger the number of files in a directory, the slower it operates. Also I stated that reiser has no such limit. (Forgot about faster operations, but tests show that reiser was overrated.)
Thanks all.

The reason I ask the question: I have a series of IP security cameras. Some of them are detecting motion up to 18 hours a day. Each camera sends one image a second while motion is detected. Each camera uploads via FTP into its own dedicated folder.

One camera, which detects motion constantly over 18 hour periods (60x60x18), produces 64,800 image files, each of which is 15kb.

Each image needs to be kept for a 5 day period, after which it will be deleted.

So, I'm trying to find a file system that will comfortably hold and list 250,000 image files of 15kb, in a folder.

Some of your comments were very technical - please remember I'm not a Linux person; I just need a simple-language comparison of all the filesystems.

From your comments above, this is my understanding:-

ext2 - Capable of holding over 100 trillion files in a folder; however, the index to list them can only comfortably handle 10-15,000... Why have a system that can hold so many files but not be able to list them?

ext3 - 32,000 files per directory

ReiserFS - Is much faster than ext2 and ext3 with smaller files.

Knowing all the above - which file system is best? I know NTFS, I just don't know any of the Linux file systems.

Many thanks.
ext2 and ext3 are no different in this matter. It's just that at 32K files you are in real performance trouble.
Reiser is really faster if you mean to keep a large number of files in one directory (the filesize does not matter). Surely for 10K and more.
I use a reiser file system with a similar setup (IP cameras), although not on such a large scale - works quite well. I had run it on ext2 previously, and when the file count went above (an unknown large number, assume 10,000ish) I could not perform any mass operations on them (rm, ls *.jpg etc).

When that happened I had to use a find string to delete them one by one.
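
(A sketch of the kind of find commands meant here - the path /home/ftp/cam1 is made up, adjust to your own layout:)

  # remove matching files in batches, avoiding the shell glob that makes
  # "rm *.jpg" fail with "argument list too long" in huge directories
  find /home/ftp/cam1 -maxdepth 1 -name '*.jpg' -print0 | xargs -0 rm -f

  # the same idea handles the 5-day retention mentioned above:
  # delete only images older than (roughly) 5 days
  find /home/ftp/cam1 -maxdepth 1 -name '*.jpg' -mtime +5 -print0 | xargs -0 rm -f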

go with reiser.

Arty K
techvillage, if you have a choice of underlying filesystem for your FTP server, I recommend you use XFS.
Read here for the specification: http://oss.sgi.com/projects/xfs/

It uses 64 bits for the inode number, so you may have up to 2^64 files and directories per filesystem. That's 18,446,744,073,709,551,616 - enough for you?
There are also some other advantages (on concurrent writes).
Oops, not 2^64 but 2^63, or 9,223,372,036,854,775,808 files are OK for XFS :-)
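
(A minimal sketch of creating and mounting an XFS filesystem, again assuming the hypothetical device /dev/sdb1 - mkfs.xfs wipes it:)

  mkfs.xfs -f /dev/sdb1
  mount -t xfs /dev/sdb1 /homepages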
Thanks all - is there a command I can run over SSH that can interrogate the Linux server to find out what the FS is?
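
(For reference, a few standard commands that report the filesystem type, any of which can be run over SSH:)

  df -T              # same as df, plus a "Type" column per filesystem
  mount              # lists mounts as: device on mountpoint type fstype (options)
  cat /proc/mounts   # the same information straight from the kernel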
ASKER CERTIFIED SOLUTION
ravenpl
Apologies for the delay in coming back. I have awarded the points to ravenpl. Thx ravenpl.