Optimal number of files per folder on a Linux platform?

I need to store about 120,000 pictures on the server for a shopping cart application. Obviously they are not all going into one folder, so I am considering two scenarios. Which would be more efficient, in your opinion?
1. Create 36 subfolders: average = 3,100, max = 22,000, min = 106 images per folder.
   The ten largest folders would hold 21,100, 10,612, 6,741, 4,919, 4,496, 4,268, 4,267, 3,756, 3,736, and 3,489 images.

2. Create 1,100 subfolders: average = 100, max = 13,000, min = 1 image per folder.
   The ten largest folders would hold 13,615, 5,280, 3,834, 3,567, 3,195, 2,262, 2,231, 2,127, 1,726, and 1,031 images.

In addition to the images above, another 120,000 images (the thumbnails) will be uploaded to a different folder with the same subfolder structure. Image and thumbnail locations will be stored in a MySQL database. Which scenario makes more sense? I want maximum performance, if there is any difference to be had. In the near future another 100k-200k images will need to be stored.
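Not part of the question as asked, but for context: a common way to avoid the skewed folder sizes described above (max = 22,000 vs min = 106) is to bucket images by a hash of the filename rather than by category. A minimal Python sketch, with a hypothetical root path:

```python
import hashlib
import os

def bucket_path(root, filename, levels=2):
    """Return a two-level subfolder path derived from the filename's MD5,
    e.g. root/a3/f1/photo.jpg, spreading files evenly over 256*256 buckets."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * 2:i * 2 + 2] for i in range(levels)]
    return os.path.join(root, *parts, filename)

# The same filename always maps to the same bucket, so the path can be
# recomputed on the fly, or stored in MySQL as the question proposes.
print(bucket_path("/var/www/images", "photo.jpg"))  # "/var/www/images" is made up
```

With ~120,000 files and `levels=2` that averages under two files per bucket; `levels=1` (256 buckets, ~470 files each) would also stay well under the 5,000-entry guideline mentioned later in this thread.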

Thank you

I don't know the answer offhand, but for anybody here to offer useful information, you'll need to specify which file system you are using.

My guess would be that on most filesystems there won't be much difference between your two schemes, because they're both two-level lookups, and that putting all 120,000 files in one directory would actually be slightly faster. If the filesystem is something really old-fashioned, like FAT32, where the directory entries aren't indexed, then I'd go for the 36-subfolder scheme.
Coffinated (Author) commented:

I received this response from my web host
"...the way the linux filesystem works, directories with more files are much slower. Your best bet is to try to keep directories under 5000 entries; even then you will see significant performance degradation with some utilities. The absolute maximum of the filesystem is 31998 entries so keep that in mind also..."

Scenario 1 would have 3 folders with more than 5k files; Scenario 2 would have only 1.

The filesystem is ext3. I don't understand why they recommend keeping it under 5k files per folder.
That's probably true of ext3, then. It does have an indexed-directory option (`dir_index`, visible in the output of `tune2fs -l`), but your host may not have it enabled. In that case I suppose the 36-subdirectory scheme would be best, because the average directory size is smaller. (For what it's worth, the 31,998 figure your host quoted is ext3's limit on subdirectories per directory, a consequence of its 32,000 hard-link cap; it doesn't limit the number of regular files.) You could experiment: renaming files into different directories should be possible if you can do it automatically.
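"Renaming files into different directories" can indeed be automated. A minimal sketch that moves every file in a flat directory into hash-derived subfolders (the example path is made up):

```python
import hashlib
import os

def redistribute(src_dir, levels=1):
    """Move each regular file in src_dir into a subdirectory named after the
    leading hex digits of the filename's MD5.  os.rename is atomic within a
    filesystem, so a crash mid-run leaves every file in exactly one place."""
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        if not os.path.isfile(path):
            continue  # skip subdirectories, including ones we just created
        digest = hashlib.md5(name.encode("utf-8")).hexdigest()
        subdir = os.path.join(src_dir,
                              *[digest[i * 2:i * 2 + 2] for i in range(levels)])
        os.makedirs(subdir, exist_ok=True)
        os.rename(path, os.path.join(subdir, name))

# Usage (hypothetical path): redistribute("/var/www/images")
```

Run it against a copy of the tree first, and remember that any paths stored in the MySQL table would need the same transformation applied.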

Or maybe somebody here has some real data on ext3fs performance.
