thummper

asked on

How to remove limit on number of files within a folder on a usb external drive

Hi, this question actually forked out of another question. If you like you can see the original at:
https://www.experts-exchange.com/questions/23635590/How-to-set-permissions-for-a-mounted-usb-drive.html?anchorAnswerId=22199216#a22199216

but I'll copy all the pertinent info here.

setup:
I have a WD Passport-type USB external hard drive attached, and I want to move roughly 300,000 files to it, which I will then take to a Windows machine and move off. Each file averages about 200 KB, with the largest I've seen at about 1 MB. The USB drive holds 250 GB.

problem:
When I try to move files over, it copies almost exactly 19,550 files (3.8 GB) EVERY time and then gives me an error that the disk is full. This is consistent even when pulling from a different source of files, so it's not a bad-file issue. Originally the filesystem (default straight from the manufacturer) was listed as msdos. On a suggestion when the issue came up on the other post, I did the following:

unmounted, and set:
/dev/sdc1        /sdc1            auto        noauto,user,umask=0      0   0

in fstab. Ran:
mkdosfs /dev/sdc1 -F 32
mount  /dev/sdc1  /disk

and it told me I have to specify the file system, so:
mount  -t  vfat /dev/sdc1   /disk
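
(For reference, a quick sanity check after the reformat, assuming the same /dev/sdc1 device and /disk mount point as above, would be something like:
blkid /dev/sdc1        # should now report TYPE="vfat"
mount -t vfat /dev/sdc1 /disk
df -h /disk            # confirm the full 250 GB is visible
)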

I just ran another test and got the same result: 19,546 files (3.8 GB) and it thinks the disk is full.
Duncan Roe

Looks like even VFAT must have a 4 GB limit on directory size. It is quite old (it came with Windows 95). You might have to write a script that creates a new folder every 15,000 files or so. Or 3.8 GB may be the limit of a VFAT file system. You could make 4 partitions on the disk, which would give you 4 times the storage - not the entire disk, but would it be enough? Possibly with secondary partitions as well you could span the entire disk.
The "mkdosfs -F 32" command has formatted the disk as FAT32, not the old FAT16.

These are the known limits to the FAT32 file system:
http://ask-leo.com/is_there_a_limit_to_what_a_single_folder_or_directory_can_hold.html
(although MS declares that the maximum disk size is 8 terabytes, not 2: http://support.microsoft.com/kb/184006)

Accordingly, the USB drive should be capable of holding more than 65,000 files in one folder, not 19,000. Are you sure these files are copied over one by one, on a single-file basis, and not archived in some sort of structure that may appear as one file to the OS?

If so, then maybe this is a restriction within the Linux support for FAT32 read-write access, although that sounds strange to me.

If you don't need to access the USB disk from Windows, one easy workaround would be to reformat it as Ext2, Ext3 or Reiser. And even if you need to have access to those files from Windows, you might consider installing the ext2fs file system driver, which will grant you this access: http://www.fs-driver.org
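
If you go that route, the reformat would look roughly like this (same /dev/sdc1 device and /disk mount point as above; this of course wipes whatever is on the drive):
umount /disk
mkfs.ext3 /dev/sdc1      # or mkfs.ext2 / mkfs.reiserfs
mount -t ext3 /dev/sdc1 /disk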
SOLUTION
proglot
ASKER CERTIFIED SOLUTION
thummper

ASKER

- Files are individual, not part of some larger structure.
- ext won't work because the point is to move them over to a Windows machine
- 65,000 wouldn't be enough either; I need to put in approx. 300,000
- willing to try NTFS, but man mkdosfs doesn't mention it, so not sure how to do that. I'd rather avoid installing a file system driver, manager, whatever if I can avoid it; Linux is kinda new to me and it seems like opening up a new Pandora's box to close the one I already opened. So, if there's a way to do NTFS, or any other file system with a higher per-folder limit, I'd like to hear it.

We have worked out a solution on the other side to accommodate unlimited subfolders to pull the files off the drive, so for now I am going to split them into folders with 15,000 files in each, but it would really help to have a solution to put them all in one (this is an ongoing process: drop 300k files on, move them over, and the next day get the next batch).

SOLUTION
Also, there is no way to write to an NTFS filesystem without installing the ntfs-3g driver. mkdosfs and the other "native" Linux tools won't create NTFS, and the kernel's built-in NTFS module, as duncan_roe mentioned above, cannot safely write to NTFS.

There is nothing risky about it, though; you can just install it with yum with the command I gave you above. The process is totally automated.
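Roughly like this (package names can differ between distributions, and /dev/sdc1 and /disk are taken from your earlier posts; reformatting erases the drive):
yum install ntfs-3g ntfsprogs
umount /disk
mkfs.ntfs -f /dev/sdc1           # -f = quick format, skips zeroing the whole disk
mount -t ntfs-3g /dev/sdc1 /disk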
You can also work around it using a combination of the tar, gzip and split commands. You'll need to find a way to tar the files you want (possibly on the fly, so you don't need disk space), then gzip it and pipe the gzipped data on the fly to split, which will create 2 GB file chunks.

tar -czf - /path/to/files | split -b 2000m - chunk_

or something similar

and then reassemble the whole thing later on a Windows machine, where NTFS can handle big files and a big number of files.
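
For example, assuming the chunk_ prefix from the command above, the pieces can be rejoined and unpacked on a Linux box with:
cat chunk_* | tar -xzf -
(or concatenated first on Windows with copy /b and then extracted with an archiver that understands .tar.gz).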
Thanks guys. Sorry I had to go absent for a bit. This has been a HUGE help.