Number of files affecting disk speed / system performance

Basically, the problem is this:
I have massive multi-terabyte disks on which I store VMs, big backup files, and replicated backup files. The problem is that these disks get slow, REALLY slow, after about six months.

I've narrowed it down: any partition where I keep the replicated backup files gets crazy slow. The replication is done by a third-party program that copies data to any destination I like using a single instance store. As far as I can tell, this creates a massive number of objects on the file system, and I think those massive numbers of objects are what slows the file system's response down.

Here is an example:
1. I replicate (back up) a directory containing 1,000 files.
2. The destination gets 1,000 files, the same as the original.
3. The next day 100 files change and nothing is added or removed (keeping it simple). The replication program creates a new directory on the destination NTFS file system and uses the single instance store to populate another 1,000 files; of those 1,000, only 100 contain new data, because only they changed.
4. This repeats about 14 times as the system keeps daily, weekly, monthly, quarterly and yearly backups, so I end up with 14 x 1,000 = 14,000 browseable files for 1,000 backed-up files. (A rough sketch of how this kind of single instance store behaves is shown after the list.)
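
To make the mechanism concrete, here is a minimal Python sketch of a hard-link-based single instance store. I don't know how the actual third-party product is implemented, and the function and directory names here are hypothetical, but the consequence is the same either way: every snapshot adds one directory entry and one MFT file record per file, even when almost no new data is written.

import filecmp
import os
import shutil

def replicate_snapshot(source_dir, prev_snapshot, new_snapshot):
    """Create a new snapshot of source_dir (hypothetical sketch).
    Files unchanged since the previous snapshot become NTFS hard links
    (no extra data blocks); changed or new files are copied in full.
    Either way, every file gets a fresh directory entry and MFT record."""
    for root, _dirs, files in os.walk(source_dir):
        rel = os.path.relpath(root, source_dir)
        os.makedirs(os.path.join(new_snapshot, rel), exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            prev = os.path.join(prev_snapshot, rel, name)
            dest = os.path.join(new_snapshot, rel, name)
            if os.path.isfile(prev) and filecmp.cmp(src, prev, shallow=False):
                os.link(prev, dest)      # unchanged: hard link to yesterday's copy
            else:
                shutil.copy2(src, dest)  # changed or new: full copy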

Everything I read says the number of files on a partition doesn't matter; Wikipedia says the NTFS limit is about 4 billion files per volume.

Since I use these backup jobs to back up multiple servers to a single destination, the number of files can be astronomical (rough numbers below).
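
To put a rough scale on "astronomical" (all of these figures are hypothetical placeholders, not my actual environment):

# Hypothetical figures, just to show the scale of the object count.
servers = 20              # servers replicating to the same destination volume
files_per_server = 100_000
retention_points = 14     # daily/weekly/monthly/quarterly/yearly snapshots kept

total_records = servers * files_per_server * retention_points
print(f"{total_records:,} file records on the destination volume")  # 28,000,000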

The results seem undeniable: the file systems become turtle slow.
byteharmony asked:
 
Callandor commented:
Read item 7 about fragmentation of the MFT and how it can be avoided: http://windowsdevcenter.com/pub/a/windows/2005/02/08/NTFS_Hacks.html
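
For anyone who wants to check that setting, the MFT zone reservation the article's item 7 describes was controlled by a registry value on Windows of that era; here is a hypothetical Python check (the value name and meaning are as the article presents them, and newer Windows versions manage the MFT zone differently):

import winreg

# Read the MFT zone reservation (1 = default ~12.5% of the volume
# reserved for the MFT, up to 4 = ~50%), if it has been set.
key_path = r"SYSTEM\CurrentControlSet\Control\FileSystem"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    try:
        value, _type = winreg.QueryValueEx(key, "NtfsMftZoneReservation")
        print("NtfsMftZoneReservation =", value)
    except FileNotFoundError:
        print("NtfsMftZoneReservation not set; default MFT zone in use")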
 
nobus commented:
It is the same problem as putting more and more files into one directory.
I had applications that each created a text file smaller than 1 KB; once you get past about 10,000 files you notice the problem, because every file is listed in the file structure and that listing must be read before a file can be accessed (a quick enumeration test is sketched below the list).
There are only a couple of solutions, IMO:
-1 keep the number of files as low as you can (e.g. delete older backups)
-2 use a faster disk I/O subsystem to speed it up
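
A quick, hypothetical way to sanity-check that enumeration cost (the path is made up; point it at one of the slow snapshot folders):

import os
import time

def time_enumeration(path):
    """Count directory entries and report how long plain enumeration takes."""
    start = time.perf_counter()
    with os.scandir(path) as entries:
        count = sum(1 for _ in entries)
    elapsed = time.perf_counter() - start
    print(f"{count} entries enumerated in {elapsed:.2f} s")

# time_enumeration(r"D:\ReplicatedBackups\daily-2011-06-01")  # hypothetical path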

byteharmony (author) commented:
Wow, you guys are great!

I want to leave this open for a little while longer to see if we get any other information. I will certainly test Callandor's suggestion, since the whole point here is to create a large number of files.

I also have a friend who has worked at MS for years, and he is going to check whether he can make Windows faster for this kind of workload as well.

Thanks,
BK
 
byteharmony (author) commented:
Thank you!
 
nobus commented:
You're welcome!