Basically, the problem is this:
I have large multi-terabyte disks on which I store VMs, big backup files, and replicated backup files. The problem is that these disks get slow, REALLY slow, after about six months.
I've narrowed it down to this: any partition holding the replicated backup files gets extremely slow. The replication is done by a third-party program that copies data to any destination I choose, using a single instance store. I believe this can create a massive number of file-system objects, and I suspect those huge object counts are what slow down the file system's response.
Here is an example:
1. I replicate backup a directory containing 1,000 files.
2. The destination creates 1,000 files, the same as the originals.
3. The next day, 100 files change (nothing added or removed, to keep it simple). The replication program creates a new directory on the destination NTFS file system and uses the single instance store to populate it with another 1,000 files; of those 1,000 files, 100 are new copies because they changed.
4. This continues about 14 times as the system creates daily, weekly, monthly, quarterly, and yearly backups. So the number of files I could browse would be 14 x 1,000 = 14,000 browseable files for 1,000 backed-up files.
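To make the mechanism in the steps above concrete, here is a minimal sketch of a single-instance-store replication pass, assuming (this is an assumption, not something the actual product documents) that unchanged files are hard-linked to the previous restore point while changed files are copied. Every hard link is still a separate directory entry the file system has to track, which is where the 14x object multiplication comes from:

```python
import filecmp
import os
import shutil

def replicate(source: str, previous: str, destination: str) -> None:
    """Create a new restore point in `destination`.

    Hypothetical sketch of one single-instance-store pass: files
    unchanged since `previous` are hard-linked (new directory entry,
    no new data); changed files are copied in full. The real
    replication product may use a different mechanism.
    """
    os.makedirs(destination, exist_ok=True)
    for name in os.listdir(source):
        src = os.path.join(source, name)
        prev = os.path.join(previous, name)
        dst = os.path.join(destination, name)
        if os.path.exists(prev) and filecmp.cmp(src, prev, shallow=False):
            os.link(prev, dst)      # same data, but one more file-system object
        else:
            shutil.copy2(src, dst)  # changed file: new data and a new object
```

So after 14 restore points, a 1,000-file directory has produced roughly 14,000 browseable entries even though most of them point at shared data.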
Everything I read says that the number of files on a partition doesn't matter; Wikipedia says the NTFS limit is about 4 billion files.
Since I use these backup jobs to back up multiple servers to a single destination, the number of files can be astronomical.
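A back-of-the-envelope estimate shows how fast the object count compounds. All the numbers below are illustrative assumptions (the 25-server figure in particular is hypothetical), not measured values:

```python
# Rough estimate of browseable file objects on the destination volume.
files_per_server = 1_000  # files in one backed-up directory tree (from the example)
retention_points = 14     # daily + weekly + monthly + quarterly + yearly
servers = 25              # hypothetical number of servers replicated here

total_entries = files_per_server * retention_points * servers
print(total_entries)
```

Even with these modest per-server numbers, the destination volume ends up tracking hundreds of thousands of directory entries, well below NTFS's theoretical 4-billion-file limit but plenty to make enumeration noticeably slower.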
Yet the results seem undeniable: the file systems become turtle slow.