I'm wondering how SQL Server creates its backups. Does it always put the same table, index, schema, etc. in the same place in the file? Can normal use or maintenance activity (like defragmenting or reindexing) radically alter how the .BAK file gets put together?
The reason I'm asking is that I'm having issues with one large file (100+ GB) and an offsite backup utility. Here's the current process:
1. SQL Server makes a backup to local disk.
2. Backup server copies the .BAK file to its local cache.
3. Backup server compares it to yesterday's backup at the block level and records the differences.
4. Differences are uploaded to offsite storage.
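If I understand step 3 correctly, the comparison is something like the sketch below (the utility's actual algorithm and block size are unknown to me; the 4 KB block size and the function name are assumptions for illustration):

```python
def changed_blocks(old: bytes, new: bytes, block_size: int = 4096) -> list[int]:
    """Return the offsets of fixed-size blocks that differ between two files.

    A minimal sketch of step 3 above: walk both files in lockstep and
    record every block whose bytes don't match byte-for-byte.
    """
    length = max(len(old), len(new))
    changed = []
    for offset in range(0, length, block_size):
        if old[offset:offset + block_size] != new[offset:offset + block_size]:
            changed.append(offset)
    return changed
```

The point is that this kind of comparison is purely positional: a block counts as "changed" if different bytes now sit at that offset, regardless of whether the underlying data changed.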
This system works great with all our other data, but this one large file is causing headaches. At first the daily changes can be calculated and moved offsite in a few hours, but over a couple of months the run time grows until it exceeds 12 hours. At that point we have to reseed the offsite copy and start over, which is a big hassle.
I'm wondering if this has anything to do with the organization of that .BAK file. I know the inside of the .MDF, where the live DB lives, is very dynamic, and a single table can get fragmented across hundreds or thousands of places on the physical disk. If that organization is reflected in the backup, the daily block-level differences could appear huge even if all we did that day was run some maintenance tasks and the amount of stored data didn't change.
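To illustrate my suspicion: if a maintenance job merely rewrites the same 8 KB pages in a different physical order, a positional block-level comparison would flag almost the entire file as changed even though no data was added or modified. A toy demonstration (the 4 KB comparison block size is an assumption, and the page contents are made up):

```python
import random

BLOCK = 4096   # comparison granularity of the backup tool (assumed)
PAGE = 8192    # SQL Server stores data in 8 KB pages

# Build a fake "backup": 100 distinct 8 KB pages of unchanged data.
pages = [bytes([i]) * PAGE for i in range(100)]
before = b"".join(pages)

# Simulate a reindex that rewrites the same pages in a new physical order.
rng = random.Random(42)  # fixed seed so the demo is repeatable
shuffled = pages[:]
rng.shuffle(shuffled)
after = b"".join(shuffled)

# Count comparison blocks that differ even though no data changed.
changed = sum(
    1
    for off in range(0, len(before), BLOCK)
    if before[off:off + BLOCK] != after[off:off + BLOCK]
)
total = len(before) // BLOCK
print(f"{changed} of {total} blocks flagged as changed")
```

With almost every page landing at a new offset, nearly all of the blocks differ, so the delta is close to the full file size despite zero logical change.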