neortho

asked on

SQL Server .BAK files - what's in there exactly?

I'm wondering how SQL Server creates its backups.  Does it always put the same table, index, namespace, etc. in the same place in the file?  Can normal use or maintenance activity (like defragging or reindexing) radically alter how the .BAK file is put together?

The reason I'm asking this is because I'm having some issues with a large file (100+GB) and an offsite backup utility.  Here's the current process:
1.  SQL Server makes a backup to local disk.
2.  Backup server copies the .BAK file to its local cache
3.  Backup server compares it to yesterday's backup at block level and records the differences
4.  Differences are uploaded to offsite storage

This system works great with all our other data, but this one large file is causing headaches.  At first it can calculate the daily changes and move them offsite in a few hours, but over a couple of months it takes longer and longer until it's over 12 hours.  Then we have to start all over, which is a big hassle.

I'm wondering if it has anything to do with the organization of that .BAK file.  I know that the inside of the .MDF where the live DB lives is very dynamic and a single table can get fragmented over hundreds or thousands of places on the physical disk.  If that organization is reflected in the backup, the daily changes could appear to be huge, even if all we did that day was run some maintenance tasks and the amount of data stored remained unchanged.
Gideon7

You are correct that the MSSQL .BAK format works poorly with block-level backup schemes.  This is why the Volume Shadow Copy Service (VSS) on W2K3/W2K8 lets SQL Server supply its own native VSS writer (the SQL Writer service) for backups.

Since you are using a third-party backup tool, the best approach is probably to avoid shipping a full 100 GB backup every night, and instead take daily transaction log backups, with full backups at a longer interval.

Example of T-SQL to back up the transaction log:

BACKUP LOG MyDatabase TO DISK = 'E:\MSSQL\BACKUPS\MyDatabase_20081010.trn'
WITH INIT, SKIP, NAME='MyDatabase_20081010.trn',
DESCRIPTION='Backup of log for MyDatabase on 2008/10/10'

Note that it is the log backup itself that truncates the inactive portion of the transaction log; a full backup by itself does not.  This also requires the database to be in the FULL (or BULK_LOGGED) recovery model — under SIMPLE recovery the log is truncated automatically and cannot be backed up.
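For the less frequent full backup, a command along these lines should work (the database name, file name, and path are placeholders, mirroring the log example above):

BACKUP DATABASE MyDatabase TO DISK = 'E:\MSSQL\BACKUPS\MyDatabase_20081012.bak'
WITH INIT, SKIP, NAME='MyDatabase_20081012.bak',
DESCRIPTION='Full backup of MyDatabase on 2008/10/12'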

To restore, first restore the full backup WITH NORECOVERY, then apply each log backup in order, oldest first:
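The initial restore of the full backup would look something like this (the .bak file name here is a placeholder):

RESTORE DATABASE MyDatabase
FROM DISK='E:\MSSQL\BACKUPS\MyDatabase_20081008.bak'
WITH NORECOVERY  -- leave the database in a restoring state so log backups can be applied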

RESTORE LOG MyDatabase
FROM DISK='E:\MSSQL\BACKUPS\MyDatabase_20081009.trn'
WITH NORECOVERY -- indicate we want to apply another log

RESTORE LOG MyDatabase
FROM DISK='E:\MSSQL\BACKUPS\MyDatabase_20081010.trn'
WITH RECOVERY  -- final log; brings the database back online

The log backups will be much smaller, each containing only the changes made since the previous log (or full) backup.
neortho

ASKER

Thanks for the advice!  I've been meaning to change it to something like that for a while now.  With all these offsite backup headaches, now might be a good time to go ahead and do it.

I'm still curious which DB operations cause the biggest block-level differences from day to day.  Are there any good reference sites you're aware of?  I spent a lot of time searching on Google but couldn't narrow the results down enough.
ASKER CERTIFIED SOLUTION
Gideon7

neortho

ASKER

Exactly what I was looking for, and great advice for altering my backup set.