Solved

Backup strategy for a multi-terabyte database with multiple filegroups and mirroring

Posted on 2011-10-20
Medium Priority
Last Modified: 2012-05-12
This question is a little complex, but I'm hoping someone has some good answers to these questions:

We have a 4 TB database in Microsoft SQL Server 2008 R2 SP1 that stores pictures as BLOBs.  We have split it into over 50 filegroups, each filegroup holding 2.5 million photos.  This is done automatically with horizontal partitioning: we simply add a new filegroup and split the partition as we approach the end of the current partition.  Once a photo is in the database, it stops changing after about a week, so we set all but our two most recent filegroups to read-only.
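
For reference, the rolling pattern looks roughly like this - all object names and the boundary value below are made up for illustration, not our real ones:

    -- Add the next filegroup and a file for it
    ALTER DATABASE PhotoDB ADD FILEGROUP FG_Photos_051;
    ALTER DATABASE PhotoDB ADD FILE
        (NAME = 'Photos_051', FILENAME = 'E:\Data\Photos_051.ndf')
        TO FILEGROUP FG_Photos_051;

    -- Point the partition scheme at it and split off a new partition
    ALTER PARTITION SCHEME PS_Photos NEXT USED FG_Photos_051;
    ALTER PARTITION FUNCTION PF_Photos() SPLIT RANGE (127500000);

    -- Once a filegroup is past the one-week change window, freeze it
    ALTER DATABASE PhotoDB MODIFY FILEGROUP FG_Photos_049 READ_ONLY;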

What we want to do is take filegroup backups of each individual read-only filegroup and ship them off-site.  We then want to use the READ_WRITE_FILEGROUPS option on full backups to back up all read/write filegroups plus the primary filegroup.
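
In backup terms, that means something like this (hypothetical names and paths):

    -- One-time backup of a single read-only filegroup, shipped off-site
    BACKUP DATABASE PhotoDB
        FILEGROUP = 'FG_Photos_042'
        TO DISK = 'F:\Backup\PhotoDB_FG_Photos_042.bak';

    -- Recurring partial backup of the primary + all read/write filegroups
    BACKUP DATABASE PhotoDB
        READ_WRITE_FILEGROUPS
        TO DISK = 'F:\Backup\PhotoDB_ReadWrite.bak';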

The problem I have is that when I back up the read-only filegroups individually, SQL Server seems to back up the transaction log of the entire database with each read-only filegroup.  Since these filegroups have been read-only for a long time, I'm not sure why the transaction log is included; no transactions in the log could possibly pertain to the filegroup being backed up.  It's also increasing the size of our backups substantially.

Should we be backing up the read-only filegroups with COPY_ONLY?
Should we be concerned about all the read-only backups being in the same backup set?
Is there any other specific option we should be using when backing up these filegroups?
Is there anything outside the norm that I should know to utilize the above strategy?

The difficulty is that it's hard to test anything, since each filegroup is so large.

Keep in mind, mirroring is turned on (asynchronous).  I would love to go to the simple recovery model, since we could do once-a-day backups of our read/write filegroups and a day of lost photos would be acceptable in an emergency, but mirroring forces us to stay in the full recovery model with transaction log backups.

Any tips would be greatly appreciated!

Question by: CyprexxDev

Accepted Solution
by: Ryan McCauley (earned 2000 total points)
ID: 37007577

COPY_ONLY could definitely be an option for your read-only backups - it doesn't affect the backup chain at all, but still allows you to restore a single filegroup in case of a disk failure. Especially since your filegroups can't change after the backup is taken (because they're set to read-only), you should be safe with a COPY_ONLY backup.
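
As a rough sketch - the database, filegroup, and path names below are placeholders, not anything from your setup:

    -- Out-of-band backup of a frozen filegroup; COPY_ONLY leaves the
    -- differential base and the backup chain untouched
    BACKUP DATABASE PhotoDB
        FILEGROUP = 'FG_Photos_042'
        TO DISK = 'F:\Backup\PhotoDB_FG_Photos_042_copyonly.bak'
        WITH COPY_ONLY, CHECKSUM;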

Another option is more frequent backups of your transaction log. Each filegroup backup includes the transaction log because SQL Server wants to be sure it can roll forward or back any transactions that aren't yet fully committed to the database - since some of those transactions could apply to the filegroup you're backing up (even though you, as the DBA, know they don't, SQL Server can't be sure), it backs up the entire log with the filegroup backup.

However, if you back up your transaction log on its own more frequently (not just along with your full backup), you can keep its size small and manageable, and the copy included with each individual filegroup backup won't be as big a problem. Since you're already backing up filegroups in the range of 80 GB each, a few GB of log (whatever has accumulated since your last log backup) shouldn't impact your storage needs too greatly.
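
For example (placeholder path), a standalone log backup run on a short schedule from a job:

    -- Each log backup truncates the inactive portion of the log,
    -- keeping the log small between full/filegroup backups
    BACKUP LOG PhotoDB
        TO DISK = 'F:\Backup\Log\PhotoDB_log.trn';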

Last, make sure your backup plan includes testing the backups you're taking! I can't count the number of times I've seen a backup plan that looks amazing on paper and is executed with unflappable precision every day for years, but when it's finally needed, it turns out to be missing something critical that nobody noticed because it was never tested. Even though the storage needs are pretty intense, find a place on your network (or buy one if you don't have it already - given your needs, a staging server hardly seems out of the question financially) where you can test a restore of your entire database on a regular basis. In addition to answering the question of whether you're including everything you need for a good restore, you'll also answer the question of how long a from-scratch disaster recovery would take.
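
A couple of cheap checks can supplement (but never replace) the full test restore - placeholder paths again:

    -- Is the backup media readable at all?
    RESTORE VERIFYONLY FROM DISK = 'F:\Backup\PhotoDB_ReadWrite.bak';

    -- What files/filegroups does this backup actually contain?
    RESTORE FILELISTONLY FROM DISK = 'F:\Backup\PhotoDB_ReadWrite.bak';

Note that VERIFYONLY only validates the media, not logical consistency - only a real restore followed by DBCC CHECKDB proves the backup is usable.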

Author Closing Comment
by: CyprexxDev
ID: 37007764

I appreciate the response.  This helps greatly.

Yes, financially, we have the ability to create a test system for restore purposes.  What I'll likely do is restore the primary and read/write filegroups, then try restoring a couple of the read-only backups every now and then.  If I can then use the system with the photos I've restored, I'll know it's working correctly.
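
Roughly, the piecemeal test restore I have in mind would look like this (all names and paths hypothetical):

    -- 1. Restore primary + read/write filegroups to the test box
    RESTORE DATABASE PhotoDB_Test
        READ_WRITE_FILEGROUPS
        FROM DISK = 'F:\Backup\PhotoDB_ReadWrite.bak'
        WITH PARTIAL, NORECOVERY,
             MOVE 'PhotoDB_Data' TO 'E:\Test\PhotoDB_Test.mdf',
             MOVE 'PhotoDB_Log'  TO 'E:\Test\PhotoDB_Test.ldf';
    RESTORE LOG PhotoDB_Test
        FROM DISK = 'F:\Backup\Log\PhotoDB_log.trn'
        WITH RECOVERY;

    -- 2. Pull a couple of the read-only filegroup backups back online;
    -- since they were read-only before the backup, no log restores needed
    RESTORE DATABASE PhotoDB_Test
        FILEGROUP = 'FG_Photos_042'
        FROM DISK = 'F:\Backup\PhotoDB_FG_Photos_042_copyonly.bak'
        WITH RECOVERY;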

We do back up transaction logs frequently (once every 10 minutes), but we're currently in a transition stage, going from a single-filegroup system to the one I've described.  Because of that, our transaction logs are huge: the conversion moves hundreds of GB a day, causing enormous transaction log growth.  If we weren't mirrored, I'd set the database to simple recovery right now to eliminate the need to truncate the log (which we do by backing up the log to 'NUL').

But this really helps - especially the answer about using COPY_ONLY.  I think that solves the problems we faced.  Thanks for your help!
