SQL backup script optimization

A few months ago I set up a daily task that backs up my SQL databases to another disk on the machine, which is then copied off-site.  I am now re-configuring my backup solution/routine and have noticed that the original backup script is creating a 300GB file for a particular database, but when I run the same script and save it to a different location the file is only 2GB.  I am assuming that my script is set up to keep appending to the existing file.

That SQL command is:
BACKUP DATABASE [master] TO [E Drive - master] WITH NOFORMAT, NOINIT, NAME = N'master-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

My primary goal is to have a way to restore the database in the event of a disaster or failure.  My secondary goal is to keep it as simple as possible (currently it is a scheduled task that calls SQLCMD and passes it a SQL file containing the command pasted above).

What can be done to keep the exported/backed-up file close to the true size of the database?  It makes sense that the true size of the database is closer to 2GB than 350GB, as the partition the database sits on is less than 100GB.

Thanks in advance!
Evan Hines asked:

MrAli commented:
That doesn't make sense unless your data changed.  SQL Server 2008 Enterprise and SQL Server 2008 R2 Standard both offer 'backup compression', which can be set as a default option by right-clicking your server (or maybe the database) and going to Options; it's one of the last items.  Compressed backups can be over 80% smaller, but I don't see COMPRESSION in your backup line.  Put COMPRESSION anywhere after the WITH keyword, or just set it up graphically, and you'll save substantial space.  Do you need point-in-time recovery?  Or are you OK with losing a full day's worth of data in a data loss incident?
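For reference, adding COMPRESSION to the original command would look like the sketch below; note this option only exists on SQL Server 2008 Enterprise / 2008 R2 Standard and later, so SQL Server 2005 will reject it:

```sql
-- Same command as in the question, with COMPRESSION added after WITH.
-- Requires SQL Server 2008 Enterprise or 2008 R2 Standard (or later).
BACKUP DATABASE [master] TO [E Drive - master]
WITH NOFORMAT, NOINIT, COMPRESSION,
     NAME = N'master-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
```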

Appending to a backup adds your new backup to the same file as your old backup, so you have one file containing multiple backups.  You should always be doing test restores.  Test-restore those backups and see what happens, but do it on a different server ideally, or at least to a different database name and MDF/LDF names.
Racim BOUDJAKDJI (Database Architect - DBA - Data Scientist) commented:
> Confirm the size of your data
> Verify your stats are up to date
> Set up asynchronous database mirroring
Evan Hines (Author) commented:
MrAli - I am using SQL Server 2005, so I don't know if compression is an option there or not.  I guess my intention is to have the capability to perform a full restore.  The database contents change constantly, as it holds the contents of my SharePoint 2003 server.  I know with a traditional (non-database) backup you can have a full and incremental scheme where only the changes are appended.  Personally, I don't need the ability to restore back to a particular day's changes, just the entire database in case of an extreme situation.  So if the script can simply be a full backup that doesn't append to the old backup, I am fine with that.  On that same note, my current plan is OK with losing a full day's worth of data; while the database is updated daily, it's not mission critical.

MrAli commented:
OK, perfect, so you're in a good spot.  Now let's go through your scripts.  First things first: if I recall correctly, SharePoint uses multiple databases, but I only see one being backed up.  Also, I don't see your master or MSDB databases being backed up.  MSDB isn't critical, but master is.

Are you in fact backing up *all* the necessary databases, or only this one?  If you are going to restore this database to a different server, you don't need to overwrite master, but if your whole server goes down and you want to recreate it on this one, you'll need master backed up too.  If master goes down without a backup, for example, you're down and out and will have to recreate it from scratch, which will take time.


DB Size
Now, as far as your database size goes, right-click the database in question and go to Properties.  Look at Size, then Space Available.  Subtract Space Available from Size; that's how big your database really is.  A backup taken with the native SQL backup utility (and in 2005 there is no native compression option) will be a similar size.  The best way to test this, though, is to always do a test restore, maybe just once a month, to make sure everything is OK.  You will sleep a lot better doing test restores.  I highly suggest getting a second machine, even a Windows 7 machine, and installing the same version of SQL Server on it for test restores.  You don't even need a license to do that since it's for testing purposes only.
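If you'd rather query these numbers than use the Properties dialog, the built-in sp_spaceused procedure reports the same Size / Space Available figures; a sketch, where the database name is a placeholder:

```sql
-- Report total allocated size and unallocated (free) space
-- for the current database; run it in the database you want to measure.
USE [WSS_Content];   -- placeholder: substitute your SharePoint content DB
GO
EXEC sp_spaceused;
-- database_size      = total size of data + log files
-- unallocated space  = reserved but unused; subtract it from the size
--                      to estimate how big a full backup will be
GO
```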

If you absolutely cannot do a test restore on a different machine, you might want to consider installing a 'test restore instance' on the local machine, but that puts more administrative overhead on a possibly critical server.  You can also do a test restore on the same box but to a different database name and different file paths for the MDF/LDF/NDF (if NDFs exist).  Let us know if you need help with that.
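A test restore on the same box under a different database name and file paths might be sketched as follows; all names and paths are placeholders, and RESTORE FILELISTONLY shows the actual logical file names inside your backup:

```sql
-- List the logical file names inside the backup first:
RESTORE FILELISTONLY FROM DISK = N'E:\Backups\WSS_Content_Full.bak';

-- Then restore as a new database with relocated files:
RESTORE DATABASE [WSS_Content_TestRestore]
FROM DISK = N'E:\Backups\WSS_Content_Full.bak'
WITH MOVE N'WSS_Content'     TO N'E:\TestRestore\WSS_Content_Test.mdf',  -- data file
     MOVE N'WSS_Content_log' TO N'E:\TestRestore\WSS_Content_Test.ldf', -- log file
     STATS = 10;
```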

Finally, check out the free trial download of RedGate tools.  Their software rocks.  BackupPro, Backup Compression, Virtual Restore, they are all great and cheap for a business:
www.redgate.com
http://www.red-gate.com/products/dba/


If you want to know how to test everything related to backups, you can watch my tutorial video on my blog:
Sys Admins & Accidental DBA Bible Part 2 – Creating, Testing, & Troubleshooting Backups
http://alirazeghi.com/archive/sysadmindba2/
Evan Hines (Author) commented:
You are correct, SharePoint does utilize multiple databases.  I am backing up each database separately (9 in all on that server); I only included one in the original post as the others are the same format with minor differences.

In checking on the database sizes, they are the same (smaller) size as my new backup performed yesterday, not the much larger size that has been compounding.

I will check out those links tomorrow as well as watch your video on how to test the backups.  Once that's done, the only thing I need to figure out is how to prevent my backups from gradually increasing in size (to the point where one database's backup went from 2GB to 350GB in six months while the actual database size stayed around 2GB).
MrAli commented:
The backups will not grow to a disproportionate size unless you are appending new backups to the same backup file, your database grew by a huge amount, OR your database is in the FULL recovery model and no log backups have been taken to truncate the log.  The transaction log is also backed up; that is the most likely culprit.  Contrary to popular belief, a full backup does NOT truncate the transaction log.  Check the size of your LDF.  You are most likely either experiencing huge log file growth (FULL recovery model with no log backups) or you are appending backups to the same media set.

Create your script from SSMS without the 'append' option, and right-click the database to check its recovery model (I believe it's the fourth option when you right-click and go to Properties).  If it's in FULL, change it to SIMPLE.
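The recovery model and log file sizes can also be checked (and changed) from T-SQL rather than the Properties dialog; a sketch, with the database name as a placeholder:

```sql
-- Recovery model of every database on the instance (SQL Server 2005+):
SELECT name, recovery_model_desc
FROM sys.databases;

-- Size of each data/log file in MB (size is stored in 8 KB pages):
SELECT name AS logical_name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID(N'WSS_Content');  -- placeholder database name

-- Switch to SIMPLE if point-in-time recovery is not needed:
ALTER DATABASE [WSS_Content] SET RECOVERY SIMPLE;
```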

Let us know!
Anthony Perkins commented:
>> have noticed that the original backup script for the database is creating a 300GB file for a particular database but when I run the script and save it to a different location it is only 2GB. <<
Isn't it obvious?
BACKUP DATABASE [master] TO  [E Drive - master] WITH NOFORMAT, NOINIT,  NAME = N'master-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10

From SQL Server BOL:
NOINIT
Indicates that the backup set is appended to the specified media set, preserving existing backup sets. If a media password is defined for the media set, the password must be supplied. NOINIT is the default.


You need to change the file name and the option to INIT as in:
BACKUP DATABASE [master] TO  [E Drive - master] WITH NOFORMAT, INIT,  NAME = N'master-Full Database Backup_20111030', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
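If you also want to keep several days of full backups on disk rather than overwriting a single file, one common variation is to build a date-stamped file name at run time; a sketch, where the backup path is a placeholder:

```sql
-- Date-stamp the backup file so each day's full backup gets its own file;
-- INIT then only ever overwrites that day's file.
DECLARE @path nvarchar(260);
SET @path = N'E:\Backups\master_'
          + CONVERT(nvarchar(8), GETDATE(), 112)   -- yyyymmdd, e.g. 20111030
          + N'.bak';

BACKUP DATABASE [master]
TO DISK = @path
WITH NOFORMAT, INIT,
     NAME = N'master-Full Database Backup',
     SKIP, STATS = 10;
```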
Evan Hines (Author) commented:
That seems almost too simple of a fix, but simple is good.  If that option is changed from NOINIT to INIT and I keep a daily backup, then assuming my backup is good, I am looking at losing at most one day of data, right?

Upon reading your suggestion and researching NOINIT vs INIT, I ran across someone else's backup strategy where they do an INIT at midnight and NOINIT in half-hour to one-hour increments throughout the day.  So if I wanted to do something like that (for conversation's sake), could I just have one script run at midnight doing a backup with INIT and a slightly different script run on the hour with NOINIT, or would it have to be more involved than that?
MrAli commented:
Appending to the backup (NOINIT) takes up tons of space and time.  Setting up transaction log backups, as discussed, only backs up the transactions needed to roll forward/backward to a point in time, and takes up much less space.  Your initial request was to have the backup size not be disproportionate to the database size; with appended full backups, your backups will be much bigger.

If you use transaction log backups, it will solve your initial request and give you point-in-time recovery.  The only advantage appended full backups give is a quicker restore, since each log backup must be applied individually during a restore, though there are scripts to apply them in bulk for you.
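For completeness, a transaction log backup of this kind, scheduled between daily full backups, might look like the following sketch; it requires the database to be in the FULL (or BULK_LOGGED) recovery model, and the names and paths are placeholders:

```sql
-- Log backup: captures transactions since the last log backup and
-- truncates the inactive portion of the log. Schedule e.g. hourly.
BACKUP LOG [WSS_Content]
TO DISK = N'E:\Backups\WSS_Content_Log.trn'
WITH NOINIT,    -- append each hourly log backup to the same file
     STATS = 10;
```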
Anthony Perkins commented:
I have no idea why we are discussing transaction log backups, as it appears all the author is trying to do is back up the master database.
MrAli commented:
The author is trying to back up the databases needed to support SharePoint.  He doesn't need point-in-time recovery, but then asked if he should keep NOINIT on and take backups every 30 minutes to an hour.  He originally asked how he could keep his backup size as close as possible to the database size; thus transaction log backups are the better choice.
Anthony Perkins commented:
Fair enough.  It is just that I saw the following backup command in the original question and did not see the relevance of a transaction log backup:
BACKUP DATABASE [master] TO  [E Drive - master] WITH NOFORMAT, NOINIT,  NAME = N'master-Full Database Backup', SKIP, NOREWIND, NOUNLOAD,  STATS = 10
MrAli commented:
Yeah, taking tlog backups of the master DB is really overkill, I agree.  The master DB, unless someone accidentally populated it with large tables, should be very small, and full backups should be more than sufficient.
Anthony Perkins commented:
>>Yea, taking tlog backups of masterDB is really overkill, I agree. <<
No, not overkill:  Impossible.  It has Simple Recovery Model.
MrAli commented:
Just for fun I set my master DB to the FULL recovery model, and it worked.  I then executed:
SELECT DATABASEPROPERTYEX('master', 'Recovery')
which replied that it is, in fact, in FULL recovery.  I then tried to take a tlog backup, and it failed.  Funny how MS lets us set the master recovery model to FULL but then doesn't allow tlog backups.  Fail.
Evan Hines (Author) commented:
Thank you for the suggestion of changing NOINIT to INIT, as that has produced the desired results.  Also, in evaluating our setup, needs, and use of that database, I will forgo setting up transaction log backups, since the only time these backups will be needed is for hardware malfunction or disaster recovery, not to reverse user actions.  I will maintain backup copies of my database for multiple days, as the sum total is around 2GB.  I will also begin testing my backups to help prepare for the unexpected.
Evan Hines (Author) commented:
While MrAli supplied great information on how to be a good DBA, the true solution to my problem was submitted by acperkins.  The other sub-question about doing point-in-time backups was, as mentioned in the post, just for conversation's sake and is not implemented at this time.  Thanks for all the help!