Solved

SQL Backup - Need someone to verify something or tell me I'm nuts

Posted on 2009-05-14
851 Views
Last Modified: 2012-05-07
Not really an SQL guy here, and I wanted to run this by someone who knows more about it than I do.  I'm pretty sure I figured out what's wrong, but fixing it is another matter.

I have a database called "PracticeManager".  As part of the nightly maintenance it performs a self-backup to the local drive on the server, which is pretty straightforward.  

1. This database is also backed up to a SonicWALL CDP 2440i (basically a fancy backup server).  The backup server backs up the logs for the database every 2 hours.  

2. The database itself backs up every night at 2am.  

3. After the database backs itself up to the local drive, the backup server will not back anything else up from that database until another full backup is performed. At this point I'm running full backups every day.  Not exactly ideal, since I'm eating up an enormous amount of space to have backup history going back more than a few days.

So...I suspect that because the job is executing the "DUMP TRANSACTION PracticeManager WITH TRUNCATE_ONLY", the backup server doesn't know what to do with it, because the log file has changed since the last time it got backed up.   It needs to wait until it does another full backup so it knows where to start again.

I've considered getting rid of the self-backup altogether, but I'm not sure what effect that would have on the database, if any.  Also, if it doesn't perform a self-backup then the end users get errors every time they log into the application that uses this database, which generates more phone calls in addition to making me look like I'm not doing my job. :)

Does my theory hold water?  Anyone have any ideas how I can modify this job, or anything else I can do, to keep it from interfering with the functionality of the backup server?

I've included the code the job runs every night.

Any help is appreciated, thanks in advance.
EXECUTE master..xp_sqlmaint '-D PracticeManager -WriteHistory -BkUpOnlyIfClean -CkDB -BkUpMedia DISK -BkUpDB -UseDefDir -DelBkUps 1DAYS -BkExt "BAK" -VrfyBackup -UpdOptiStats 15'

DUMP TRANSACTION PracticeManager WITH TRUNCATE_ONLY

exec gJobHistory 'Backup - PracticeManager'

Question by:j1akey
34 Comments
 
LVL 50

Expert Comment

by:Steve Bink
ID: 24401744
You've posted to the wrong zone.  This should be MS SQL Server, not MySQL.  I have generated a request to re-zone this question.

I'm not too expert on MSSQL, but your analysis sounds correct.  If I'm not mistaken, DUMP TRANSACTION and BACKUP LOG are synonymous, which means that your log state will be altered.  Still, wait for some more knowledgeable experts to chime in.

If you wanted to create a local backup without impacting the log state, use the COPY_ONLY modifier, or schedule to take the database offline, copy the mdf/ldf files, then bring the database online.  
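For reference, that first suggestion might look like the sketch below. Note that COPY_ONLY only exists in SQL Server 2005 and later, and the backup path is a placeholder:

```sql
-- A full backup that does NOT affect the differential base or the log chain.
-- COPY_ONLY is available from SQL Server 2005 onwards; the path is a placeholder.
BACKUP DATABASE PracticeManager
    TO DISK = 'D:\Backup\PracticeManager_copyonly.bak'
    WITH COPY_ONLY, STATS = 10;
```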
 
LVL 31

Expert Comment

by:RiteshShah
ID: 24401951
I don't think you need such a long process. What you need is: take a full database backup every 24 hours, a transaction log backup every 5 or 10 minutes, and a differential backup every hour. You can delete backups older than 24 or 48 hours, and automate all of this with a procedure. That's all you need, nothing more.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24401957
How about we do both ?

1) You're Nuts

2) We can verify


No, you are not nuts, but those internal SQL backups are very important. Maybe not that particular one, because that is trying to shrink the transaction log back to its smallest state - which means big pressure during the day as transactions are being logged.

What kind of recovery model are you running ? Can you please verify it is SQL 2005 ?
 
LVL 38

Expert Comment

by:Jim P.
ID: 24402481
Ok, assuming this is SQL 2005 -- and what flavor? Express, Standard, Enterprise?

DUMP TRANSACTION was more SQL 6.5 (http://support.microsoft.com/kb/165918) -- majorly not supported anymore.

Backups, recovery models, space, time and all  the rest are determined by your tolerance for data loss.

Once you get back to us we can advise of which way to go.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24403928
The DUMP syntax is deprecated - there for compatibility reasons only. Similarly, BACKUP LOG with TRUNCATE_ONLY is going to be removed.

Database is made up of two (can be more) files - the data file is the mdf and the transaction log is the ldf. The backup is a vital part of SQL server health. And we are talking about the backup facility within SQL Server itself.

The transaction log plays a very important part in the recovery process in some situations, depending on the recovery model of your database. If running "SIMPLE" recovery mode then transactions are minimally logged and the transaction log file is kept pretty clean and small all by itself. If running "FULL" recovery mode then individual transactions are logged in the transaction log, and then you need to manage the log file.

The idea behind FULL recovery mode (and it is the default for a database in 2005 and 2008) is that you can recover up to a point in time by applying transaction log backups to the last full backup. Whereas SIMPLE recovery is a matter of going to the last full backup.

FULL recovery mode does need management, and the best way is to use maintenance plans. Run a full backup every night, and then run transaction log backups during the day at regular and frequent intervals - most common are multiples of 15 minutes (ie every quarter hour, half hour, hourly etc). That helps keep the transaction log fairly small and clean.
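As a rough sketch (file paths are placeholders), the two scheduled jobs described above boil down to:

```sql
-- Nightly full backup (e.g. 2am)
BACKUP DATABASE PracticeManager
    TO DISK = 'D:\Backup\PracticeManager_full.bak'
    WITH INIT, STATS = 10;

-- Transaction log backup (e.g. every 15-30 minutes during the day).
-- Each log backup allows the inactive portion of the log to be truncated,
-- which is what keeps the ldf small under the FULL recovery model.
BACKUP LOG PracticeManager
    TO DISK = 'D:\Backup\PracticeManager_log.trn'
    WITH STATS = 10;
```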

The maintenance plans can be scheduled to run automatically (not in the Express edition, but there are alternatives). So it can be a case of set and forget (well, set and monitor really).

If you are seeing growth in the transaction log, then it sounds like you are not running frequent enough transaction log backups.

You could change the database to simple recovery, but then in the event of a database failure your only protection is the last full backup, which is ill-advised for a normal OLTP type operation. If your business depends on the information, then you will want high availability and high recoverability (meaning FULL recovery mode).

So, if you do not already have them, and it is not the Express edition, then create three new maintenance plans.
1) Full nightly backup - is a full backup run each day / night - could even run multiple times during the day if the database is smallish.
2) Transaction log backup - if running FULL recovery mode (otherwise do not worry about it).
3) Database Maintenance Tasks - run periodically to update statistics, rebuild indexes, shrink / resize databases etc - normally run after periods of high volume turnover, such as an end of month purge / archive or similar - keeps the database running efficiently - can run weekly over the weekend if you have some downtime available, and regardless of activity.

In the meantime, run a FULL backup and set up those maintenance plans. Maybe, and only momentarily, change to SIMPLE recovery mode until you have your plans in place. This should not take long to set up and is done in SSMS (SQL Server Management Studio). Changing to Simple and then back again will give you a "clean" log, but it will not be optimised for size and growth. Ideally you size your database files according to the type of activity, so that disk can be pre-allocated and then allowed to fill up that allocated space. These files can also autogrow, but that is not desirable because of all kinds of fragmentation issues and ensuing IO performance impacts.
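A minimal sketch of that momentary switch. The logical log file name and the 500 MB target below are assumptions for illustration; check the real logical name in sysfiles first:

```sql
-- Flip to SIMPLE so the log can be cleared, shrink it, then go back to FULL.
ALTER DATABASE PracticeManager SET RECOVERY SIMPLE;
GO
-- 'PracticeManager_log' and the 500 MB target are placeholders;
-- find the real logical name with: SELECT name FROM sysfiles
DBCC SHRINKFILE ('PracticeManager_log', 500);
GO
ALTER DATABASE PracticeManager SET RECOVERY FULL;
GO
-- Take a full backup straight away to restart the log backup chain.
BACKUP DATABASE PracticeManager
    TO DISK = 'D:\Backup\PracticeManager_full.bak' WITH INIT;
```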

You can also inspect and shrink your files by right-clicking on the database, then Tasks, Shrink, Files, and a window will pop up. There is a drop-down selection for looking at either the data file or the log file. Then down the bottom you can select some actions - best to use the middle radio button to reorganise and also nominate some space. You do need to allow enough space for the database to grow (physical allocation of disk, which then starts to be used with data activity).

The logs will shrink very quickly. The data files will take considerably longer.

2005 and 2008 rely on checkpoint processing after backups to truncate those logs :

Except when delayed for some reason, log truncation occurs automatically as follows:
1) Under the simple recovery model, after a checkpoint.
2) Under the full recovery model or bulk-logged recovery model, after a log backup, if a checkpoint has occurred since the previous backup.

A checkpoint is essential for truncating the log under full recovery - there might still be other factors that prevent the log from shrinking. If needed, explicitly invoke a checkpoint after the backup.
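For example (paths are placeholders), the explicit checkpoint would follow the log backup:

```sql
BACKUP LOG PracticeManager
    TO DISK = 'D:\Backup\PracticeManager_log.trn';
GO
CHECKPOINT;  -- lets the now-inactive portion of the log be truncated
GO
```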

You can check via:  select log_reuse_wait, log_reuse_wait_desc, * from sys.databases  -- looking for 'checkpoint' in log_reuse_wait_desc.

In FULL recovery, the inactive part of the log cannot be truncated until all its log records have been captured in a log backup. So a moot point anyway.

It is extremely important to run those backups, and if using the full recovery model, then run the transaction log backups too. Say every 15, 30, 60 minutes - whatever granularity of recovery the business requires of a full recovery database - or - whatever the business is able to manually recover from (ie re-key the day's work).

In critical situations, you can use SSMS manually / interactively to backup and shrink logs, and have a look at the values being reported before and after doing it.
 
LVL 38

Expert Comment

by:Jim P.
ID: 24404332
>> regular and frequent intervals - most common are multiples of
>> 15 minutes (ie every quarter hour, half hour, hourly etc).

Mark,

Very good write-up. Maybe you should make an article out of this.  The only thing I want to add is that all of this is determined by your tolerance for data loss.

If you are an Amazon or eBay and want to be able to recover every transaction, you have different needs and requirements than, say, a 3-man car repair shop that might have 50 customers on any given day. No matter which side -- data loss sucks -- but if you only have to redo the 10 entries from yesterday afternoon it is a much more tolerable situation.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24405126
Thanks jimpen - was thinking that there should be an article on all this - might try to get in first :)

That is an extremely important point - about data loss - and needs to be incorporated into a DR plan.

The frequency with which backups are taken needs to reflect the business's ability to cope with data loss, regardless of recovery model. The big attraction of full recovery is that it records the transactions in the transaction log, which can be applied to the last full backup; the greater the frequency, the fewer the transactions (relatively speaking), so the fewer resources required and the greater the likelihood of recovery to the point in time just prior to failure.

Having said that, we had one company in EE who decided that the "ease of management" of running a full backup of their Simple recovery database starting from 8:00am and then every two hours worked perfectly for them - mind you, it was not a huge database, and they processed a lot of paper documents, so recovery simply meant re-keying information. Bit different if everything is electronic and you lose access to the source information.

But then the DR plan gets into some wide-reaching considerations. And believe it or not, the higher the reliability rates, the more important the DR plan. Let's suffice it to say that you may need to restore your database at some point in time, for whatever reason (hardware failure, catastrophic failure, database corruption). There is a good article at: http://msdn.microsoft.com/en-us/library/ms178094(SQL.90).aspx

Bottom line is, when restoring your database, what do you want to see back in there ?

 
LVL 51

Expert Comment

by:Mark Wills
ID: 24409146
 
LVL 38

Expert Comment

by:Jim P.
ID: 24411322
Pretty good -- only suggestions:

First para: "newcomer" should be one word.

And you mention SSMS in about the 7th para but don't expand it until farther down.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24411343
Ta, about to update with 2nd draft will take care of those things...
 
LVL 1

Author Comment

by:j1akey
ID: 24413378
Thanks for the information everyone.  Sorry I wasn't able to get back to you during the weekend.

So to answer a few of the questions:

1.  Running SQL 2000, I believe it's standard

2.  The database only backs itself up once a day at 2am.  That's the self-backup to the local drive.  It doesn't do anything with logs during the day.

Here's the backup schedule we currently have on here going to the backup server.

Full - Daily
Differential - 6 Hours
Log - 2 Hours

What I'd like to do is this if possible

Full - Weekly
Differential - Daily
Log - 2 Hours

Right now I'm only able to retain data for a couple of days because of the full daily backups to the backup server; the database's nightly self-backups are interfering with the backup server's ability to do its job properly.

I've been doing some thinking over the weekend and came up with a few things.

1.  Jack the retention rate up on the backup server so it keeps 2 weeks' worth of full backups, but this would eat up a lot of space.  This is probably the best option.

2.  Don't back up the database, but back up the backup that it makes of itself. But this then gets rid of log files and I could lose up to a day's worth of data.  The name of the self-backup also changes on a daily basis, so this would require more maintenance to make sure they don't eventually take up all available space.

3.  Edit the self-backup job so it doesn't truncate the log files, which makes the backup server happy.  If I read Mark Wills' write-up correctly, the part of the job that dumps the log isn't really needed?  So what I could do is this:

EXECUTE master..xp_sqlmaint '-D PracticeManager -WriteHistory -BkUpOnlyIfClean -CkDB -BkUpMedia DISK -BkUpDB -UseDefDir -DelBkUps 1DAYS -BkExt "BAK" -VrfyBackup -UpdOptiStats 15'

exec gJobHistory 'Backup - PracticeManager'

And just remove the "DUMP TRANSACTION PracticeManager WITH TRUNCATE_ONLY" line entirely?

When the application was installed on the server it added the database with these jobs automatically, but from what I understand some of it isn't needed when another way of backing up the database is in play?

Like I said, not much of a SQL guy, so sorry if I'm asking the same questions again, but I hope some of this clears things up.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24413444
Updated : http://www.experts-exchange.com/articles/Microsoft/Development/MS-SQL-Server/SQL-Server-2005/Managing-the-Transaction-Log-for-the-Accidental-DBA.html

and don't forget to answer "was this article helpful" with a resounding 'yes' :)

And, thanks very much for your feedback jimpen.

@j1akey: will answer those questions for you, though, have a quick read of that article. The philosophies also apply to SQL 2000.

 
LVL 51

Expert Comment

by:Mark Wills
ID: 24413551

2.  The database only backs itself up once a day at 2am.  That's the self-backup to the local drive.  It doesn't do anything with logs during the day.
>> It doesn't do anything with logs during the day >> that is part of the problem. You need to run the transaction log backups on a FULL recovery model.

Right now I'm only able to retain data for a couple of days because of the full daily backups to the backup server because the database's nightly self-backups are interfering with the backup server's ability to do its job properly.
>> You only need a couple of instances of the backup on local disk, along with the transaction logs.

1.  Jack the retention rate up on the backup server so it keeps 2 weeks' worth of full backups but this would eat up a lot of space.  This is probably the best option.
>> once moved to removable media / a repository, it doesn't have to be kept locally.

2.  Don't backup the database but backup the backup that it makes of itself but this then gets rid of log files and I could then lose up to a day's worth of data.  The name of the self-backup also changes on a daily basis so this would require more maintenance to make sure they don't eventually take up all available space.
>> Yes, the "image" backup might be useful if restoring a full machine, but really it is the SQL backup version that you want to keep, along with the log files.
>> the maintenance plan in SQL 2000 can manage the number of copies or number of days to keep the files on disk. It will purge old copies after the new one is created. Your physical backup will take care of historical copies...

3.  Edit the self backup job so it doesn't truncate the log files which makes the backup server happy.  If I read mark wills write up correctly then the part of the job that dumps the log isn't really needed?  
>> Use maintenance plans - they are in SQL 2000 as well to do the work for you. It can name the files, and clean up at the same time.
>> regularly run the transaction log backup
>> I do not think you need the differential backups.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24413622
ummmm... we might need to further clarify a couple of those points... like which backup you are referring to - the SQL or the physical?
 
LVL 38

Expert Comment

by:Jim P.
ID: 24413887
All SQL maintenance plans created with the wizard are built around a time stamp in the file name to differentiate when each backup was created. SQL 2000 didn't/doesn't really support differentials as part of the plan.

Now, if you can stand losing a day's worth of data -- then Simple recovery might be the way to go. In SQL 2000 they didn't really call it Simple or Full. It was whether Select Into / BulkCopy and Truncate Log on Checkpoint were set on or off.

The way to switch to simple recovery in Query Analyzer is below.

I would suggest nightly full backups, and have your backup server copy from the local location instead of going to the database. Added to that -- I haven't found a backup agent from any backup application that I truly trust to actually get a live DB.

The other thing to do is roll your own backup jobs if you want to shorten the retention of the files or keep names consistent. The last couple of lines in the code snippet window can just be entered in a SQL Agent job that you manually create. Call it Monday Backup and schedule it to run once a week. Then edit and create jobs for Tuesday and so on. From there your backup server can just pick up the same name once a week.

SQL Server 2000 Backup and Restore -- Recovery Models
http://technet.microsoft.com/en-us/library/cc966495.aspx#E0VB0AA

Selecting a Recovery Model
http://msdn.microsoft.com/en-us/library/aa173531.aspx

Switching Recovery Models
http://msdn.microsoft.com/en-us/library/aa196649(SQL.80).aspx#

sp_dboption
http://msdn.microsoft.com/en-us/library/aa933268(SQL.80).aspx

BACKUP DATABASE
http://msdn.microsoft.com/en-us/library/aa225964(SQL.80).aspx
exec sp_dboption @dbname = 'MyDBName',
     @optname = 'trunc. log on chkpt.',
     @optvalue = 'true'
GO

exec sp_dboption @dbname = 'MyDBName',
     @optname = 'select into/bulkcopy',
     @optvalue = 'true'
GO

RECONFIGURE WITH OVERRIDE
GO

----------------------------------------------

BACKUP DATABASE [MyDbName]
	TO DISK = 'E:\MSSQL\BACKUP\MyDbName\MyDbName_Monday.BAK'
	WITH STATS = 5, INIT, RETAINDAYS = 5, FORMAT
GO

 
LVL 1

Author Comment

by:j1akey
ID: 24414057
Good article, gave it a big thumbs up for ya!

Physical backup:
Full - Daily
Differential - Every 6 hours
Log - Every 2 hours

The physical backup to the backup server doesn't have the option to dump differentials so I'm pretty much stuck with them in the rotation.

SQL Backup:
Nightly full, with the transaction log dump

Basically the SQL backup with the transaction log dump prevents the physical backup to the backup server from happening unless a full is also done there every day.

I was going to create a new SQL backup job that does the exact same thing minus the log dump, but I wasn't sure how/if that would affect anything, being the accidental DBA like you mentioned in your article.

I feel like the SQL backup is fairly redundant given that the DB gets backed up to another server anyway, but if SQL doesn't do its nightly backup then users get messages that the backup hasn't run whenever they log in to the application, even though it is being backed up to the other server.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24414070
Ummm... jimpen, they did have Simple and Full in 2000 - check out : http://technet.microsoft.com/en-us/library/cc966495.aspx#E0UB0AA

and in enterprise manager : http://msdn.microsoft.com/en-au/library/aa176776(SQL.80).aspx

true, the combined setting at a code level is per 'trunc. log on chkpt.'  and 'select into/bulkcopy'  (both false = full,  both true = simple)

Agree wholeheartedly about backup programs - they tend not to do the right thing with DBs.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24414359
Thanks for the thumbs up :)

>> SQL backup is fairly redundant <<
No, Never think that !!

I think there is more emphasis on the physical backup than the database backup.

If you are running FULL, then you really have to run a transaction log backup using SQL.
If you can cope with having to go back to last night's backup, then change it to SIMPLE and don't worry about transaction logs.

Jimpen gave you a good link on the various backups - I happened to post the same one afterwards...  Jimpen's code also shows how to set to simple (using T-SQL).

But set it up in a maintenance plan so it is being managed for you - that is the SQL backup command :

To start the Database Maintenance Plan Wizard, get into Enterprise Manager, expand a server group, and then expand your server. On the Tools menu, click Database Maintenance Planner.  The Wizard should then pop up... see attached...

Complete the steps in the wizard.

Automating-Database-Backups.doc
 
LVL 38

Expert Comment

by:Jim P.
ID: 24414380
>> Ummm... jimpen, they did have Simple and Full in 2000 ...

I have problems remembering the 2000 GUI. Was it a drop down like 2005? I did/do 75% from a QA window. The SQL GUI has always seemed slow and bulky to me.

I generally always just pop to the QA window and do ALTER DATABASE DBNAme SET RECOVERY SIMPLE now.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24414762
Why do a lot of SQL guys not like the UI ? It really is amazing !

Admittedly, I probably do 75% - maybe (a lot) more - in code. But don't tell anyone.
 
LVL 38

Expert Comment

by:Jim P.
ID: 24414911
>> Why do a lot of SQL guys not like the UI?

Well, I started out in DOS/CLI days, and have sort of carried through. And I was more Oracle when I got to RDBMs. Plus, a lot of functionality -- especially SQL -- comes from knowing how to do it in the QA -- the GUI just doesn't get it done the same way.
 
LVL 1

Author Comment

by:j1akey
ID: 24414960
I have another question to ask because I think I figured a way around my problem.

What happens if I run the DUMP TRANSACTION dbname WITH TRUNCATE_ONLY once a week, or remove it from the nightly SQL backup job?  Does the transaction log just keep growing in size until the command gets executed?

What I'm thinking is I'll let the db do the nightly SQL backup and only execute the transaction dump once a week on Saturday night or something.  That way the backup server can do its thing all week and be happy, and not stop backing up the database because it's missing the archive bit from the log after the DUMP command is run every night.

Saturday night, execute the DUMP TRANSACTION, and on Sunday do another full backup on the backup server, because no one is working on the weekend anyway so it doesn't matter if nothing is backed up between Saturday night and Sunday morning.
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24415075
You could, I wouldn't, and yes it will keep growing. And if there are uncommitted or orphan transactions, the whole transaction log may need to be traversed: it will try to go through a recovery stage next time you start the database up, which can take a long time, and you might not be able to truncate past the minlsn - so you end up with an even bigger problem...

How big are your files now ?

Why do you need to do DUMP TRANSACTION ?

What recovery model are you using ?

Why not backup the transaction Logs ?

How full are your files now ?
 
LVL 1

Author Comment

by:j1akey
ID: 24415555
How big are your files now ?  
PracticeManager_log.ldf   10,661,952 KB
PracticeManager.mdf    4,403,712 KB
PracticeManager.ldf   7,306,496 KB

Looks like 2 log files, don't know enough to know which is which.

Why do you need to do DUMP TRANSACTION ? The DUMP TRANSACTION is part of the maintenance that was setup by default by the application when it was installed on the server and the manufacturer likes it to be that way.  Other than that I have no idea. :)

What recovery model are you using ?  Right now it's set for bulk-logged.

Why not backup the transaction Logs ?  Changes to the transaction logs are being backed up externally every 2 hours to the backup server.  This is separate from the SQL backup that executes the DUMP TRANSACTION command.

How full are your files now ?  They are set for unlimited growth at 10% intervals so as far as I know they won't get full unless we run out of drive space.

Thanks for all the help, learning a lot here. :)
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24418544
OK, those are interesting stats... and not really looking healthy...  Open a query window pointed at the PracticeManager database.

Let's run : DBCC SQLPERF(LOGSPACE)

You will see the difference between allocated size on disk and amount of space being consumed within that allocated size.

Then run : SELECT * FROM SYSFILES

Bulk-logged is typically associated with data repositories where the main method of update is mass inserts from another data source. It is not normally used with databases that people are keying information into. Can you tell us a little bit more about how it is used ?

Doing a physical backup (ie not SQL) of the logs isn't really going to help you a great deal in the event of a database recovery unless you treat it like a "snapshot". If you were to try to recover, I think you will find SQL being a bit upset about the log file (if it is really just being copied), and if not, it is going to take a very very long time.

So, there is some work to do. Good news is that it can all be done for the better... So, let's have a look at those two queries as a starting point.
 
LVL 1

Author Comment

by:j1akey
ID: 24423510
Here's a couple of screen shots from those queries, in reverse order of course. :)

The application that uses this database is basically an electronic medical record system and billing system all rolled into one.

People are entering patient info, scheduling, billing information and updating medical records in this thing all day long.

As far as the log goes when it is physically backed up it does get treated as a snapshot then get moved over to the backup server.
sql1.jpg
sql2.jpg
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24423706
You have got a heck of a lot of allocated disk size for not much log space (ie stuff inside that file).

It is less than 1% full, meaning it holds megabytes of data inside a file that is gigabytes in size.

Same on some of your other tables as well.

Basically whenever you see a log file bigger than your data file then you know automatically that you have a log management problem on your hands.

I am also curious why you have a second log file active.

You mentioned before that it was a third party app and the manufacturers / suppliers set it up with "bulk logged" recovery model. How much do they dictate to you what you need / have to do with the database ?  Are you permitted to make changes ?
 
LVL 1

Author Comment

by:j1akey
ID: 24423818
Well, as far as what they dictate we do with the database, I suppose there really isn't any limitation on that outside of calling them for support and getting help if we have a problem with it.  I can make any changes I want as long as it still works. haha

I'm not really sure why things are set up the way they are.  The database was set up before we started doing work for this office, since we're a 3rd party that manages their network for them.  At this point this database issue is the last thing on my list that I need to get worked out for them so I can say everything is running smoothly.
 
LVL 51

Accepted Solution

by:
Mark Wills earned 500 total points
ID: 24425262
If I were you, I would be "fixing" the database.

Have a look at the following example -

We build a database just as an example (in folder c:\backup, so you might like to change that, or create it), we give it two log files, we set some disk sizes, we drop one of the logs, etc etc...
USE MASTER
GO

CREATE DATABASE MyDB
ON PRIMARY
 ( NAME='MyDB_Data',
   FILENAME='c:\Backup\MyDB.mdf',
   SIZE=5,
   MAXSIZE=100,
   FILEGROWTH=1)
LOG ON
 ( NAME='MyDB_log',
   FILENAME = 'c:\Backup\MyDB.ldf',
   SIZE=50,
   MAXSIZE=100,
   FILEGROWTH=1),
 ( NAME='MyDB_log2',
   FILENAME = 'c:\Backup\MyDB_log2.ldf',
   SIZE=1,
   MAXSIZE=100,
   FILEGROWTH=1)
GO

USE MYDB
GO

BACKUP DATABASE MyDB TO DISK='C:\Backup\MyDB.BAK'
GO

BACKUP LOG MyDB TO DISK='C:\Backup\MyDB_log2.BAK'
GO

BACKUP LOG MyDB WITH TRUNCATE_ONLY
GO

DBCC SHRINKFILE ('MyDB_Log2', EMPTYFILE);
GO

ALTER DATABASE MyDB SET RECOVERY SIMPLE
GO

ALTER DATABASE MyDB REMOVE FILE MyDB_Log2
GO

ALTER DATABASE MyDB SET RECOVERY FULL
GO

DBCC SHRINKFILE ('MyDB_Log', 10);   -- before you do this, look at the size on disk, or use DBCC SQLPERF(LOGSPACE)
GO

/* For You...
   Backup your database, backup your log files, truncate your log files.
   Change the database recovery model to simple (properties menu).
   From the properties menu again, on the transaction log tab, delete the second file.
   Change your database recovery model back to full.
   Change the initial allocation to be 1 GB (say one quarter of your database).
*/

-- Clean up the example
use master
go

drop database mydb
go
 
LVL 1

Author Comment

by:j1akey
ID: 24426674
Cool man, I really appreciate the help with this.  If I could give you a million points I would. :)
 
LVL 51

Expert Comment

by:Mark Wills
ID: 24427463
Thanks very much - very happy to help, especially with that last comment - arguably better than a million :)

One last thing... If you are leaving it at FULL, then please schedule some transaction log backups using SQL during the day - maybe 10:30am, 12:30pm, 2:30pm, 4:30pm - the reason for the half hour is to avoid any of those other disk backup functions.

Then please make sure those SQL generated backup files (including the FULL) are backed up to disk. If you do that, then you can probably remove the SQL databases (ie the .mdf and .ldf) from the disk backup during the day and avoid a lot of contention. Keep the full disk image backup happening each night - that is the best to re-image a machine, but the best to restore a database is using the SQL generated backups.

Also, have a look at your other databases - there is another pretty big log file around that might need some attention (ie the FDB database)...

Cheers, and happy SQL'ing
