Solved

Changing recovery model from Full to Simple

Posted on 2006-07-12
8
1,205 Views
Last Modified: 2008-01-09
Got a Raiser's Edge database.  Sits on SQL Server 2000.  It installs by default in Simple recovery, and the company doesn't support it going to Full.  But I thought it would be a good idea to get those log backups going, so I switched it to Full and set up full/differential/log backups.  Today I got reports that the db was going very slowly.  Discovered that the log file was growing rapidly, presumably the slowness coming from the log auto-growing by 10% every time it filled.  It's up to 3 gig now.  So I made an extra log backup, and stopped and restarted the server, but the log is still at 3 gig.

Decided all I could do was set the db back to Simple recovery until I can find out what the problem is.  

Questions:

1. The log file is still 3 gig - is that to be expected after going back to Simple?
2. I thought stopping/starting the server emptied the log?  Or did I dream that?
3. Any ideas what could have caused this?
4. Is there anything about going from Full to Simple that could cause any problems (beyond not being able to do log backups)?
Question by:looper8
8 Comments
 
LVL 8

Expert Comment

by:Kobe_Lenjou
ID: 17089661
1) Yes
2) It was a dream; SQL Server recreates the tempdb database on startup, but it does not empty the logs of user databases
3) Heavy transactional use and slow disk I/O
4) No

Solve this by doing a DBCC SHRINKFILE or DBCC SHRINKDATABASE after changing the recovery model to Simple.
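For example, a sketch only -- "RE_DB" and "RE_DB_Log" are hypothetical database and logical log file names (get the real logical name from sp_helpfile):

ALTER DATABASE RE_DB SET RECOVERY SIMPLE
--400 = target size in MB for the shrunken log
DBCC SHRINKFILE (RE_DB_Log, 400)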
 
LVL 28

Expert Comment

by:imran_fast
ID: 17089741
1. The log file is still 3 gig - is that to be expected after going back to Simple?
Yes. What I usually do is shrink the file after taking the backup.
Run this statement in SQL Query Analyzer:

DUMP TRANSACTION dbname WITH NO_LOG
GO

then shrink the log file with DBCC SHRINKFILE.


2. I thought stopping/starting the server emptied the log?  Or did I dream that?
No, that will not happen; a restart does not truncate or shrink the log.

3. Any ideas what could have caused this?
Heavy transactions.

4. Is there anything about going from Full to Simple that could cause any problems (beyond not being able to do log backups)?
No real problems; it just will not fully log everything -- bulk operations are minimally logged, and the log is truncated at each checkpoint.


When FULL is specified, database backups and transaction log backups are used to provide full recoverability from media failure. All operations, including bulk operations such as SELECT INTO, CREATE INDEX, and bulk loading data, are fully logged.

When SIMPLE is specified, the database can be recovered only to the last full database backup or last differential backup.
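For reference, checking and setting the model on SQL Server 2000 looks like this (a sketch; "RE_DB" is a hypothetical database name):

--returns FULL, SIMPLE or BULK_LOGGED
SELECT DATABASEPROPERTYEX('RE_DB', 'Recovery')

ALTER DATABASE RE_DB SET RECOVERY SIMPLE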
 
LVL 69

Expert Comment

by:Scott Pletcher
ID: 17090435
How much data is being bulk-loaded into the db?  A good compromise might be "BULK_LOGGED" recovery mode.

Another thing is to *pre-allocate* a relatively large amount of space to the log so that it does not have to dynamically expand very often, *because dynamic expansion is a slow process*.

Also, set the increment for the log to a specific amount rather than 10% -- I've seen several cases where that alone sped up expansion a lot (I guess calc'ing the 10% causes slowdowns itself).

So, shrink the current log to, say, 400M with an increment of 100M, like so:
[All commands are from Query Analyzer.]

USE databaseName
EXEC sp_helpfile
--copy/remember the logical log name in the first column:
--    you will need it for the other commands

BACKUP LOG databaseName WITH TRUNCATE_ONLY
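--(TRUNCATE_ONLY discards the inactive log without backing it up,
--  which breaks the log-backup chain until the next full backup)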

DBCC SHRINKFILE ( logical_log_name, 400 )
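--(400 = target size for the log, in MB)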

ALTER DATABASE databaseName
    MODIFY FILE ( NAME = logical_log_name, FILEGROWTH = 100MB )

--optional, if you want to try BULK_LOGGED mode
ALTER DATABASE databaseName
    SET RECOVERY BULK_LOGGED
 
LVL 1

Author Comment

by:looper8
ID: 17092149
Thanks guys.  Ah yes, now I remember the dream ...

Thanks for the pre-allocation bit Scott, that sounds eminently sensible.  

Not quite sure of the order to proceed however.  Can I go back to full recovery, then take a backup, then truncate the log, then change the initial log size/increment?

I think I need to ask more questions about what was being done at the time, because as far as I know no bulk imports are done.  However if I go back to Full recovery and this situation recurs and it isn't a bulk operation then do I need to truncate the log more frequently?
 
LVL 69

Expert Comment

by:Scott Pletcher
ID: 17093021
You don't need to take a log backup after going back to FULL from SIMPLE; the log has already been automatically truncated by SQL in SIMPLE so there's not much point in backing it up.

If you want to be able to apply logs / forward recover in the future, you will need to take a full db backup after going back to FULL mode.

If you want, you can TRUNCATE the log as often as needed, although doing that prevents doing a forward recovery.
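Putting those steps in order, a sketch (database name and backup paths are hypothetical):

--1) switch back to full recovery
ALTER DATABASE RE_DB SET RECOVERY FULL

--2) take a full backup so later log backups can be applied
BACKUP DATABASE RE_DB TO DISK = 'D:\Backups\RE_DB_full.bak'

--3) regular log backups from here on keep the log truncated
BACKUP LOG RE_DB TO DISK = 'D:\Backups\RE_DB_log.trn'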
 
LVL 1

Author Comment

by:looper8
ID: 17097550
Thanks.  Sorry to be asking lots of basic questions, but I need to get this right!

Am I right to say that log truncation is part of normal log backing up?  If so then I shouldn't ever need to truncate the log if I've got log backups going?  

All in all this is what I think I'll do (can you please comment!!): go back to Full recovery, set up the Full/Diff/Log backups again (perhaps with more frequent log backups), check out whether Bulk-Logged is ever necessary, shrink the log file, and set the log file's allocated space larger than normal (say 500MB) with a growth increment of say 100MB.

Many thanks.
 
LVL 69

Accepted Solution

by:Scott Pletcher (earned 500 total points)
ID: 17109528
>> Am I right to say that log truncation is part of normal log backing up? <<

Yes; normally after backing up the log SQL truncates it.


>> If so then I shouldn't ever need to truncate the log if I've got log backups going? <<

In theory, yes; in practice, no.  SQL doesn't always function as it should, especially after large inserts.


The rest seems like a good approach.  However, you will still need to periodically check the total log space allocated.  You could set up a job to check this if you wanted to.
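An easy check to run from a scheduled job is DBCC SQLPERF(LOGSPACE), which reports the log size and percentage used for every database on the server:

--log size (MB) and log space used (%) per database
DBCC SQLPERF(LOGSPACE)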
 
LVL 1

Author Comment

by:looper8
ID: 17120592
Thanks Scott.