millerpeter asked:
Transaction log full, DB in SIMPLE recovery: why does my purge routine fail with [SQLSTATE 42000] (Error 9002)?

I have a 100 GB database and yesterday I did the following:
Changed the recovery model from Full to SIMPLE.
Resized the transaction log to 2 GB.
Turned off the auto-growth option on the log.
Disabled the transaction log backup jobs.
I have a purge routine, created as a stored procedure, that runs 4 times a day; it reads and writes data to a history server and then purges the data from production.
This job was working until I made the changes above. Now I'm getting this error:

The transaction log for database 'ABC' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases [SQLSTATE 42000] (Error 9002).  The step failed.

I've checked log_reuse_wait and log_reuse_wait_desc and I see 0 and NOTHING.
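For reference, the columns mentioned in the error message can be checked with a query like this (the database name 'ABC' is taken from the error message above):

```sql
-- Check why log space cannot be reused for database 'ABC'
SELECT name, log_reuse_wait, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'ABC';
-- log_reuse_wait = 0 / NOTHING means nothing is currently
-- preventing the log from being truncated
```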

Have I caused myself big problems?
Any assistance would be nice

Microsoft SQL Server 2005

Guy Hengel [angelIII / a3]

Please check the output of DBCC OPENTRAN.
If it shows "old" open transaction(s), you might, for example, have some replication or mirroring set up but broken, and hence holding the transactions in the log "forever".
Fix that underlying problem, if applicable, and the error you see will be gone as well.
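The check suggested above is a one-liner (again assuming the database name 'ABC' from the error message):

```sql
-- Report the oldest active transaction, and any transactions
-- pending replication, in database 'ABC'
DBCC OPENTRAN ('ABC');
```

If it prints "No active open transactions", nothing in-flight is pinning the log.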

1. I ran DBCC OPENTRAN and the reply was NO open transactions.
2. I changed the file option to allow auto-growth.
The move/purge job executed successfully.
The transaction log increased from 1.3 GB to 5 GB.
Currently the log shows 99% full and I'm concerned that it will keep growing.
What if I do a backup log with truncate_only and then shrink it? This would clear out the data and start fresh.
I'm worried about this runaway log and getting a call at home tonight
because the disk is full.
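One note on the proposed "truncate and shrink": since the database is now in SIMPLE recovery, BACKUP LOG is not allowed there (in SIMPLE, checkpoints truncate the log automatically). A manual checkpoint followed by a shrink is the equivalent sketch; the logical log file name 'ABC_Log' is an assumption — check sys.database_files for the real name:

```sql
-- In SIMPLE recovery, BACKUP LOG ... WITH TRUNCATE_ONLY is not
-- permitted; a CHECKPOINT marks inactive log space as reusable.
-- 'ABC' and the logical file name 'ABC_Log' are assumptions.
USE ABC;
CHECKPOINT;
DBCC SHRINKFILE (ABC_Log, 2048);  -- target size in MB (here 2 GB)
```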

It all depends on how typical the run that autogrew the log to 5GB was.  If you don't expect it to purge much more than that in any one run, then you might as well leave it on autogrow.

Alternatively, you could modify the purge/delete procedure to do it in chunks, so that when the checkpoint kicks in, it has something to remove from the log, instead of having to wait for the entire purge operation to finish (as it's one BIG transaction). Use SET ROWCOUNT so that the loop deletes only that number of rows each iteration, and loop until rows affected = 0.
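A minimal sketch of that chunked-delete loop, using SET ROWCOUNT as suggested (the table name dbo.FirewallLog and the 90-day cutoff are illustrative assumptions, not the original procedure's actual code):

```sql
-- Hypothetical chunked purge: each DELETE is its own transaction,
-- so checkpoints can reclaim log space between iterations.
DECLARE @rows int;
SET @rows = 1;

SET ROWCOUNT 10000;          -- limit each DELETE to 10,000 rows
WHILE @rows > 0
BEGIN
    DELETE FROM dbo.FirewallLog
    WHERE LogDate < DATEADD(day, -90, GETDATE());
    SET @rows = @@ROWCOUNT;  -- 0 once nothing is left to purge
END

SET ROWCOUNT 0;              -- restore the default behavior
```

(On later SQL Server versions the same pattern is usually written with DELETE TOP (n), as SET ROWCOUNT for DML was deprecated after 2005.)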

Hmmm, I'm not sure what the free space should show. The virtual logs that make up a transaction log file are used in a round-robin manner. It could just be that you're currently in a virtual log at the end of the file. Read this for more info:


Or maybe you just checked it before sql server did a checkpoint?  Maybe it takes a while to checkpoint such a large transaction?  

Regardless, your log should hit equilibrium eventually. If not, then something is not working correctly.
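To watch for that equilibrium, two standard DBCC commands report log usage and the virtual log files (VLFs) mentioned above:

```sql
-- Report log size and percent-used for every database
DBCC SQLPERF (LOGSPACE);

-- List the virtual log files inside the current database's log;
-- Status = 2 marks VLFs that are still active (not yet reusable)
DBCC LOGINFO;
```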

I will monitor and keep my fingers crossed overnight, and then get back to you.

I've got to ask too: on a 100 GB database, is it wise to use the simple recovery model? Can you afford to lose data between full backups? Such a large database should probably have the attention of an experienced DBA. I'm not knocking you; I'm just trying to curb SQL Server's perception as a lesser database due to its ease of entry. See my post here for more of an explanation:

Your concern is a very valid one and I'm not offended by your question. The data being collected is from our firewall and we are only keeping 90 days of data on file. It was management's decision to go this route, even after I made it very clear that ALL data would be lost after the last backup should we have a failure. The 90 days of data retention was also management's decision. Currently we are experiencing storage space issues, and since this is such a large database and the transaction logs are large, they (management) are naturally looking at anything and everything to free up space.
Again thank you for your comments.

Ah, ok. As long as you're not computing billing or something from this data, I can see how it's expendable. Good luck.

Thanks, everything is looking OK.