MS SQL Full Disk, transaction log does not shrink

Posted on 2009-04-28
Medium Priority
Last Modified: 2012-06-27

I have a very strange problem.

The situation on Monday morning:
Database server with a database file of roughly 48 GB. The transaction log is roughly 38 GB. Recovery model is FULL, with a nightly backup.
One of the tables contains 7 million records.

I wanted to delete 5 million records to free up some disk space and make the database faster again.

After the delete query had been running for two hours, the server returned the error "transaction log is full" because the disk was full. The transaction log had grown to more than 45 GB.
The database went into "in recovery" mode. After a few minutes the database was online again, but the disk was still full.
I ran a shrink on the database, but nothing happened (the transaction log is still too large)!
After that, I did a backup, because I had read that the transaction log becomes smaller after a backup.
No result: the transaction log and database are still big.

I shrank another database on the server, which gave me 5 GB of workspace on the disk again. I then deleted about 100,000 records, in batches of 5,000, with a shrink after roughly every 100,000 records. Result: the database is a few hundred MB smaller, but the transaction log stays big (or even gets larger).
After another 100,000 records or so, the transaction log explodes again and the disk is full again.


- Why doesn't the transaction log shrink after a "shrink" or a "backup"?
- How do I delete my 5 million records without problems (to free up disk space again)?
- Is it a good idea to switch the recovery model from "full" to "simple"? Will the 43 GB transaction log be deleted, or at least become a lot smaller?
- There are a lot of indexes on the table. Is it required to rebuild the indexes before the transaction log will become smaller?
- ...

Any advice will be appreciated!

Thanks a lot!

Question by:corcon
LVL 12

Assisted Solution

GuitarRich earned 400 total points
ID: 24248980
Whenever I've had a transaction log that will not shrink with either a shrink or a backup, I find that this combination nearly always shrinks the log to its smallest size. Just beware: if you need the log entries, make sure you have a backup before doing this:

use <dbname> 
backup log <dbname> with no_log 
dbcc shrinkfile (...) 
backup log <dbname> with no_log 


LVL 12

Assisted Solution

by:Chris M
Chris M earned 400 total points
ID: 24248985
You should realise that when the transaction log is full, you can't perform any more transactions, because the DB will try to log what you're doing but has no more space.
You can handle this without deleting data first.
Change the database recovery model from FULL to SIMPLE, then do a "backup log [dbname] with truncate_only" (without quotes).
Then run "DBCC SHRINKFILE(filename, 1);", where filename is the logical name of your log file. (Use "sp_helpfile" to get the file names.)
This will free the log space and you will be able to work again.
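Putting those steps together, a minimal sketch for SQL Server 2005-era versions (the database name MyDb and the logical log file name MyDb_log are placeholders; get your own from sp_helpfile):

```sql
-- Switch to SIMPLE so the log no longer has to be kept for log backups.
ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;
GO
USE [MyDb];
GO
-- Discard the inactive log records without backing them up.
-- NOTE: WITH TRUNCATE_ONLY works on SQL Server 2005 and earlier;
-- it was removed in SQL Server 2008 (under SIMPLE it is also unnecessary).
BACKUP LOG [MyDb] WITH TRUNCATE_ONLY;
GO
-- Shrink the log file to (roughly) its minimum size, target 1 MB.
DBCC SHRINKFILE (MyDb_log, 1);
GO
```

Note that this breaks the log backup chain, so take a full backup afterwards if you switch back to FULL recovery.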

Assisted Solution

rslangen earned 400 total points
ID: 24248987
We had this problem a while ago. After changing the recovery model to simple and doing a shrink, the size became normal again. You can change it back to full afterwards if needed.

Assisted Solution

SyferB earned 400 total points
ID: 24249018
You need to truncate the transaction log.
Run the following code:
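(The code did not come through in this post. A sketch of the usual truncate-and-shrink pattern on SQL Server 2005, with placeholder database and logical log file names:)

```sql
-- Discard the inactive portion of the log without backing it up
-- (SQL Server 2005 and earlier only).
BACKUP LOG [MyDb] WITH TRUNCATE_ONLY;
-- Release the freed space back to the OS; MyDb_log is the logical
-- log file name, as reported by sp_helpfile.
DBCC SHRINKFILE (MyDb_log, 1);
```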

LVL 31

Assisted Solution

RiteshShah earned 400 total points
ID: 24249209

Accepted Solution

corcon earned 0 total points
ID: 24250036
Thanks to all.

I've changed my database to simple and shrank the log file, which is now 1 MB. Cool!

Now I can delete my records.

I will assign the points to you all.

LVL 31

Expert Comment

ID: 24250061
Yes, you can delete, but keep an eye on the log file and delete your records in small chunks.
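A minimal sketch of deleting in small chunks (the table name dbo.BigTable and the WHERE predicate are placeholders; under SIMPLE recovery a CHECKPOINT between batches lets the log space be reused instead of growing):

```sql
-- Delete in batches of 5,000 so each transaction stays small.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE CreatedDate < '20080101';   -- placeholder predicate

    -- Stop once no rows matched the predicate.
    IF @@ROWCOUNT = 0 BREAK;

    -- Under SIMPLE recovery, a checkpoint marks the log space reusable.
    CHECKPOINT;
END
```

DELETE TOP (n) requires SQL Server 2005 or later; on older versions use SET ROWCOUNT instead.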

Expert Comment

ID: 24696409
I know this is a closed question, but I had the same issue and could not find a good solution. This one works great, every time. The log file will be shrunk to 1024 KB, and sp_helpfile gives you a nice result in the results pane. You could also put this into a stored procedure and set up a maintenance plan to run it at a given interval.

sp_helpfile result columns: name | fileid | filename | filegroup | size | maxsize | growth | usage

use databasename
exec sp_helpfile
Backup LOG databasename with truncate_only
dbcc shrinkfile (logfilename, 1)

LVL 12

Expert Comment

by:Chris M
ID: 35188871
Good to know you're happy.
This is just to caution you: if you do a Backup LOG db_name with truncate_only;, make sure that you then do a full backup of your database, otherwise you will not be able to do a point-in-time recovery or use subsequent log backups to roll forward when recovering after truncating your log.

A differential backup right after truncating your logs also suffices.

Good luck.
You may reduce the rate of growth of your log files by changing the recovery model from Full to Bulk-logged just before doing either integrity checks (checking data and index linkage) or re-index and re-org exercises. Remember to change the recovery model back to Full right after these maintenance exercises.

Under the bulk-logged model, the server minimally logs bulk operations (such as index rebuilds) instead of logging every row change, which greatly reduces log growth during reindex/reorg/integrity-check exercises.
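A sketch of that switch around an index rebuild (database, table, and backup path names are placeholders):

```sql
-- Minimally log the bulk operation that follows.
ALTER DATABASE [MyDb] SET RECOVERY BULK_LOGGED;
GO
-- Index rebuilds are minimally logged under BULK_LOGGED.
ALTER INDEX ALL ON dbo.BigTable REBUILD;
GO
-- Back to full recovery for normal operation.
ALTER DATABASE [MyDb] SET RECOVERY FULL;
GO
-- Take a log backup afterwards so the backup chain stays usable.
BACKUP LOG [MyDb] TO DISK = N'C:\Backups\MyDb_log.trn';
```

Note that a log backup taken while a minimally logged operation occurred cannot be used for point-in-time recovery within that interval, which is why you back up again right after switching back to Full.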

All the best & regards,


Question has a verified solution.
