Solved

The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

Posted on 2014-04-01
Medium Priority
3,341 Views
Last Modified: 2014-04-04
Trying to delete 300 million records from a table that has 500 million records.
After 2 hours I get the following:

The transaction log for database 'tempdb' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

Any ideas? thx
Question by:JElster
4 Comments
 
LVL 52

Assisted Solution

by:Carl Tawn
Carl Tawn earned 1000 total points
ID: 39969304
I imagine you have simply run out of space in the tempdb log file. You'll have to check how big the log file is and either allow it more space (if disk capacity allows), or switch to deleting in batches, rather than attempting to delete the whole 300 million rows in one go.

Also, if you run the following statement, it should confirm what the issue is:
select log_reuse_wait_desc from sys.databases where database_id = db_id('tempdb')

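To see how much of the tempdb log is actually in use, and to grow it if disk capacity allows, something along these lines should work (a sketch only -- the logical log file name is usually 'templog', but confirm it from tempdb.sys.database_files first, and the new size below is just an example):

-- Log size and percent used for every database, including tempdb
DBCC SQLPERF(LOGSPACE);

-- Confirm the logical name and current size of the tempdb log file (usually 'templog')
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM tempdb.sys.database_files;

-- If the disk has room, give the tempdb log more space (8192MB is only an example)
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 8192MB);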

 
LVL 69

Accepted Solution

by:Scott Pletcher
Scott Pletcher earned 1000 total points
ID: 39969353
>> Trying to delete 300 million records from a table that has 500 million records. <<

Is that table in tempdb or another db?

If it's in another db, I suspect that snapshot isolation of some type is on for that table, causing SQL to have to keep versions of the deleted rows in tempdb.
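
If you want to confirm that, the row-versioning settings are visible in sys.databases -- 'YourDb' below is just a placeholder for the database that owns the table:

-- Snapshot isolation / read-committed snapshot settings for the source database
SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourDb';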


Deleting 300M rows all in one shot is not normally a good idea anyway -- way too large for a single transaction, especially if, heaven forbid, that sucker needs to roll back.

If at all possible, try deleting in batches, say 100K at a time.  Add a 1/3 or 1/2 second delay (WAITFOR DELAY) between batches if you can afford the time.
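
A rough sketch of that batching pattern (the table name, WHERE clause, and batch size are placeholders you'd need to adapt to your schema):

-- Delete in batches of ~100K rows, pausing half a second between batches.
-- 'dbo.BigTable' and the WHERE condition are placeholders for your own table/filter.
DECLARE @BatchSize int = 100000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE SomeDateColumn < '20130101';    -- your delete criteria here

    IF @@ROWCOUNT = 0
        BREAK;                            -- nothing left to delete

    WAITFOR DELAY '00:00:00.500';         -- breather between batches
END;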
 
LVL 1

Author Comment

by:JElster
ID: 39969376
Another db
 
LVL 75

Expert Comment

by:Anthony Perkins
ID: 39977986
In addition to the comments about doing this in batches (I would recommend no more than 500K rows per batch), I would also make sure that you are taking frequent Transaction Log backups if the Recovery Model for this database is Full.
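
For example, something along these lines run frequently while the batched delete is in progress ('YourDb' and the backup path are placeholders):

-- Confirm the recovery model of the database that owns the table
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'YourDb';

-- If it is FULL, back up the log regularly so the log space used by the deletes can be reused
BACKUP LOG [YourDb]
TO DISK = N'X:\SQLBackups\YourDb_log.trn'
WITH INIT;   -- INIT overwrites the file; in practice use a unique file name per backup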