Running a DELETE FROM tablename command and getting an error

Greetings Experts -

I was doing some development testing with data. As we were going through and cleaning up the tables in our database, we got the error below. Please advise.

Msg 9002, Level 17, State 2, Line 3

The transaction log for database 'Databasename' is full due to 'LOG_BACKUP'.

PadawanDBA (Operational DBA) commented:
A couple of cases could cause this:

Is autogrowth enabled for the transaction log?  If not, the transaction log is full and entries can't be marked as available for re-use until the transaction log is backed up.
If autogrowth is enabled, check whether it is capped at a certain size and compare that against the current size of the transaction log file.  If it has reached the cap, the same reason as above applies.
If autogrowth is enabled and the above isn't the case, check whether the drive housing the transaction log file is full - which would essentially mean that the transaction log can't grow, so it errors out.
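A quick way to check all three conditions is a sketch along these lines (run the first query in the context of the affected database; `sys.database_files`, `dbcc sqlperf` and `sys.databases` are standard SQL Server objects):

```sql
-- Size, cap, and growth settings for the log file of the current database
-- (size and max_size are in 8KB pages; max_size = -1 means unrestricted growth)
select
	name,
	size * 8 / 1024 as size_mb,
	case max_size when -1 then -1 else max_size * 8 / 1024 end as max_size_mb,
	growth,
	is_percent_growth
from sys.database_files
where type_desc = 'LOG'

-- Percentage of log space currently in use, for every database
dbcc sqlperf( logspace )

-- What the engine is waiting on before it can re-use log space
-- ('LOG_BACKUP' matches the error message above)
select name, log_reuse_wait_desc from sys.databases
```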

The fix for all of the possible scenarios I have mentioned is to back up the transaction log to mark VLFs as available for re-use and have the happy consequence of giving you room *inside* of the transaction log =)  

Another possibility you may run into, if you're deleting a considerable amount of data, is there not being enough room on the disk to allow the transaction log to grow enough for your operation.  In that case you could batch through your delete operations and take transaction log backups after each batch.
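A minimal sketch of that batching approach (the table name, database name, and backup path in angle brackets are placeholders):

```sql
-- Delete in batches so each log backup can mark VLFs re-usable
-- before the next batch needs log space
declare @rows int = 1

while @rows > 0
begin
	delete top ( 10000 ) from <tableNameHere>

	set @rows = @@rowcount

	backup log <dbNameHere> to disk = N'<backupPathHere>\<dbNameHere>_log.trn'
end
```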

Edit: Oh.  And the main reason for this behavior is that the database is likely in the full recovery model (I'm not 100% sure, but pretty sure you get a different error if the log fills and you're in simple recovery).  Full recovery enables point-in-time restore capabilities, but also means that transaction log entries are held until a transaction log backup occurs.  Simple recovery is an option if you don't need point-in-time recovery capabilities and want the VLFs available for re-use once all transactions within that VLF are complete.
Daniel Reynolds (Software Applications Developer / Integrator) commented:
If this is a dev database, I would consider just setting the database recovery model to simple, so that the log records from your deletes, inserts, etc. are truncated automatically at checkpoints rather than held for a log backup. This essentially gets rid of this error. (I would NOT recommend that for a production db.)
sj77 (Author) commented:
Thanks for the feedback fellas.

Great points PadawanDBA:

To answer your scenarios: autogrowth was originally enabled for the log, but there was a size limit. I changed it to allow unrestricted growth, closed out the DB, retried, and nada. :(

Secondly, yes, now that I think of it, there is a lot of data trying to be deleted. There are 3 or 4 tables with over 100k records each - what would you advise here?

Thirdly, the C:\ drive is showing only about 6.4MB free. Would increasing the VM storage space help? Or should I make a backup, compress the file, then delete?

Please advise.

PadawanDBA (Operational DBA) commented:
If you don't care about having a backup of the log, you can switch the database over to the simple recovery model and then do a one-time shrink of the transaction log and re-grow it to an appropriate value.  If you *do* care about having a backup of the log (this is the safer route), take a transaction log backup to a network location, then shrink the transaction log and re-grow it to an appropriate value - and you'll also want to configure regular transaction log backups as well.  If you need any help with any of that, just shout! (You seem familiar enough with general administration, so I didn't want to needlessly go over the details.) =)

Edit: I'm throwing in the shrink operation because you can't be running around with 6.4MB free on your system partition!
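The log-backup step mentioned above, as a sketch (the server, share, and database names in angle brackets are placeholders):

```sql
-- Back up the transaction log to a network location so the VLFs
-- can be marked as re-usable before shrinking the file
backup log <dbNameHere>
	to disk = N'\\<backupServerHere>\<shareNameHere>\<dbNameHere>_log.trn'
```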
sj77 (Author) commented:
I am learning the ropes. JR DBA here.

I work for a small company, so I'm learning as I go. I have honestly never done it before, but I'm sure the internet can teach me.

If you have some basic guidelines to go by that are easy to jot down - anything would be greatly appreciated. :)
PadawanDBA (Operational DBA) commented:

So you can change the recovery model one of two ways:

1) Using SSMS - Right click the database > Properties > Options > Change Recovery model to Simple > OK
2) Using T-SQL -
use master

alter database <dbNameHere> set recovery SIMPLE


Shrinking a data file can be done the same two ways:

1) SSMS - Right click the database > Tasks > Shrink > Files > Change the File Type option to Log > Check Available free space to make sure it is showing a high percentage (essentially you're verifying that there is free space in the log making the shrink worthwhile) > Ensure Release unused space option is selected > OK

2) T-SQL - You'll need the logical name of the file you want to shrink:
use master

select name
from sys.master_files
where
	db_name( database_id ) = 'DBATools' and 
	type_desc = 'LOG'


You'll then use the logical name (not the physical name) to shrink the file:
dbcc shrinkfile( <logicalNameHere>, 1 )


After that you can go in and grow the transaction log file to be a more appropriate value for the database (a good starting point if you have no other guidance is 20-25% of the database size).

Easiest method for this is to just use SSMS - Right click the database > Properties > Files > Change the Initial Size for the Transaction log file to whatever value (in MB) you land on.

T-SQL can of course still be used:
use master

alter database <dbNameHere> modify file( name = N'<logicalTLOGNameHere>', size = <whateverNumberHere>MB )


I used MB in the above query, but valid units are KB, MB, GB and TB.  Does that give you enough to go on?

sj77 (Author) commented:
Padawan -

Thank you for the information. I tried it on my end and was getting "function not defined" errors. I dug around a bit and found something that worked.

Thanks for steering me in the right direction.

Vitor Montalvão (MSSQL Senior Engineer) commented:
Also, it's not a good idea to have data files on the C: drive, and the main reason is what you're facing (database growth and no space left on disk).
sj77 (Author) commented:
I've requested that this question be deleted for the following reason:

PadawanDBA (Operational DBA) commented:
Shouldn't points be awarded for this, not delete the question?
sj77 (Author) commented:
I've requested that this question be deleted for the following reason:

this was resolved
sj77 (Author) commented:
This solved it!!!!
Microsoft SQL Server