Solved

Unexpected growth of log (.ldf) file (deleting a 500 MB table creates 10 GB of log)

Posted on 2011-03-18
Medium Priority
362 Views
Last Modified: 2012-05-11
I am trying to delete a table of about 500 MB, and it is creating a log file of 10 GB. If I try to delete 3-4 tables of that size simultaneously, it overflows my tempdb drive. I know deleting in small chunks is a better way to do it, but I cannot implement that in my scenario.

Is there a proper explanation of why deleting a 500 MB table creates 10 GB of log?

Question by:dbaner2
6 Comments
 
LVL 30

Expert Comment

by:Randy Downs
ID: 35166907
Maybe this will help - http://stackoverflow.com/questions/571750/make-sql-server-faster-at-manipulating-data-turn-off-transaction-logging

"configure the database (each database on a server can be different) for simple backups the log file won't grow until you back it up. This is done by setting the recovery mode to "simple".

With simple backups the log is only used to hold the state of transactions until they are fully written into the main database.
"
 
LVL 60

Expert Comment

by:Kevin Cross
ID: 35166938
Are you deleting everything in the tables? If so, use TRUNCATE instead, or you can try to break the deletes up into smaller batches.
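
For example (dbo.BigTable and the batch size are placeholders, not from the question):

-- Option 1: remove all rows; TRUNCATE deallocates whole pages instead of logging each row
TRUNCATE TABLE dbo.BigTable;

-- Option 2: delete in batches so each transaction, and its log usage, stays small
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.BigTable;
    IF @@ROWCOUNT = 0 BREAK;
END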
 
LVL 14

Expert Comment

by:Daniel_PL
ID: 35166973
Are you completely deleting the data from the tables?
If yes (and the table(s) are not referenced by foreign keys), you can truncate them. If you only need to keep part of the data (ideally much less than the part you are deleting), you can first copy the rows you want to persist into another table, truncate the original table, and then insert the kept rows back.
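
Roughly, the keep-then-truncate approach looks like this (the table, holding table, and WHERE filter are placeholders):

-- Copy the rows you want to keep into a holding table
SELECT *
INTO dbo.BigTable_Keep
FROM dbo.BigTable
WHERE KeepFlag = 1;   -- placeholder filter for the rows to persist

-- Truncate the original table (page deallocation is minimally logged)
TRUNCATE TABLE dbo.BigTable;

-- Put the kept rows back and drop the holding table
-- (if the table has an IDENTITY column you will need SET IDENTITY_INSERT and an explicit column list)
INSERT INTO dbo.BigTable
SELECT * FROM dbo.BigTable_Keep;

DROP TABLE dbo.BigTable_Keep;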

Author Comment

by:dbaner2
ID: 35166975
I understand that part, but I am more interested to know:

Why is deleting a 500 MB table creating 10 GB of log? What is SQL Server writing into the log file that is so much larger than the data itself?

 
LVL 30

Accepted Solution

by: Randy Downs (earned 1000 total points)
ID: 35167019
The log is probably recording each deleted row one at a time. Still, 10 GB seems excessive.
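
If you want to see how much log space the delete is actually consuming while it runs, these standard commands can help (run the query in the affected database):

-- Log size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Log bytes used/reserved by transactions currently open in this database
SELECT database_transaction_log_bytes_used,
       database_transaction_log_bytes_reserved
FROM sys.dm_tran_database_transactions
WHERE database_id = DB_ID();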
 
LVL 14

Assisted Solution

by: Daniel_PL (earned 1000 total points)
ID: 35167033
This is by design: a DELETE is performed row by row, so each deleted row has to be fully logged as its own log record, each with its own log sequence number to maintain.
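
You can see this directly with the undocumented fn_dblog function (test systems only; the column and operation names below are the ones fn_dblog exposes): each deleted row shows up as its own LOP_DELETE_ROWS record with its own LSN.

-- Count and size the per-row delete records in the active portion of the log
SELECT COUNT(*)                 AS delete_log_records,
       SUM([Log Record Length]) AS bytes_logged
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS';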