Solved

How and Why does a Transaction Log grow?

Posted on 2011-02-17
525 Views
Last Modified: 2012-05-11
Today the support team was performing a monthly data upload (around 600 MB), and the log grew until it filled the 6 GB available on the server, at which point the transaction log was full; it had around 400 VLFs when I checked. Now my brightest BA wants to know why it takes 6 GB and wants to grow beyond that. I have tried my best to convince him that this is how the log works, that it is by design, and that we can only control the growth; we cannot tell it "use only this much and complete the transaction within that space". That is as far as I know.

Guys, I want to know how much log space a unit of data consumes, say 1 unit is one MB.
In our case, the DML operation is an INSERT.
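(For context on the numbers above: log usage and the VLF count can be checked with commands like these; `DBCC LOGINFO` is undocumented but commonly used, and returns one row per VLF.)

```sql
-- Log size and percent of log space used, for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- One row per VLF in the current database; the row count is the VLF count
DBCC LOGINFO;
```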
Question by:anilkullam
12 Comments
 
LVL 8

Expert Comment

by:Toxacon
ID: 34918864
Basically, everything is written to the log first. During a full backup, the log is dumped to the database; an incremental backup will back up only the log, as nothing has changed in the DB itself (every change is in the log only).

You need to change the database recovery model to Simple. Then do a full backup and the transaction log will be able to shrink.
 
LVL 15

Expert Comment

by:Aaron Shilo
ID: 34918878
hi

You can't translate the size of the transaction log into transaction size, since every transaction varies in size, i.e. with the statement size and the length of the data.

What you should do is use BULK_LOGGED loads.

Before you load data onto your server, change the database recovery model to BULK_LOGGED.

This should reduce log growth by up to 90%.
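A minimal sketch of that workflow (assuming SQL Server 2005 or later; `YourDb` and the backup path are placeholders, and note that minimal logging also requires a bulk operation such as `BULK INSERT`, `bcp`, or `SELECT INTO`, ideally with `TABLOCK`):

```sql
-- Switch to bulk-logged for the duration of the load
ALTER DATABASE YourDb SET RECOVERY BULK_LOGGED;

-- ... run the bulk load here (BULK INSERT / bcp / SELECT INTO with TABLOCK) ...

-- Switch back, then take a log backup so the log space can be reused
ALTER DATABASE YourDb SET RECOVERY FULL;
BACKUP LOG YourDb TO DISK = 'C:\Backups\YourDb_log.trn';
```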
 
LVL 15

Expert Comment

by:Aaron Shilo
ID: 34918907
@Toxacon

Changing the database recovery model to Simple will not minimize the log size; you need to use BULK_LOGGED.

Also, there is no such thing as an "incremental backup" in SQL Server, only a "differential backup", and that wouldn't help here.

 

Author Comment

by:anilkullam
ID: 34919008
Hi all, I knew I would get these kinds of replies. Apart from controlling the log and how:
My question is, what is the growth of the log relative to the amount of data uploaded?
Say 1 MB of data is inserted, what is the log growth? And what if the data is 1 GB or more?
 
LVL 8

Expert Comment

by:Toxacon
ID: 34919012
@ashilo

The recovery model dictates how much data is kept in the log and therefore affects its size. Under the Simple model, the log is truncated at every checkpoint, so committed transactions are not retained in the log.

Bulk-Logged is a temporary recovery model for big data inserts. You can use it temporarily, but Microsoft does not recommend it for continuous use.
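To check which recovery model a database is currently using (works on SQL Server 2005 and later):

```sql
SELECT name, recovery_model_desc
FROM sys.databases;
```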
 

Author Comment

by:anilkullam
ID: 34919055
I missed giving you this info: the DB is in the Simple recovery model.
 
LVL 8

Expert Comment

by:Toxacon
ID: 34919107
Data is arranged on 8 KB pages, so large amounts of data consume roughly as much disk space as their own size. In addition, the indexes, if any, consume disk space.

In the transaction log, every change consumes space for transaction metadata along with the data itself.
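To see this in practice, the reserved data and index space for a table can be inspected like so (sketch; `dbo.YourTable` is a placeholder):

```sql
-- Reserved, data, index, and unused space for one table
EXEC sp_spaceused 'dbo.YourTable';
```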
 
LVL 15

Accepted Solution

by:
Aaron Shilo earned 250 total points
ID: 34919108
@anilkullam:

Use this procedure to work out the data/log ratio.

What I think you should do is create a new database, load a sample of data into it, and then use this proc to analyze the ratio.

CREATE PROCEDURE [MetaBOT].[usp_LogWatch] AS
DECLARE @SQL VARCHAR(5000)
--Clean up the temp object if a previous run did not drop it
IF OBJECT_ID('tempdb..#usp_LogWatch_Results') IS NOT NULL
   BEGIN
       DROP TABLE #usp_LogWatch_Results
   END
--Create temporary table to store results
CREATE TABLE #usp_LogWatch_Results ([Database Name] sysname, [File Type] VARCHAR(4), [Total Size in Mb] INT)
--Create SQL script to run against all databases on the instance
--([size] is in 8 KB pages, so dividing by 128.0 converts it to MB)
SELECT @SQL =
'USE [?]
INSERT INTO #usp_LogWatch_Results([Database Name], [File Type], [Total Size in Mb])
SELECT DB_NAME(), [File Type] =
CASE type
WHEN 0 THEN ''Data''
WHEN 1 THEN ''Log''
END,
[Total Size in Mb] =
CASE CEILING([size]/128.0)
WHEN 0 THEN 1
ELSE CEILING([size]/128.0)
END
FROM sys.database_files
'
--Run the command against each database
EXEC sp_MSforeachdb @SQL
SELECT D.[Database Name], D.[Total Data File Size In Mb], L.[Total Log File Size In Mb],
        CAST(CAST(L.[Total Log File Size In Mb] AS decimal(8,1))/CAST(D.[Total Data File Size In Mb]
             AS decimal(8,1)) AS decimal(4,2)) AS [Log::Data Ratio]
FROM
        (
        SELECT [Database Name], [File Type], SUM([Total Size in Mb]) AS [Total Data File Size In Mb]
        FROM #usp_LogWatch_Results
        WHERE [File Type] = 'Data'
        GROUP BY [Database Name], [File Type]
        ) AS D INNER JOIN
        (
        SELECT [Database Name], [File Type], SUM([Total Size in Mb]) AS [Total Log File Size In Mb]
        FROM #usp_LogWatch_Results
        WHERE [File Type] = 'Log'
        GROUP BY [Database Name], [File Type]
        ) AS L ON D.[Database Name] = L.[Database Name]
--Only report databases with a large log and a high log-to-data ratio;
--lower these thresholds when testing against a small sample database
WHERE L.[Total Log File Size In Mb] > 500 AND
        CAST(CAST(L.[Total Log File Size In Mb] AS decimal(8,1))/CAST(D.[Total Data File Size In Mb]
             AS decimal(8,1)) AS decimal(4,2)) > 0.5
ORDER BY CAST(CAST(L.[Total Log File Size In Mb] AS decimal(8,1))/CAST(D.[Total Data File Size In Mb]
             AS decimal(8,1)) AS decimal(4,2)) DESC,
        L.[Total Log File Size In Mb] DESC
--Clean up your temporary objects
DROP TABLE #usp_LogWatch_Results
 
LVL 8

Expert Comment

by:Toxacon
ID: 34919163
If it is already in the Simple model, then do an individual file shrink after a full backup. You can also use Enterprise Manager or SQL Server Management Studio to get a visual report on empty space in the log.
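A sketch of the shrink itself (the logical log file name, here `YourDb_log`, comes from `sys.database_files`; the target of 1024 MB is just an example):

```sql
USE YourDb;

-- Find the logical name of the log file
SELECT name, type_desc FROM sys.database_files;

-- Shrink the log file down to roughly 1024 MB
DBCC SHRINKFILE (YourDb_log, 1024);
```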
 

Author Comment

by:anilkullam
ID: 34919549
In generic terms, how much space might the log require to insert one GB of data?
 
LVL 11

Assisted Solution

by:JoeNuvo
JoeNuvo earned 250 total points
ID: 34921401
If you control the code that does the insert,

try wrapping the inserts in BEGIN TRANSACTION / COMMIT TRANSACTION blocks, say every 1,000 or 10,000 inserts.
This might help to control the log size.
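A hypothetical sketch of such a batched load (table and column names are placeholders; under the Simple recovery model, committing each batch lets log space be reused at the next checkpoint, rather than one giant open transaction pinning the whole log):

```sql
DECLARE @batch INT, @rows INT
SET @batch = 10000
SET @rows = 1

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION
    -- Move the next batch of not-yet-copied rows
    INSERT INTO dbo.TargetTable (id, col1)
    SELECT TOP (@batch) s.id, s.col1
    FROM dbo.StagingTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable t WHERE t.id = s.id)
    SET @rows = @@ROWCOUNT
    COMMIT TRANSACTION
END
```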
 

Author Closing Comment

by:anilkullam
ID: 34923635
I still didn't get my answer yet, guys: in generic terms, ideally, how much space does the log require for an insert operation that inserts 1 GB of data?

As far as the suggestions are concerned, I can't go with them, since the activity is done by the support team, who sit at a different location.

From your experience, please post the values.
