Solved

Shrinking Logs

Posted on 2006-06-08
Medium Priority
Last Modified: 2012-06-21
I have the code taken from Microsoft that shrinks the log file. It worked great, except that I had to run it 3 times. Can it be reprogrammed to loop until the log size is the same as the size I tell it to be, i.e. @NewSize?

SET NOCOUNT ON
   DECLARE @LogicalFileName sysname,
           @MaxMinutes INT,
           @NewSize INT

   -- *** MAKE SURE TO CHANGE THE NEXT 4 LINES WITH YOUR CRITERIA. ***
   USE     NEWARCS              -- This is the name of the database
                                  -- for which the log will be shrunk.
   SELECT  @LogicalFileName = 'NEWARCS_LOG',  -- Use sp_helpfile to
      -- identify the logical file
      -- name that you want to shrink.
           @MaxMinutes = 30,      -- Limit on time allowed to wrap log.
           @NewSize    = 1       -- in MB

   -- Setup / initialize
   DECLARE @OriginalSize int
   SELECT @OriginalSize = size -- in 8K pages
     FROM sysfiles
     WHERE name = @LogicalFileName
   SELECT 'Original Size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30),@OriginalSize) + ' 8K pages or ' +
           CONVERT(VARCHAR(30),(@OriginalSize*8/1024)) + 'MB'
     FROM sysfiles
     WHERE name = @LogicalFileName

   CREATE TABLE DummyTrans
     (DummyColumn char (8000) not null)

   -- Wrap log and truncate it.
   DECLARE @Counter   INT,
           @StartTime DATETIME,
           @TruncLog  VARCHAR(255)
   SELECT  @StartTime = GETDATE(),
           @TruncLog = 'BACKUP LOG ['+ db_name() + '] WITH TRUNCATE_ONLY'
   -- Try an initial shrink.
   DBCC SHRINKFILE (@LogicalFileName, @NewSize)

   EXEC (@TruncLog)

   -- Wrap the log if necessary.
   WHILE     @MaxMinutes > DATEDIFF (mi, @StartTime, GETDATE()) -- time has not expired
         AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)  -- the log has not shrunk    
         AND (@OriginalSize * 8 /1024) > @NewSize  -- The value passed in for new size is smaller than the current size.
     BEGIN -- Outer loop.
       SELECT @Counter = 0
       WHILE  ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
         BEGIN -- update
           INSERT DummyTrans VALUES ('Fill Log')  -- Because it is a char field it inserts 8000 bytes.
           DELETE DummyTrans
           SELECT @Counter = @Counter + 1
         END   -- update
       EXEC (@TruncLog)  -- See if a trunc of the log shrinks it.
     END   -- outer loop
   SELECT 'Final Size of ' + db_name() + ' LOG is ' +
           CONVERT(VARCHAR(30),size) + ' 8K pages or ' +
           CONVERT(VARCHAR(30),(size*8/1024)) + 'MB'
     FROM sysfiles
     WHERE name = @LogicalFileName
   DROP TABLE DummyTrans
   PRINT '*** Perform a full database backup ***'
   SET NOCOUNT OFF
                              
Question by:arajoe
4 Comments
 
LVL 75

Expert Comment

by:Anthony Perkins
ID: 16863741
Why in the world are you shrinking the log?  Please don't say that you are running out of disk space.
 
LVL 50

Expert Comment

by:Lowfatspread
ID: 16863762
You might like to try this...

/*
    SHRINK LOG  SCRIPT


Q. Why won't my log shrink in SQL 7?

A. This is because the log file is internally divided into smaller virtual logs and shrinking will only happen when the active part of the log is at the beginning of your log file. You can use the DBCC LOGINFO(dbname) command to see where the active part of the transaction log is - any logs marked with a Status of 2 are active.

The output below shows that the active virtual log file is the last in the physical file, therefore no real shrinkage is possible.

FileId FileSize StartOffset FSeqNo Status Parity CreateTime
------ -------- ----------- ------ ------ ------ -----------------------
2 13041664 8192 0 0 0 1999-03-16 10:27:24.917
2 13041664 13049856 0 0 0 1999-03-16 10:25:56.730
2 13041664 26091520 0 0 0 1999-03-16 10:25:56.730
2 13041664 39133184 0 0 0 1999-03-16 10:25:56.730
2 13041664 52174848 0 0 0 1999-03-16 10:25:56.730
2 13041664 65216512 0 0 0 1999-03-16 10:25:56.730
2 13041664 78258176 0 0 0 1999-03-16 10:25:56.730
2 13557760 91299840 5 2 64 1999-04-06 12:32:27.833

In order to be able to shrink the log :-

1. Backup or truncate the log to make sure earlier Virtual Log Files are reusable (check their status)
2. Execute dummy transactions (in a loop) against a test table until the active virtual log moves back to the start of the list.
3. Execute DBCC SHRINKDATABASE or DBCC SHRINKFILE to mark a shrinkpoint. (The actual shrink is performed asynchronously normally)
4. Issue a backup or truncate log command to force the shrinkage to happen immediately.

If it is at the end of the log file you could write a small while loop that does some inserts in a test table to move the active part to the beginning of the real file. Then the SHRINKFILE command should work ok - note that SHRINKFILE works asynchronously.

As a last resort you can always checkpoint the database, shutdown SQL Server and delete the physical log file. When SQL restarts it will create a new 2Mb log device. Note that this is unsupported by Microsoft and whilst it has always worked for the author, others have reported problems doing this.
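A detach/attach variant of this last resort can be sketched as follows (equally unsupported as a way to discard the log; the database name and file path below are placeholders, not from this thread):

```sql
-- UNSUPPORTED last resort: detach the database, delete the physical .ldf
-- file at the operating-system level, then reattach the data file only.
-- 'MyDatabase' and the .mdf path are placeholders.
EXEC sp_detach_db 'MyDatabase'
-- ...delete the physical .ldf file here, outside SQL Server...
EXEC sp_attach_single_file_db 'MyDatabase', 'E:\Data\MyDatabase.mdf'
-- SQL Server builds a small new log file during the attach.
```

As with the shutdown method, make sure the database was cleanly checkpointed first, and take a full backup before attempting it.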

The following stored procedure was first published in the February 2000 issue of Microsoft SQL Server Professional, Pinnacle Publishing. Reproduced here by kind permission of Andrew Zanevsky.
*/


--use e2systemtest
--go

if object_id( 'sp_force_shrink_log' ) is not null drop proc sp_force_shrink_log

go

create proc sp_force_shrink_log
--------------------------------------------------------------------------------
-- Purpose: Shrink transaction log of the current database in SQL Server 7.0.
-- Author: Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000
-- zanevsky@azdatabases.com
--------------------------------------------------------------------------------
@target_percent tinyint = 0,
@target_size_MB int = 10,
@max_iterations int = 1000,
@backup_log_opt nvarchar(1000) = 'with truncate_only'
as
set nocount on

declare @db sysname,
@last_row int,
@log_size decimal(15,2),
@unused1 decimal(15,2),
@unused decimal(15,2),
@shrinkable decimal(15,2),
@iteration int,
@file_max int,
@file int,
@fileid varchar(5)

select @db = db_name(),
@iteration = 0

create table #loginfo (
id int identity,
FileId int,
FileSize numeric(22,0),
StartOffset numeric(22,0),
FSeqNo int,
Status int,
Parity smallint,
CreateTime datetime
)

create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )

create table #logfiles ( id int identity(1,1), fileid varchar(5) not null )
insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40
select @file_max = @@rowcount

if object_id( 'table_to_force_shrink_log' ) is null
exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )

insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount

select @log_size = sum( FileSize ) / 1048576.00,
@unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
@shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
from #loginfo

select @unused1 = @unused -- save for later

select 'iteration' = @iteration,
'log size, MB' = @log_size,
'unused log, MB' = @unused,
'shrinkable log, MB' = @shrinkable,
'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )

while @shrinkable * 100 / @log_size > @target_percent
and @shrinkable > @target_size_MB
and @iteration < @max_iterations begin
select @iteration = @iteration + 1 -- this is just a precaution

exec( 'insert table_to_force_shrink_log select name from sysobjects
delete table_to_force_shrink_log')

select @file = 0
while @file < @file_max begin
select @file = @file + 1
select @fileid = fileid from #logfiles where id = @file
exec( 'dbcc shrinkfile( ' + @fileid + ' )' )
end

exec( 'backup log [' + @db + '] ' + @backup_log_opt )

truncate table #loginfo
insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, CreateTime ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount

select @log_size = sum( FileSize ) / 1048576.00,
@unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
@shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
from #loginfo

select 'iteration' = @iteration,
'log size, MB' = @log_size,
'unused log, MB' = @unused,
'shrinkable log, MB' = @shrinkable,
'shrinkable %' = convert( decimal(6,2), @shrinkable * 100 / @log_size )
end

if @unused1 < @unused
select 'After ' + convert( varchar, @iteration ) +
' iterations the unused portion of the log has grown from ' +
convert( varchar, @unused1 ) + ' MB to ' +
convert( varchar, @unused ) + ' MB.'
union all
select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
union all
select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
union all
select 'Sometimes the log would not shrink to a size smaller than several Megabytes.' where @unused <= 10

else
select 'It took ' + convert( varchar, @iteration ) +
' iterations to shrink the unused portion of the log from ' +
convert( varchar, @unused1 ) + ' MB to ' +
convert( varchar, @unused ) + ' MB'

exec( 'drop table table_to_force_shrink_log' )

Return
go

exec sp_force_shrink_log
 --   @target_size_MB=1
go
drop proc sp_force_shrink_log
go

 

Author Comment

by:arajoe
ID: 16863792
Sorry, I am using SQL 2000. I want to shrink it because I am deleting a bunch of records, which explodes the log to upwards of 3 GB. Is there a different/better way of doing it? I am not necessarily running out of disk space.
 
LVL 75

Accepted Solution

by:
Anthony Perkins earned 2000 total points
ID: 16863815
All you have to do is back up the database and the transaction log.  The log backup marks all inactive entries as reusable, and you can then shrink the transaction log.  Just understand that repeatedly shrinking the transaction log is not good for performance.
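That sequence can be sketched as follows, using the database and logical log file name from the question (the backup paths are placeholders; use sp_helpfile to confirm your logical file name):

```sql
-- Placeholders: the disk paths are examples only.
-- 1. Full database backup.
BACKUP DATABASE NEWARCS TO DISK = 'E:\Backup\NEWARCS.bak'
-- 2. Log backup: marks the inactive portion of the log as reusable.
BACKUP LOG NEWARCS TO DISK = 'E:\Backup\NEWARCS_log.bak'
-- 3. Shrink the log file to a target size in MB.
USE NEWARCS
DBCC SHRINKFILE (NEWARCS_LOG, 100)
```

If the file does not shrink, the active virtual log may still sit at the end of the file (see DBCC LOGINFO above), in which case another log backup after some activity usually lets the shrink complete.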
