Solved

SQL 2000: shrinking a large database doesn't reduce the log file size

Posted on 2009-07-03
12
1,060 Views
Last Modified: 2012-05-07
Hi to all,
On one of my SQL Servers (2000, Standard) the log file has grown dramatically and the disk is running out of space.
The total database size is 50 GB and the log file is 44 GB. Available disk space is 12 GB.
I've tried to perform a shrink, which in the past ran successfully, but for the last two days when I
try to shrink (from Enterprise Manager), the operation reports success but the log file stays the same size.
I would like to try this statement:
BACKUP LOG <databasename> WITH NO_LOG to empty the log file and then run the shrink again, but I'm worried about the low available disk space. Will the above statement consume additional space, and is it possible to estimate its execution time?
The server is in production 24 hours a day with high utilization.
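(For reference, a quick way to see how full the log actually is before doing anything; DBCC SQLPERF is a standard SQL 2000 command:)

-- Shows log size in MB and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE)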

Thanks in advance.
paolo
Question by:pablito70
12 Comments
 
LVL 57

Accepted Solution

by: Raja Jegan R (earned 250 total points)
ID: 24770579
Do you have any periodic backups defined for your database?

If not, define a maintenance job to do the following (I hope you have a full backup job running beforehand):

1. After your full backup, issue the following:

BACKUP LOG ur_db_name WITH TRUNCATE_ONLY

2. Once that is done, shrink the log file alone to keep its size under control:

DBCC SHRINKFILE ('ur_db_name_log', 1000);

where 'ur_db_name_log' is the logical name of the log file (DBCC SHRINKFILE takes the logical file name, not the database name) and 1000 is the target size in MB.
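If you are not sure of the logical file name, you can look it up first; a minimal sketch (the names here are placeholders):

USE ur_db_name
GO
-- List the logical file names; size is reported in 8 KB pages, so /128 gives MB
SELECT name, size/128 AS size_mb FROM sysfiles
GO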

Doing this regularly would help you out.

If possible, also take transaction log backups at regular intervals in addition to the tasks above; that keeps the log from growing this large in the first place.
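A minimal log backup looks like this (the path is an assumption; point it at a drive with enough space):

-- Back up the transaction log to disk so the inactive log space can be reused
BACKUP LOG ur_db_name TO DISK = 'D:\Backups\ur_db_name_log.trn'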
 
LVL 31

Expert Comment

by:RiteshShah
ID: 24770602
Can you please follow the commands given in the link below?

http://www.sqlhub.com/2008/05/sql-server-truncate-transaction-log.html

 
LVL 14

Assisted Solution

by: rob_farley (earned 250 total points)
ID: 24770697
Hold on... I'm not convinced about the advice that's been given here.

A database runs in either the Simple recovery model or the Full recovery model (there's a third, Bulk-Logged, but it's similar to Full).

In Simple, the system can only be restored to full/differential backups, and the log is regularly truncated (but not shrunk - truncated just means emptied out).

In Full, the system can be restored to any point in time, because you've been taking log backups too. So the log only gets truncated when a log backup is done.

If you're doing a backup with truncate_only, then you're basically saying the system is running in Simple mode, and you should just change the database to that.
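You can check which model a database is using with a one-liner (SQL 2000 syntax; 'your_db' is a placeholder):

-- Returns FULL, SIMPLE or BULK_LOGGED
SELECT DATABASEPROPERTYEX('your_db', 'Recovery')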

But... I'm not in favour of shrinking log files. It's better to monitor the portion of unused space in the log file, and then set the size to something which can easily cater for 'as large as it needs to be'. If you run out of space in the log file, you cause it to Autogrow, which can be a painful operation.

But if you're running in Simple mode, this should be a very rare occurrence. And if you're running Full, you should be backing up your transaction log a lot more often. Start with every 15 minutes, for example. If you have a disaster, you will need a database backup and all the log backups since then - but the more regular the log backups, the smaller the used portion of the log file.

And remember - any time you do a TRUNCATE_ONLY backup (which doesn't actually back anything up), you lose the ability to restore your database to 'just now'.

I'd start by considering whether you want your database running in Simple mode or Full mode. And if you decide that you want Full, but you don't have the space to back up your log right now, then do this (a T-SQL sketch follows the list):

1. Back up your database (not your log) - you need to do this in case something goes wrong in the steps that follow.
2. Switch to Simple (database properties, Options).
3. Run the T-SQL command "CHECKPOINT" to force a truncation.
4. Notice the amount of free space in your log file increase massively.
5. Shrink your log a bit (not completely - just a bit. Maybe to 5 GB?).
6. Switch back to Full.
7. Back up your database again (because you can't restore log backups onto the previous backup).
8. Now start regularly backing up your transaction log (but NOT using the truncate_only option).
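Putting those steps together, a minimal T-SQL sketch (assuming a database named your_db whose log file has the logical name your_db_log, and backup paths that exist):

-- 1. Full backup first, in case something goes wrong
BACKUP DATABASE your_db TO DISK = 'D:\Backups\your_db_full.bak'
GO
-- 2. Switch to the Simple recovery model
ALTER DATABASE your_db SET RECOVERY SIMPLE
GO
-- 3. Force a truncation of the log
USE your_db
CHECKPOINT
GO
-- 4. Shrink the log file a bit (5000 MB here, roughly 5 GB)
DBCC SHRINKFILE ('your_db_log', 5000)
GO
-- 5. Switch back to Full
ALTER DATABASE your_db SET RECOVERY FULL
GO
-- 6. Fresh full backup, so later log backups have something to restore onto
BACKUP DATABASE your_db TO DISK = 'D:\Backups\your_db_full_after.bak'
GO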

I hope this helps clarify some things...

Rob
 
LVL 31

Expert Comment

by:RiteshShah
ID: 24770724
Hi Rob,

I agree 100% with what you said, but sometimes one doesn't have any option other than truncating and shrinking the log, which is what the author is facing right now. In that case, if you are not able to shrink, the only option is to truncate and then shrink. Again, I agree it is not good practice to shrink the log frequently, but in exceptional situations you should go for it.
 
LVL 14

Expert Comment

by:rob_farley
ID: 24770742
Yup - I've suggested shrinking it a bit, but definitely not 'regularly' as rrjegan17 recommended.
 
LVL 4

Expert Comment

by:Veerabhadraiahhv
ID: 24770948
Hi,
Use this script to clear the log of every database whose log is larger than 1 MB:


DBCC SQLPERF(LOGSPACE)
GO
-- Recreate the procedure from scratch if it already exists
IF OBJECT_ID('pCLEAR_LOG_SIZE') IS NOT NULL
BEGIN
      DROP PROCEDURE pCLEAR_LOG_SIZE
END
GO
CREATE PROCEDURE pCLEAR_LOG_SIZE
AS
BEGIN
      DECLARE @DBNAME VARCHAR(128), @LOG_SIZE REAL, @LOG_PERCENT REAL
      DECLARE @SQL VARCHAR(8000)

      -- Capture the output of DBCC SQLPERF(LOGSPACE) into a work table
      SET @SQL = 'IF OBJECT_ID(''LOGSPACE'') IS NOT NULL
            BEGIN DROP TABLE LOGSPACE END
            CREATE TABLE LOGSPACE(DBNAME VARCHAR(128), LOGSIZE REAL, LOGPERCENT REAL, STATUS INT)'
      EXEC (@SQL)
      SET @SQL = 'INSERT INTO LOGSPACE EXEC (''DBCC SQLPERF(LOGSPACE) WITH NO_INFOMSGS'')'
      EXEC (@SQL)

      -- Walk through the user databases; skip the system databases,
      -- whose logs cannot (or should not) be truncated this way
      DECLARE cLOGSPACE CURSOR FOR
            SELECT DBNAME, LOGSIZE, LOGPERCENT FROM LOGSPACE
            WHERE DBNAME NOT IN ('master', 'model', 'msdb', 'tempdb');

      OPEN cLOGSPACE
      FETCH NEXT FROM cLOGSPACE INTO @DBNAME, @LOG_SIZE, @LOG_PERCENT

      WHILE @@FETCH_STATUS = 0
      BEGIN
            -- Only touch logs larger than 1 MB
            IF @LOG_SIZE >= 1.00
            BEGIN
                  -- Truncate the log, then shrink the whole database
                  SET @SQL = 'BACKUP LOG [' + @DBNAME + '] WITH TRUNCATE_ONLY; DBCC SHRINKDATABASE(''' + @DBNAME + ''') WITH NO_INFOMSGS'
                  EXEC (@SQL)
            END

            FETCH NEXT FROM cLOGSPACE INTO @DBNAME, @LOG_SIZE, @LOG_PERCENT;
      END
      CLOSE cLOGSPACE;
      DEALLOCATE cLOGSPACE;
END
GO
EXEC pCLEAR_LOG_SIZE
GO
DBCC SQLPERF(LOGSPACE)

 
LVL 31

Expert Comment

by:RiteshShah
ID: 24770985
This seems like a double-edged sword :) if you don't handle it with care, you will get hurt ;)
 
LVL 2

Author Comment

by:pablito70
ID: 24773265
Hi to all,
thanks to all for the answers.
I tried the suggested solution, but the log file still remains at 44 GB (with 42 GB used).
Tonight I'll try stopping and restarting the services and see what happens after rerunning the suggested scripts.
I will let you know the results.

Thanks in advance.
paolo
 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24775538
>> I tried the suggested solution, but the log file still remains at 44 GB (with 42 GB used).

You need to follow the two steps I described in my first comment.
Kindly try them once, specifying the appropriate log file size in the second step.

I suspect you haven't executed that second statement yet.
 
LVL 14

Expert Comment

by:rob_farley
ID: 24775592
rrjegan17,

If 42 GB is used, it won't shrink much.

Pablito - can you describe what you have done? For example, have you considered the Simple recovery model? Have you run a checkpoint command? Have you backed up the log?
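One more thing worth checking is whether an open transaction is keeping the log pinned (DBCC OPENTRAN is standard on SQL 2000; 'your_db' is a placeholder):

-- Reports the oldest active transaction in the database, if any
DBCC OPENTRAN ('your_db')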

Rob
 
LVL 57

Expert Comment

by:Raja Jegan R
ID: 24775626
rob_farley,

>> If 42 GB is used, it won't shrink much.
   I asked for a full backup of the database in my first step.
So if a full backup (or even a differential backup) has been taken, the active portion of the transaction log will be freed up, and we can then issue the following to reclaim that space:

BACKUP LOG ur_db_name WITH TRUNCATE_ONLY

And after that, to release the free space back to the OS, we need to shrink the log file, which will do what is required:

DBCC SHRINKFILE ('ur_db_name_log', 1000);

Hope this clarifies.
 
LVL 2

Author Comment

by:pablito70
ID: 24777580
Hi to all,
last night I successfully truncated the log.
I've noticed that when there are transactions such as INSERT statements affecting a large number of records,
shrinking does not empty the log file; however, stopping and restarting the service gave me
a chance to truncate the log with the statement:

BACKUP LOG db_datawarehouse WITH TRUNCATE_ONLY

and then shrink with:

USE SPP3DB
CHECKPOINT
GO

instead of:

DBCC SHRINKFILE ('SPP3DB', 1000)

The latter statement had still not completed after 1.5 hours.
Please consider that the largest table has 8.6 million records.
With CHECKPOINT only, the shrink took less time using Enterprise Manager.

Thanks to all for the help.
paolo
