Craig Beamson asked:

SQL Server 2005 - reduce database/logfile size on system that is running out of disk space

I need some basic advice on how to deal with SQL logfiles which seem excessive on a server with limited free filespace.

Rather worryingly, our old SQL Server database is on an equally old server whose free file system space is uncomfortably low.  The system is on a 60GB disk and disk space has dropped to a few GB.  Whilst there are various solutions involving upgrading the disk, OS, SQL version and so on, they're not ones that are going to happen quickly.  I need to free up some space relatively quickly before the system grinds to a halt.

The log files are often the same size as or larger than the database files, and over the last 24 hours I've dabbled dangerously with shrink operations, many of which seem to have had the opposite effect to the one intended, with the log files doubling in size. Time to stop dabbling and seek some wisdom!

At present the largest of the databases is 7GB, with a logfile now limited to 4GB.  (Over the course of the last 24 hours, my dabbling caused the logfile to reach nearly 20GB, but I have now managed to get it down and have set the 4GB cap while I work out what to do.  Without this, the log file just seems to keep growing and I have no idea why.  I tried to see what was in the logfile using sys.fn_dblog - the first thing that happened was that the logfile size shot up and I had to abort!)
The disk now has 12GB free space.

I tried a backup of this database this morning but it failed with the exception message: "The transaction log for database 'mydatabase' is full."  The backup file created reached 9.5GB leaving 3GB space on the disk.  That's pretty much where I am now.

We've got a good leased line connection to this server, so can lift relatively large files off it if needs be.
The current recovery model is set to simple.

I could use any help to identify why the logfiles just want to keep growing, to pare them down to a reasonable minimum, and to do what I can to ensure the database isn't suffering as a result of any inadvertent disk/page/index fragmentation I may have caused!
Zberteoc:

Check what the recovery model of your database is. If it is Full and you don't do transaction log backups, that is the reason for the growth. Right-click on the database > Properties > Options > in the right panel you will find the Recovery model info. If you don't need the Full recovery model (which I don't recommend giving up if it is used live in a production environment), switch it to Simple. That will automatically take care of your log growth in the future; however, you will have to shrink it once in order to recover the space. If you keep it Full then you have to make sure you schedule regular transaction log backups, at least every hour - the more often the better. This will keep the log size under control; however, it is not guaranteed never to grow. If you run large transactions then the log file has to grow, and when it grows to a certain size it doesn't shrink back by itself. But that is unlikely to happen with a scheduled log backup.
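If you prefer T-SQL to the UI, a quick way to check is this (substitute your own database name for the 'mydatabase' placeholder):

SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'mydatabase';  -- 'mydatabase' is a placeholder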

To get to that point try these steps (a sketch of the commands follows the list):

1. Try to back up the transaction log only.
2. If 1 is successful, you can try to shrink the log file.
3. Make sure your database is in Simple recovery mode, if you want Simple; make sure you have a daily full backup scheduled.
4. For Full recovery mode you need to schedule the transaction log backup every 15 min. You also need to have a full backup scheduled as well. Make it daily.
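As a rough sketch of steps 1 and 2 in T-SQL (the database name, logical log file name, and backup path here are examples only; use your own):

-- 1. Back up the transaction log (only possible in the Full recovery model)
BACKUP LOG [mydatabase] TO DISK = 'D:\Backups\mydatabase_log.trn';

-- 2. If that succeeds, shrink the log file (target size in MB)
USE [mydatabase];
DBCC SHRINKFILE (mydatabase_log, 512);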

For larger databases in the Full recovery model the backup schedule should look like this (example commands below):

1. Full backup once a week, usually during weekend nights. Because the full backup takes the most time and can be resource-consuming, it is better to do it only once a week. I am not sure in your case, as the databases don't seem to be too big, but considering that the machine is old it will take some time.
2. Differential backup daily during the night, except for the night when you do the full backup.
3. Transaction log backups every 15 min (whether more often or not is up to you).
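For illustration, the scheduled jobs would run commands along these lines (names and paths are examples only; note WITH INIT overwrites the previous backup file):

-- 1. Weekly full backup
BACKUP DATABASE [mydatabase] TO DISK = 'D:\Backups\mydatabase_full.bak' WITH INIT;

-- 2. Nightly differential backup
BACKUP DATABASE [mydatabase] TO DISK = 'D:\Backups\mydatabase_diff.bak' WITH DIFFERENTIAL, INIT;

-- 3. Transaction log backup every 15 min
BACKUP LOG [mydatabase] TO DISK = 'D:\Backups\mydatabase_log.trn' WITH INIT;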
Craig Beamson (Asker):

Thanks for the suggestions.

In terms of following through on your advice, I am struggling and going off on a few tangents.  Free disk space appears to hamper my attempts to do anything related to backups.  (Note there is no external backup drive, so all backups go to the file system on the server.)  I decided to reboot the server late yesterday evening after reading that servers that rarely get a shutdown can accrue SQL error messages that contribute to the log files.  After rebooting, the various statistics are:
- database is in Simple Recovery mode
- disk free space is 14.7GB
- database mdf disk size is 6.1GB (6 GB allocated, 2GB free)
- database log ldf disk size is 4 GB (restricted to 4GB by autogrowth settings, 4GB allocated, 2.5GB free)

If I attempt to back up either the database or the individual files, the first problem is that I cannot back up the transaction logfile, as suggested, while the database is in Simple recovery mode.

I have switched the recovery model to Full and tried to back up the log again.  Now I get the error "BACKUP LOG cannot be performed because there is no current database backup."

So!
I try doing a database backup.  Ten minutes pass, disk space drops from 14GB to about 7GB, and then the backup errors out with "BACKUP DATABASE is terminating abnormally."
"The transaction log for database '<mydatabase>' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases"
A 7GB bak file appears in the backup folder.

So in terms of moving forwards, I'm really not sure which direction to move in.  I'd like to get to the stage where I have a viable backup on disk and somehow stream it off the server.  I could extend the limit on the logfile maximum size but whatever I extend it to, it seems to want to fill and the backups seem to want more space than the database size.  The actual database is only 6GB of which 2GB is free space.  With 14GB free space, I seem unable to take a backup of it!

Puzzled.
You can do the backup on a network location but the SQL server account has to be able to access that.

If the database is in simple recovery mode you can't do log backups; I never advised that. You can leave it in simple recovery, but in that case you should not have problems with your log size unless there are big transactions, which I doubt. However, in simple recovery you would have to make full backups daily or even more often. Having it in Full is better, but you need to get backup space.
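Also, the backup error you got points at the log_reuse_wait_desc column; you can check it directly to see what is preventing log space from being reused (substitute your database name):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'mydatabase';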

One thing you should change, though, is the growth size for the log file. 4GB is definitely huge when your database is only 6GB. Imagine that the log file just needs 10 extra MB for an operation: it will allocate 4GB for it and stay like that afterwards. Change it to 512MB.
You can also check whether you have free space available in the database itself and, if it is significant, you can try a shrink database task by right-clicking on the database > Tasks > Shrink > Database (the T-SQL equivalent is below). In 2005 the UI may be a little different, but similar. Shrink the log file as well, and then change the growth rate for both the mdf and ldf files to 512MB.
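The T-SQL equivalent of those shrink tasks is roughly this (the logical file names are examples; check yours under database > Properties > Files):

USE [mydatabase];
DBCC SHRINKDATABASE (mydatabase);       -- shrink the whole database
DBCC SHRINKFILE (mydatabase_log, 512);  -- or just the log file, target size in MB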
One step closer!  The initial size of the log file was set high and the growth rate was set low (so whenever I shrank the log file, it grew back almost immediately).  I've now reset its initial size and growth to multiples of 512MB with a 6GB cap, so it won't keep growing by fractional amounts.

However, despite not doing much with it for a few days, the transaction log is back at 6GB with only 500MB free.  The database file is 6GB but has 3.4GB free.  I'm still unable to make a successful backup, because when I try, the transaction log tries to expand; even if I shrink it to zero first, it still gets back to 6GB before the backup fails.  If I raise the upper limit of the transaction log, it fills that before a backup completes.

I just can't understand why a database of around 2.5GB of data needs such a huge transaction log which is so keen to grow.  Shrinking the transaction log in simple recovery mode just seems to be a temporary measure and the file just grows back.

I'd be really keen to understand what is going into the transaction log.  Maybe identify the SPs or other database actions that are having this effect.  Is there any easy way of doing this (that doesn't involve growing the transaction log!)?
You will have to post some images here so we can see exactly what you have and what the database file settings are. Create this view:
CREATE VIEW [dbo].[vwGetDatabasesInfo]
as
/*
	select * from dbo.vwGetDatabasesInfo
--*/
SELECT 
     SERVERPROPERTY ( 'MachineName' ) as HostName,
     SERVERPROPERTY ( 'servername' ) as ServerName,
     SERVERPROPERTY ( 'instancename' ) as InstanceName,
     SERVERPROPERTY ( 'edition' ) as ServerEdition,
     SERVERPROPERTY ( 'ProductVersion' ) as ServerVersion,
     SERVERPROPERTY ( 'collation' ) as ServerCollation,
	cast(db.Name as varchar(50)) AS DatabaseName,
	db.create_date as DatabaseCreateDate,
	db.collation_name as DatabaseCollation,
	db.[compatibility_level] as DatabaseCompatibility,
	cast(DATABASEPROPERTYEX(db.Name, 'Status') as varchar(10)) AS DBStatus,
	cast(DATABASEPROPERTYEX(db.Name, 'Recovery') as varchar(10)) AS Recovery,
	cast(DATABASEPROPERTYEX(db.Name, 'Updateability') as varchar(15)) AS Updateability,
	cast(DATABASEPROPERTYEX(db.Name, 'UserAccess') as varchar(15)) AS UserAccess,
	cast(round(RowSizeMB,2) as decimal(10,2)) as RowSizeMB,
	cast(round(LogSizeMB,2) as decimal(10,2)) as LogSizeMB,
	cast(round(StreamSizeMB,2) as decimal(10,2)) as StreamSizeMB,
	cast(round(TextIndexSizeMB,2) as decimal(10,2)) as TextIndexSizeMB,
	cast(round(TotalSizeMB,2) as decimal(10,2)) as TotalSizeMB,
	fl.[file_id],
	fl.name as file_logical_name,
	fl.physical_name as physical_file_name,
	fl.type_desc as file_type,
	cast(round(fl.size*8.00/1024,2) as decimal(10,2)) as physical_file_sizeMB,
	fl.state as file_state,
	cast(fl.max_size as bigint)*8/1024 as file_max_sizeMB,
	fl.growth*8/1024 as file_growth,
	left(case fl.is_percent_growth when 1 then '%' else 'MB' end,2) as file_growth_type
	,num_of_reads
	,num_of_bytes_read
	,io_stall_read_ms
	,num_of_writes
	,num_of_bytes_written
	,io_stall_write_ms
	,io_stall
from 
	master.sys.databases db 
	inner join
	( -- get the db sizes
		SELECT
			db.database_id,
			(CAST(mfrows.RowSize AS FLOAT)*8)/1024 RowSizeMB,
			(CAST(mflog.LogSize AS FLOAT)*8)/1024 LogSizeMB,
			(CAST(mfstream.StreamSize AS FLOAT)*8)/1024 StreamSizeMB,
			(CAST(mftext.TextIndexSize AS FLOAT)*8)/1024 TextIndexSizeMB,

			(CAST(mfrows.RowSize AS FLOAT)*8)/1024+
			(CAST(mflog.LogSize AS FLOAT)*8)/1024+
			isnull((CAST(mfstream.StreamSize AS FLOAT)*8)/1024,0)+
			isnull((CAST(mftext.TextIndexSize AS FLOAT)*8)/1024,0) as TotalSizeMB
		    
		FROM master.sys.databases db
			LEFT JOIN (SELECT database_id, SUM(size) RowSize FROM master.sys.master_files WHERE type = 0 GROUP BY database_id, type) mfrows ON mfrows.database_id = db.database_id
			LEFT JOIN (SELECT database_id, SUM(size) LogSize FROM master.sys.master_files WHERE type = 1 GROUP BY database_id, type) mflog ON mflog.database_id = db.database_id
			LEFT JOIN (SELECT database_id, SUM(size) StreamSize FROM master.sys.master_files WHERE type = 2 GROUP BY database_id, type) mfstream ON mfstream.database_id = db.database_id
			LEFT JOIN (SELECT database_id, SUM(size) TextIndexSize FROM master.sys.master_files WHERE type = 4 GROUP BY database_id, type) mftext ON mftext.database_id = db.database_id
	) dbsize
		on dbsize.database_id=db.database_id
	inner join master.sys.master_files fl
		on fl.database_id=db.database_id
	inner join sys.dm_io_virtual_file_stats(null, null) AS divfs
		on divfs.database_id=db.database_id
		and divfs.[file_id]=fl.[file_id]
GO


Then execute this and post the result here:
SELECT 
	  [DatabaseName]
      ,[DatabaseCreateDate]
      ,[DBStatus]
      ,[Recovery]
      ,[Updateability]
      ,[UserAccess]
      ,[RowSizeMB]
      ,[LogSizeMB]
      ,[StreamSizeMB]
      ,[TextIndexSizeMB]
      ,[TotalSizeMB]
      ,[file_logical_name]
      ,[physical_file_name]
      ,[file_type]
      ,[physical_file_sizeMB]
      ,[file_max_sizeMB]
      ,[file_growth]
      ,[file_growth_type]
  FROM [zb_dba_maint].[dbo].[vwGetDatabasesInfo]
  where [DatabaseName]='your_db'


Here goes:

DatabaseName: rcs
DatabaseCreateDate: 2014-06-16 20:58:38.727
DBStatus: ONLINE    Recovery: SIMPLE    Updateability: READ_WRITE    UserAccess: MULTI_USER
RowSizeMB: 6002.25    LogSizeMB: 6656.00    StreamSizeMB: NULL    TextIndexSizeMB: NULL    TotalSizeMB: 12658.25

file_logical_name: rcs
physical_file_name: C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\rcs.mdf
file_type: ROWS    physical_file_sizeMB: 6002.25    file_max_sizeMB: 0    file_growth: 512    file_growth_type: MB

file_logical_name: rcs_log
physical_file_name: C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\rcs_log.ldf
file_type: LOG    physical_file_sizeMB: 6656.00    file_max_sizeMB: 8192    file_growth: 512    file_growth_type: MB


You will have to shrink the log file. The fact that you changed the growth setting has nothing to do with the current log file size.

1. Shrink the database - I already gave instructions
2. Shrink the log file:

DBCC SHRINKFILE (rcs_log, 1);


I have done that.
DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
35    2       327680       128          327680     128



After 2 minutes, I checked the file size: 3GB with about 50% free.
Literally 30 seconds later (now) it is up to 4GB with 50% free.
(Just checked again - is up to 4.5GB)
Check in the database > Properties > Files what the Initial Size for the log file is.
Now I'm confused.  In my earlier post, I had managed to set the initial size back to 512MB.  I shrank the file to zero and quickly reset the initial size (in the Properties > Files dialogue) to 512MB.  I could close SQL Server Management Studio, go back into it, and see the 512MB value persisted as the initial value for the log file.

I've just checked now and it's back at 6144MB again.  Is there some sense in which the initial size for a database file does not "stick"?  Put another way: is there a correct way to set the initial size for an existing database log file so that it holds?
My question was what the numbers are NOW for the initial size in the properties. By the way, you CANNOT set the size to 0 if the file is not empty, which it can't be! The view I gave you doesn't show the initial size, only the current size and the growth. That is why I asked about the initial size.
Sorry if my reply was confusing.  It is now 6144 MB for initial size in properties.
That has to change. Apparently it doesn't change from the UI. You need to run this:
USE master
GO

CHECKPOINT;

-- shrink the log file to 512MB
DBCC SHRINKFILE (rcs_log,512);

-- define the initial size to 512MB as well
ALTER DATABASE [rcs]
MODIFY FILE (NAME=rcs_log,SIZE=512MB,MAXSIZE=UNLIMITED,FILEGROWTH=512MB);


Another question is whether the file really shrunk or not. If the log file is not empty at the end of its virtual log file sequence you will never be able to really shrink it. It will stay the same size regardless of the DBCC SHRINKFILE command. In that case you have to move data inside the log file.
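One way to check where the active portion of the log sits is DBCC LOGINFO, which lists the virtual log files; rows with Status = 2 are still in use, and if they sit at the end of the file the shrink cannot release the space:

USE [rcs];
DBCC LOGINFO;  -- Status = 2 marks active VLFs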

I have a script for moving that data, but first you have to confirm one thing: does the

DBCC SHRINKFILE (rcs_log, 512);

command have ANY effect on the actual log file size?
Yes, performing this does have a (temporary) effect

Before performing the SHRINKFILE action: Filesize (as displayed in Explorer) 6,291,496 KB
Immediately after: 1,507,328 KB.

Within seconds it rises to 2GB, then within minutes to 2.5GB, etc.
Have you set up any replication or some other process that runs in the background?
(as displayed in Explorer)
What explorer is that?
No form of replication has been set up.

In terms of processes in the background, I'm not quite sure what you mean.  There are scheduled tasks within SQL Server and within the Windows scheduler that periodically perform BULK INSERT commands and perform updates, inserts, or deletions to table data.  The largest table in this database is an IIS logfile table which has new rows added fairly frequently.
The display of file size was as seen in Windows Explorer (whilst refreshing the display regularly).
There are scheduled tasks within SQL server and within the Windows scheduler that periodically perform BULK INSERT ...
The largest table in this database is an IIS logfile table which has new rows added fairly frequently.
So isn't this the reason for the log file growth?
I don't know.  How do I establish that?  The total size of the table is between 1 and 2GB, growing at a rate of 100-200MB per month.  It doesn't feel logical that this would explain why the log would grow by several GB within only hours of a SHRINKFILE operation, or whenever I attempt a backup.  I still suspect your comment about the Initial Size setting has more bearing on why it's up to 6GB so quickly.  Why it shoots up so much further during an attempted backup, I do not know.
ASKER CERTIFIED SOLUTION
Zberteoc

(This solution is only available to Experts Exchange members.)
Thanks for this suggestion.  I have done this and run the SP every few seconds for the last 5 minutes.

Maybe every 5 or so seconds a row in a "site visitors" table will have its "pages_visited" value read and incrementally updated, and every few minutes a query on our cross-reference database will do a quick select statement.  However, there's nothing obviously sizable going on.

I retried setting "initial size" on the logfile to 512MB, but again it completed successfully, then rose to 1,024MB, then to 1,536MB, then 2,048MB, and now the initial size displays as 2,560MB - correction: now 3,072MB.

This post suggests the name "initial size" is misleading and it actually means "current size".  That being the case, I'm still not a lot clearer on why a logfile that was reduced to zero in the last hour has already grown past 3GB while not a lot seems to be going on.

The whoisactive SP confirms what I expected in another database on the server, where some large tables are updated every hour - but despite a lot more activity taking place, that database and its logfiles remain at a healthier 1GB each.

I remain, as ever, puzzled!
SOLUTION

(This solution is only available to Experts Exchange members.)
Sorry for the delayed response but thanks for the code.

Unfortunately, trying it, I just get the following errors:
Msg 208, Level 16, State 1, Line 2
Invalid object name 'zb_dba_maint.dbo.vwGetDatabasesInfo'.
Msg 8985, Level 16, State 1, Line 3
Could not locate file 'rcs_log' for database 'master' in sys.database_files. The file either does not exist, or was dropped. 
Msg 208, Level 16, State 1, Line 2
Invalid object name 'zb_dba_maint.dbo.vwGetDatabasesInfo'.


Sorry, please replace the "zb_dba_maint" database name with the database name in your case; that is where you created the view. Also, in regards to the "rcs_log" file name, I just used what you posted at some point. If that is not the real name you will have to change it to the real one.
Nearly!

Still getting:
Could not locate file 'rcs_log' for database 'master' in sys.database_files. The file either does not exist, or was dropped.


However, in the results window, I have 4 sets of results with the value for file_logical_name as "rcs_log".
This is also the logical name of the log file as it appears in database > Properties > Files.  Am I missing something?
You have to run that script in your database, not in master.
Aha - just seen the "use master" command at the top of the script.
Changed that and it now works.

I forgot to change the output to text, so it's tricky to copy/paste the results.
If they're needed, I'll run it again.
The summary is that the log file seems to have:
started at 7,680MB,
reduced to 512MB,
after 1 min: the same,
after 2 min: grown to 1,024MB.

It broadly looked like it had settled there for about 15-20 minutes.
I've just checked it again though, and it has now risen to 1,536MB.
Waited about 10-15 minutes before checking again - now at 2,048MB.

Is there any merit in running the script again with wider pause times?
No. Clearly there is something that grows that log in the background, but I can't pinpoint what exactly; by that I mean it is impossible to tell from a distance, without seeing what is going on on the server.
Can you confirm that the database recovery model is SIMPLE?
Yes the recovery model is simple.  However...

I think I may have found the problem!

I found the following code for inspecting items in the log file:
SELECT TOP 1000 * FROM fn_dblog(null, null)


Whilst it doesn't tell you everything about the transaction, it does appear to identify the tables affected.  In the first 1,000 rows of log records, there's a very high number of records for one table in my database associated with website sessions.
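A rough way to see which tables dominate is to aggregate the log records by allocation unit (fn_dblog is undocumented, so treat this as a diagnostic sketch only):

SELECT AllocUnitName, COUNT(*) AS LogRecords
FROM fn_dblog(null, null)
GROUP BY AllocUnitName
ORDER BY COUNT(*) DESC;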

I've done a bit of investigating into which SPs affect this table and discovered that one which should increment the number of visits for the current session is missing a very important WHERE clause.  So every time that SP fires, it increments a count on ALL of the records in that table, rather than on the relevant row!

As there are half a million records in that table, and they are all being updated every time a new session is started on our website, I think that explains why the transaction log is almost constantly growing.
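In essence (table and column names simplified for illustration), the SP was doing the first of these when it should have been doing the second:

-- Buggy: increments the counter on EVERY row in the table
UPDATE dbo.SiteVisitors
SET pages_visited = pages_visited + 1;

-- Fixed: increments only the current session's row
UPDATE dbo.SiteVisitors
SET pages_visited = pages_visited + 1
WHERE session_id = @session_id;  -- @session_id is the SP's parameter (illustrative)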

I've amended the code and have rerun your check - the transaction log has gone to 512MB and appears to be staying there.  I will continue monitoring it, but am hopeful that we've found the problem.
There you go! That log file doesn't increase for no reason! However, that should have been caught by the sp_whoisactive procedure. I am sure you will see that piece of code executing.
The sp_whoisactive check SHOULD have been the point where the issue was identified.  Sadly, though, I didn't make the link that the relevant SP was changing so many rows and hence filling up the transaction log as it was.  The script to shrink the log and then periodically check its size helped prove the log was growing, as believed, and pushed the problem back to checking what was going into the log.  Second time around, I spotted the issue.

Thanks for all of your help (and patience!) on this.
Glad I was of help!