Solved

SQL Server 2005 - reduce database/logfile size on system that is running out of disk space

Posted on 2016-08-10
37 Views
Last Modified: 2016-09-07
I need some basic advice on how to deal with SQL logfiles which seem excessive on a server with limited free filespace.

Rather worryingly, our old SQL Server database is on an equally old server whose free file system space is uncomfortably low.  The system is on a 60GB disk and disk space has dropped to a few GB.  Whilst there are various solutions involving upgrading the disk, OS, SQL version and so on, they're not ones that are going to happen quickly.  I need to free up some space relatively quickly before the system grinds to a halt.

The log files are often as large as, or larger than, the database files, and over the last 24 hours I've dabbled dangerously with shrink operations, many of which seem to have had the opposite effect to the one intended, with the log files doubling in size. Time to stop dabbling and seek some wisdom!

At present the largest of the databases is 7GB, with a logfile now limited to 4GB.  (Over the course of the last 24 hours, my dabbling caused the logfile to reach nearly 20GB, but I have now managed to get it down and have set the 4GB cap while I work out what to do.  Without this, the log file just seems to keep growing and I have no idea why.  I tried to see what was in the logfile using sys.fn_dblog - the first thing that happened was that the logfile size shot up and I had to abort!)
The disk now has 12GB free space.

I tried a backup of this database this morning but it failed with the exception message: "The transaction log for database 'mydatabase' is full."  The backup file created reached 9.5GB leaving 3GB space on the disk.  That's pretty much where I am now.

We've got a good leased line connection to this server, so we can lift relatively large files off it if need be.
The current recovery model is set to simple.

I could do with any help to identify why the logfiles just want to keep growing, pare them down to a reasonable minimum and do what I can to ensure the database isn't suffering as a result of any inadvertent disk/page/index fragmentation I may have caused!
Question by:Beamson
36 Comments
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41750353
Check what the recovery model of your database is. If it is Full and you don't do transaction log backups, that is the reason for the growth. Right-click on the database > Properties > Options > in the right panel you will find the Recovery model setting. If you don't need the Full recovery model, switch it to Simple (though I don't recommend that for a live production environment). That will automatically take care of your log growth in the future; however, you will have to shrink the file once in order to recover the space. If you keep it Full then you have to make sure you schedule regular transaction log backups, at least every hour - the more often the better. This will keep the log size under control, though it won't guarantee the log never grows. If you run large transactions then the log file has to grow, and once it grows to a certain size it doesn't shrink back by itself. But that is unlikely to happen with scheduled log backups.

To get to that point, try these steps:

1. Try to back up the transaction log only.
2. If step 1 is successful, you can try to shrink the log file.
3. Make sure your database is in Simple recovery mode, if you want Simple; make sure you have a daily full backup scheduled.
4. For Full recovery mode, schedule the transaction log backup every 15 min. You also need a full backup scheduled as well - make it daily.

For larger databases in the Full recovery model, the backup schedule should look like this:

1. Full backup once a week, usually during weekend nights. Because the full backup takes the most time and can be resource-consuming, it is better to do it only once a week. I am not sure in your case, as the databases don't seem to be too big, but considering that the machine is old it will take some time.
2. Differential backup daily during the night, except for the night when you do the full backup.
3. Transaction log backups every 15 min (more often if you prefer).
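As a sketch of what those scheduled jobs would run (the database name and backup path below are placeholders, not taken from the thread):

```sql
-- Weekly full backup
BACKUP DATABASE [mydatabase]
TO DISK = N'D:\Backups\mydatabase_full.bak'
WITH INIT, STATS = 10;

-- Nightly differential (skipped on the night of the full backup)
BACKUP DATABASE [mydatabase]
TO DISK = N'D:\Backups\mydatabase_diff.bak'
WITH DIFFERENTIAL, INIT, STATS = 10;

-- Log backup every 15-60 min (Full recovery model only)
BACKUP LOG [mydatabase]
TO DISK = N'D:\Backups\mydatabase_log.trn'
WITH INIT, STATS = 10;
```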
 
LVL 1

Author Comment

by:Beamson
ID: 41751967
Thanks for the suggestions.

In terms of following your advice through, I am struggling and going off on a few tangents.  Free disk space hampers my attempts to do anything related to backups.  (Note there is no external backup drive, so all backups go to the file system on the server.)  I decided to reboot the server late yesterday evening after reading that servers that rarely get a shutdown can accrue SQL error messages that contribute to the log files.  After rebooting, the various statistics are:
- database is in Simple Recovery mode
- disk free space is 14.7GB
- database mdf disk size is 6.1GB (6 GB allocated, 2GB free)
- database log ldf disk size is 4 GB (restricted to 4GB by autogrowth settings, 4GB allocated, 2.5GB free)

If I attempt to back up either the database or the individual files, the first problem is that I cannot back up the transaction logfile, as suggested, while the database is in Simple recovery mode.

I switched the recovery mode to Full and tried to back up the log again.  Now I get the error "BACKUP LOG cannot be performed because there is no current database backup."

So!
I try doing a database backup.  Ten minutes pass, disk space drops from 14GB to about 7GB, and then the backup errors out with "BACKUP DATABASE is terminating abnormally."
"The transaction log for database '<mydatabase>' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases"
A 7GB .bak file appears in the backup folder.

So in terms of moving forwards, I'm really not sure which direction to move in.  I'd like to get to the stage where I have a viable backup on disk and can somehow stream it off the server.  I could extend the limit on the logfile maximum size, but whatever I extend it to, it seems to fill, and the backups seem to want more space than the database size.  The actual database is only 6GB, of which 2GB is free space.  With 14GB free, I seem unable to take a backup of it!

Puzzled.
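The error text itself points at a diagnostic: the log_reuse_wait_desc column in sys.databases reports why log space cannot be reused (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, CHECKPOINT). A quick check - the database name below is a placeholder - would be:

```sql
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases
WHERE name = N'mydatabase';
```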
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41752569
You can do the backup to a network location, but the SQL Server service account has to be able to access it.

If the database is in simple recovery mode you can't do log backups - I never advised that. You can leave it in simple recovery, but in that case you should not have problems with your log size unless there are big transactions, which I doubt. However, in simple recovery you would have to make full backups daily or even more often. Having it in Full is better, but you need to find backup space.

One thing you should change, though, is the growth increment for the log file. 4GB is definitely huge when your database is only 6GB. Imagine that the log file just needs 10 extra MB for an operation: it will allocate a full 4GB for it and stay that size afterwards. Change it to 512MB.
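For reference, a sketch of making that change in T-SQL rather than the Properties dialogue (the database and logical log file names here are assumptions):

```sql
ALTER DATABASE [mydatabase]
MODIFY FILE (NAME = N'mydatabase_log', FILEGROWTH = 512MB);
```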
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41752582
You can also check whether you have free space available in the database itself; if it is significant you can try a shrink database task by right-clicking on the database > Tasks > Shrink > Database. In 2005 the UI may be a little different, but similar. Shrink the log file as well, and then change the growth increment for both the mdf and ldf files to 512MB.
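Before shrinking, sp_spaceused gives a quick view of how much space is actually reclaimable (run it in the database in question; the name is a placeholder):

```sql
USE [mydatabase];
EXEC sp_spaceused;  -- the 'unallocated space' column is what a data-file shrink can release
```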
 
LVL 1

Author Comment

by:Beamson
ID: 41756387
One step closer!  The initial size of the log file was set high and the growth rate was set low (so whenever I shrank the log file, it grew back almost immediately).  I've now reset its initial size and growth to multiples of 512MB with a 6GB cap, so it won't keep growing by fractional amounts.

However, despite not doing much with it for a few days, the transaction log is back at 6GB with only 500MB free.  The database file is 6GB but has 3.4GB free.  I'm still unable to make a successful backup because when I try, the transaction log tries to expand, and even if I shrink it to zero first, it still gets back to 6GB before the backup fails.  If I raise the upper limit of the transaction log, it fills that before a backup completes.

I just can't understand why a database of around 2.5GB of data needs such a huge transaction log which is so keen to grow.  Shrinking the transaction log in simple recovery mode just seems to be a temporary measure and the file just grows back.

I'd be really keen to understand what is going into the transaction log.  Maybe identify the SPs or other database actions that are having this effect.  Is there any easy way of doing this (that doesn't involve growing the transaction log!)?
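One way to see which tables dominate the log, without the open-ended dump aborted earlier, is to aggregate fn_dblog by allocation unit. fn_dblog is undocumented, so treat this as a rough sketch and run it during a quiet period:

```sql
SELECT TOP 20
       AllocUnitName,
       COUNT(*)                 AS log_records,
       SUM([Log Record Length]) AS total_bytes
FROM fn_dblog(NULL, NULL)
GROUP BY AllocUnitName
ORDER BY total_bytes DESC;
```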
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756419
You will have to post some images here so we can see exactly what you have - what the database file settings are. Create this view:
CREATE VIEW [dbo].[vwGetDatabasesInfo]
as
/*
	select * from dbo.vwGetDatabasesInfo
--*/
SELECT 
     SERVERPROPERTY ( 'MachineName' ) as HostName,
     SERVERPROPERTY ( 'servername' ) as ServerName,
     SERVERPROPERTY ( 'instancename' ) as InstanceName,
     SERVERPROPERTY ( 'edition' ) as ServerEdition,
     SERVERPROPERTY ( 'ProductVersion' ) as ServerVersion,
     SERVERPROPERTY ( 'collation' ) as ServerCollation,
	cast(db.Name as varchar(50)) AS DatabaseName,
	db.create_date as DatabaseCreateDate,
	db.collation_name as DatabaseCollation,
	db.[compatibility_level] as DatabaseCompatibility,
	cast(DATABASEPROPERTYEX(db.Name, 'Status') as varchar(10)) AS DBStatus,
	cast(DATABASEPROPERTYEX(db.Name, 'Recovery') as varchar(10)) AS Recovery,
	cast(DATABASEPROPERTYEX(db.Name, 'Updateability') as varchar(15)) AS Updateability,
	cast(DATABASEPROPERTYEX(db.Name, 'UserAccess') as varchar(15)) AS UserAccess,
	cast(round(RowSizeMB,2) as decimal(10,2)) as RowSizeMB,
	cast(round(LogSizeMB,2) as decimal(10,2)) as LogSizeMB,
	cast(round(StreamSizeMB,2) as decimal(10,2)) as StreamSizeMB,
	cast(round(TextIndexSizeMB,2) as decimal(10,2)) as TextIndexSizeMB,
	cast(round(TotalSizeMB,2) as decimal(10,2)) as TotalSizeMB,
	fl.[file_id],
	fl.name as file_logical_name,
	fl.physical_name as physical_file_name,
	fl.type_desc as file_type,
	cast(round(fl.size*8.00/1024,2) as decimal(10,2)) as physical_file_sizeMB,
	fl.state as file_state,
	cast(fl.max_size as bigint)*8/1024 as file_max_sizeMB,
	fl.growth*8/1024 as file_growth,
	left(case fl.is_percent_growth when 1 then '%' else 'MB' end,2) as file_growth_type
	,num_of_reads
	,num_of_bytes_read
	,io_stall_read_ms
	,num_of_writes
	,num_of_bytes_written
	,io_stall_write_ms
	,io_stall
from 
	master.sys.databases db 
	inner join
	( -- get the db sizes
		SELECT
			db.database_id,
			(CAST(mfrows.RowSize AS FLOAT)*8)/1024 RowSizeMB,
			(CAST(mflog.LogSize AS FLOAT)*8)/1024 LogSizeMB,
			(CAST(mfstream.StreamSize AS FLOAT)*8)/1024 StreamSizeMB,
			(CAST(mftext.TextIndexSize AS FLOAT)*8)/1024 TextIndexSizeMB,

			(CAST(mfrows.RowSize AS FLOAT)*8)/1024+
			(CAST(mflog.LogSize AS FLOAT)*8)/1024+
			isnull((CAST(mfstream.StreamSize AS FLOAT)*8)/1024,0)+
			isnull((CAST(mftext.TextIndexSize AS FLOAT)*8)/1024,0) as TotalSizeMB
		    
		FROM master.sys.databases db
			LEFT JOIN (SELECT database_id, SUM(size) RowSize FROM master.sys.master_files WHERE type = 0 GROUP BY database_id, type) mfrows ON mfrows.database_id = db.database_id
			LEFT JOIN (SELECT database_id, SUM(size) LogSize FROM master.sys.master_files WHERE type = 1 GROUP BY database_id, type) mflog ON mflog.database_id = db.database_id
			LEFT JOIN (SELECT database_id, SUM(size) StreamSize FROM master.sys.master_files WHERE type = 2 GROUP BY database_id, type) mfstream ON mfstream.database_id = db.database_id
			LEFT JOIN (SELECT database_id, SUM(size) TextIndexSize FROM master.sys.master_files WHERE type = 4 GROUP BY database_id, type) mftext ON mftext.database_id = db.database_id
	) dbsize
		on dbsize.database_id=db.database_id
	inner join master.sys.master_files fl
		on fl.database_id=db.database_id
	inner join sys.dm_io_virtual_file_stats(null, null) AS divfs
		on divfs.database_id=db.database_id
		and divfs.[file_id]=fl.[file_id]



GO


Then execute this and post the result here:
SELECT 
	  [DatabaseName]
      ,[DatabaseCreateDate]
      ,[DBStatus]
      ,[Recovery]
      ,[Updateability]
      ,[UserAccess]
      ,[RowSizeMB]
      ,[LogSizeMB]
      ,[StreamSizeMB]
      ,[TextIndexSizeMB]
      ,[TotalSizeMB]
      ,[file_logical_name]
      ,[physical_file_name]
      ,[file_type]
      ,[physical_file_sizeMB]
      ,[file_max_sizeMB]
      ,[file_growth]
      ,[file_growth_type]
  FROM [zb_dba_maint].[dbo].[vwGetDatabasesInfo]
  where [DatabaseName]='your_db'

 
LVL 1

Author Comment

by:Beamson
ID: 41756533
Here goes:

DatabaseName	DatabaseCreateDate	DBStatus	Recovery	Updateability	UserAccess	RowSizeMB	LogSizeMB	StreamSizeMB	TextIndexSizeMB	TotalSizeMB	file_logical_name	physical_file_name	file_type	physical_file_sizeMB	file_max_sizeMB	file_growth	file_growth_type
rcs	2014-06-16 20:58:38.727	ONLINE	SIMPLE	READ_WRITE	MULTI_USER	6002.25	6656.00	NULL	NULL	12658.25	rcs	C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\rcs.mdf	ROWS	6002.25	0	512	MB
rcs	2014-06-16 20:58:38.727	ONLINE	SIMPLE	READ_WRITE	MULTI_USER	6002.25	6656.00	NULL	NULL	12658.25	rcs_log	C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\rcs_log.ldf	LOG	6656.00	8192	512	MB

 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756595
You will have to shrink the log file. The fact that you changed the growth has nothing to do with the log file size.

1. Shrink the database - I already gave instructions
2. Shrink the log file:

dbcc shrinkfile(rcs_log,1)

 
LVL 1

Author Comment

by:Beamson
ID: 41756644
I have done that.
DbId	FileId	CurrentSize	MinimumSize	UsedPages	EstimatedPages
35	2	327680	128	327680	128



After 2 minutes, I checked the file size: 3GB with about 50% free.
Literally 30 seconds later (now) it is up to 4GB with 50% free.
(Just checked again - is up to 4.5GB)
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756686
Check in the database > Properties > Files what the Initial Size is for the log file.
 
LVL 1

Author Comment

by:Beamson
ID: 41756741
Now I'm confused.  In my earlier post, I had managed to set the initial size back to 512MB.  I shrank the file to zero and quickly reset the initial size (in the Properties > Files dialogue) to 512MB.  I could close down SQL Server Management Studio, go back into it, and see the 512MB value persisted as the initial value for the log file.

I've just checked now and it's back at 6144MB again.  Is there some sense in which the initial size for a database file does not "stick"?  Put another way: is there a correct way to set the initial size for an existing database log file so that it holds?
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756753
My question was what the number is NOW for initial size in the properties. By the way, you CANNOT set the size to 0 if the file is not empty - which it can't be! The view I gave you doesn't show the initial size, only the current size and the growth. That is why I asked about the initial size.
 
LVL 1

Author Comment

by:Beamson
ID: 41756787
Sorry if my reply was confusing.  It is now 6144 MB for initial size in properties.
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756796
That has to change. Apparently it doesn't change from the UI. You need to run this:
USE master
GO

CHECKPOINT;

-- shrink the log file to 512MB
DBCC SHRINKFILE (rcs_log,512);

--- define the initial size to 512MB as well
ALTER DATABASE [rcs]
MODIFY FILE (NAME=rcs_log,SIZE=512MB,MAXSIZE=UNLIMITED,FILEGROWTH=512MB);

 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756801
Another question is whether the file really shrank or not. If the log file is not empty at the end of its virtual log file sequence, you will never be able to really shrink it. It will stay the same size regardless of the DBCC SHRINKFILE command. In that case you have to move data inside the log file. I have a script for that, but first you have to confirm one thing:

Does the  DBCC SHRINKFILE (rcs_log,512);

command have ANY effect on the actual log file size?
 
LVL 1

Author Comment

by:Beamson
ID: 41756885
Yes, performing this does have a (temporary) effect.

Before performing the SHRINKFILE action: file size (as displayed in Explorer) 6,291,496 KB.
Immediately after: 1,507,328 KB.

Within seconds it rises to 2GB, then within minutes to 2.5GB, etc.
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756892
Do you have any replication set up, or some other process that runs in the background?
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756895
(as displayed in Explorer)
What explorer is that?
 
LVL 1

Author Comment

by:Beamson
ID: 41756910
No form of replication has been set up.

In terms of processes in the background, I'm not quite sure what you mean.  There are scheduled tasks within SQL server and within the Windows scheduler that periodically perform BULK INSERT commands and perform updates and inserts, or deletions to table data.  The largest table in this database is an IIS logfile table which has new rows added fairly frequently.
 
LVL 1

Author Comment

by:Beamson
ID: 41756915
The display of file size was as-seen in Windows Explorer (whilst refreshing the display regularly).
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41756917
There are scheduled tasks within SQL server and within the Windows scheduler that periodically perform BULK INSERT ...
The largest table in this database is an IIS logfile table which has new rows added fairly frequently.
So isn't this the reason for the log file growth?
 
LVL 1

Author Comment

by:Beamson
ID: 41757107
I don't know.  How do I establish that?  The total size of the table is between 1 and 2 GB, growing at a rate of one or two hundred MB per month.  It doesn't feel logical that this would explain why the log would grow by several GB within only hours of a shrinkfile operation, or whenever I attempt a backup.  I still suspect your comment about the Initial Size setting has more bearing on why it's up to 6GB so quickly.  Why it shoots up so much further during an attempted backup, I do not know.
 
LVL 26

Accepted Solution

by:
Zberteoc earned 500 total points
ID: 41757920
If the initial size is 4 or 6 GB then the shrink would not take effect, because the file would grow back to that minimum size, or not even shrink below it - I admit I don't know the exact behavior. However, if it DOES shrink and then grows back that quickly, and not because of the initial size, then the only explanation is that some process in the background is causing it, and it is running pretty much all the time, or at least very often. The best way to find it is to install Adam Machanic's stored procedure and then execute it to see what processes are running in the background. You can download it here:

http://sqlblog.com/files/folders/beta/entry42453.aspx

You can run that script and install it in the master database and then just execute:

EXEC sp_whoisactive

and check the results. It will show you what the active processes are and what exactly they are doing each time you execute it.
 
LVL 1

Author Comment

by:Beamson
ID: 41758056
Thanks for this suggestion.  I have done this and run the SP every few seconds for the last 5 minutes.

Maybe every 5 or so seconds, a row in a "site visitors" table will have its "pages_visited" value read and incrementally updated, and every few minutes a query on our cross-reference database will do a quick select statement.  However, there's nothing obviously sizable going on.

I retried setting "initial size" on the logfile to 512MB and again it completed successfully, then rose to 1,024MB, then to 1,536MB, then 2,048MB, and now the initial size displays as 2,560MB - correction: now 3,072MB.

This post suggests the name "initial size" is misleading and it actually means "current size".  That being the case, I'm still not a lot clearer on why a logfile that was reduced to zero in the last hour has already grown past 3GB while not a lot seems to be going on.
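Since the SSMS "Initial Size" field displays the file's current size, querying the catalog directly avoids the ambiguity. A sketch (run in the database in question):

```sql
SELECT name,
       size   * 8 / 1024 AS current_size_MB,   -- size is stored in 8KB pages
       growth * 8 / 1024 AS growth_MB,         -- valid when is_percent_growth = 0
       max_size
FROM sys.database_files;
```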

The whoisactive SP confirms what I expected in another database on the server where some large tables are updated every hour - but despite this their database and logfiles remain at a more healthy 1GB each, despite a lot more activity taking place.

I remain, as ever, puzzled!
 
LVL 26

Assisted Solution

by:Zberteoc
Zberteoc earned 500 total points
ID: 41776778
Run this script in text mode and post the results here:
use master
go

SELECT 'before' as execution,
	  [DatabaseName]
      ,[DatabaseCreateDate]
      ,[DBStatus]
      ,[Recovery]
      ,[Updateability]
      ,[UserAccess]
      ,[RowSizeMB]
      ,[LogSizeMB]
      ,[StreamSizeMB]
      ,[TextIndexSizeMB]
      ,[TotalSizeMB]
      ,[file_logical_name]
      ,[physical_file_name]
      ,[file_type]
      ,[physical_file_sizeMB]
      ,[file_max_sizeMB]
      ,[file_growth]
      ,[file_growth_type]
  FROM [zb_dba_maint].[dbo].[vwGetDatabasesInfo]
  where [DatabaseName]='rcs'
go
CHECKPOINT

DBCC SHRINKFILE (rcs_log,512);
GO

SELECT  'after' as execution,
	  [DatabaseName]
      ,[DatabaseCreateDate]
      ,[DBStatus]
      ,[Recovery]
      ,[Updateability]
      ,[UserAccess]
      ,[RowSizeMB]
      ,[LogSizeMB]
      ,[StreamSizeMB]
      ,[TextIndexSizeMB]
      ,[TotalSizeMB]
      ,[file_logical_name]
      ,[physical_file_name]
      ,[file_type]
      ,[physical_file_sizeMB]
      ,[file_max_sizeMB]
      ,[file_growth]
      ,[file_growth_type]
  FROM [zb_dba_maint].[dbo].[vwGetDatabasesInfo]
  where [DatabaseName]='rcs'

  waitfor delay '00:01'

  SELECT  'after 1 min' as execution,
	  [DatabaseName]
      ,[DatabaseCreateDate]
      ,[DBStatus]
      ,[Recovery]
      ,[Updateability]
      ,[UserAccess]
      ,[RowSizeMB]
      ,[LogSizeMB]
      ,[StreamSizeMB]
      ,[TextIndexSizeMB]
      ,[TotalSizeMB]
      ,[file_logical_name]
      ,[physical_file_name]
      ,[file_type]
      ,[physical_file_sizeMB]
      ,[file_max_sizeMB]
      ,[file_growth]
      ,[file_growth_type]
  FROM [zb_dba_maint].[dbo].[vwGetDatabasesInfo]
  where [DatabaseName]='rcs'

  waitfor delay '00:01'

  SELECT  'after 2 min' as execution,
	  [DatabaseName]
      ,[DatabaseCreateDate]
      ,[DBStatus]
      ,[Recovery]
      ,[Updateability]
      ,[UserAccess]
      ,[RowSizeMB]
      ,[LogSizeMB]
      ,[StreamSizeMB]
      ,[TextIndexSizeMB]
      ,[TotalSizeMB]
      ,[file_logical_name]
      ,[physical_file_name]
      ,[file_type]
      ,[physical_file_sizeMB]
      ,[file_max_sizeMB]
      ,[file_growth]
      ,[file_growth_type]
  FROM [zb_dba_maint].[dbo].[vwGetDatabasesInfo]
  where [DatabaseName]='rcs'


It will show the database info before log shrink, right after the shrink and then another 2 runs at 1 min interval.
 
LVL 1

Author Comment

by:Beamson
ID: 41784915
Sorry for the delayed response but thanks for the code.

Unfortunately, trying it, I just get the following errors:
Msg 208, Level 16, State 1, Line 2
Invalid object name 'zb_dba_maint.dbo.vwGetDatabasesInfo'.
Msg 8985, Level 16, State 1, Line 3
Could not locate file 'rcs_log' for database 'master' in sys.database_files. The file either does not exist, or was dropped. 
Msg 208, Level 16, State 1, Line 2
Invalid object name 'zb_dba_maint.dbo.vwGetDatabasesInfo'.

 
LVL 26

Expert Comment

by:Zberteoc
ID: 41786115
Sorry - please replace the "zb_dba_maint" database name with the database name in your case; that is where you created the view. Also, regarding the "rcs_log" file name, I just used what you posted at some point. If that is not the real name, you will have to change it to the real one.
 
LVL 1

Author Comment

by:Beamson
ID: 41786223
Nearly!

Still getting:
Could not locate file 'rcs_log' for database 'master' in sys.database_files. The file either does not exist, or was dropped.


However, in the results window, I have 4 sets of results with the value for file_logical_name as "rcs_log".
This is also the logical name of the log file as it appears in database > Properties > Files.   Am I missing something?
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41786320
You have to run that script in your database, not in master - DBCC SHRINKFILE looks the logical file name up in the current database.
 
LVL 1

Author Comment

by:Beamson
ID: 41786423
Aha - I've just seen the "use master" command at the top of the script.
Changed that and it now works.

I forgot to change the output to text, so it's tricky to copy/paste the results.
If they're needed, I'll run it again.
The summary is that the log file seems to have:
started at 7,680MB
reduced to 512MB
after 1 min: the same
after 2 min: grown to 1,024MB

It broadly looked like it had settled there for about 15-20 minutes.
I've just checked again, though, and it has now risen to 1,536MB.
Waited about 10-15 minutes before checking again - now at 2,048MB.

Is there any merit in running the script again with wider pause times?
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41786432
No. Clearly there is something in the background growing that log; I can't pinpoint what exactly. By that I mean it is impossible to tell from a distance, without seeing what is going on on the server.
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41786436
Can you confirm that the database recovery model is SIMPLE?
 
LVL 1

Author Comment

by:Beamson
ID: 41787544
Yes the recovery model is simple.  However...

I think I may have found the problem!

I found the following code for inspecting items in the log file:
select TOP 1000 * from fn_dblog(null,null)


Whilst it doesn't tell you everything about the transaction, it does appear to identify the tables affected.  In the first 1,000 rows of log records, there's a very high number of records for one table in my database associated with website sessions.

I've done a bit of investigation into which SPs affect this table and discovered that one which should increment the number of visits for the current session is missing a very important WHERE clause.  So every time that SP fires, it increments a count on ALL of the records in that table, rather than just the relevant row!

As there are half a million records in that table, and they are all being updated every time a new session is started on our website, I think that explains why the transaction log is almost constantly growing.

I've amended the code and rerun your check - the transaction log has gone to 512MB and appears to be staying there.  I will continue monitoring it, but am hopeful that we've found the problem.
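The bug pattern described above, as an illustrative sketch (table, column, and variable names are invented, not the actual code):

```sql
-- Buggy: no WHERE clause, so every execution updates (and logs)
-- all ~500,000 rows in the table
UPDATE dbo.WebSessions
SET visit_count = visit_count + 1;

-- Fixed: only the current session's row is touched
UPDATE dbo.WebSessions
SET visit_count = visit_count + 1
WHERE session_id = @current_session_id;
```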
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41787777
There you go! That log file wasn't increasing for no reason! However, that should have been caught by the sp_whoisactive procedure - I am sure you will see that piece of code executing in its output.
 
LVL 1

Author Closing Comment

by:Beamson
ID: 41787805
The sp_whoisactive check SHOULD have been the point where the issue was identified. Sadly, though, I didn't make the link that the relevant SP was making changes to so many rows and hence filling up the transaction log as it was. The script to shrink the log and then periodically check its size helped prove the log was growing, as believed, and pushed the problem back to checking what was going into the log. Second time around, I spotted the issue.

Thanks for all of your help (and patience!) on this.
 
LVL 26

Expert Comment

by:Zberteoc
ID: 41787843
Glad I was of help!
