Gerhardpet (Canada) asked:

Event ID 9002 The transaction log for database 'SCData_Default' is full

I have a server running a network monitoring tool (Level Platforms) with a SQL database. The database file (SCData_Default.mdf) is currently 801 MB, and suddenly overnight the log file (SCData_Default_log.LDF) grew to 199 GB. It basically filled up the partition the database is on. I noticed that something was wrong when the backup failed overnight.

Any ideas why? Level Platforms support is closed today so I can't call them.

Here are a few event IDs:


Event Type:      Error
Event Source:      MSSQL$SERVICECENTER
Event Category:      (2)
Event ID:      9002
Date:            11/28/2010
Time:            12:13:52 PM
User:            NT AUTHORITY\SYSTEM
Computer:      LPSERVER
Description:
The transaction log for database 'SCData_Default' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases  

Event Type:      Error
Event Source:      MSOLAP$SERVICECENTER
Event Category:      (289)
Event ID:      11
Date:            11/28/2010
Time:            12:13:28 PM
User:            N/A
Computer:      LPSERVER
Description:
An error occurred while writing a trace event to the file, \\?\E:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Log\FlightRecorderCurrent.trc.

Event Type:      Error
Event Source:      MSOLAP$SERVICECENTER
Event Category:      (289)
Event ID:      10
Date:            11/28/2010
Time:            12:13:28 PM
User:            N/A
Computer:      LPSERVER
Description:
An error occurred while closing the trace output file, \\?\E:\Program Files\Microsoft SQL Server\MSSQL.2\OLAP\Log\FlightRecorderCurrent.trc.
Qlemo (Germany):

We cannot know why the log file grew that fast, but obviously there has been a lot going on, e.g. many deletes/inserts/updates, and maybe no log backup? I assume the database is in the Full or Bulk-Logged recovery model, and you need to back up the transaction log periodically to keep it small.

Did you truncate the log file and perform a full database backup already to reclaim space?
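
In case it helps, both the recovery model and the reason the log cannot be truncated can be checked from a query window. This is only a minimal sketch; the database name is taken from the error message above:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'SCData_Default';
-- recovery_model_desc shows FULL, BULK_LOGGED or SIMPLE;
-- log_reuse_wait_desc shows what is currently preventing log reuse (e.g. LOG_BACKUP).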
Gerhardpet (Asker):

I back up the log file every night but have not truncated it. I don't have a lot of experience with SQL. How do I truncate the log file?

Like I said, the database size is 801 MB, right? If the log file is that big, does that mean the database is full? How do I know if the database is in the Full or Bulk-Logged recovery model?
ASKER CERTIFIED SOLUTION from Qlemo (Germany)

A bit of background reading (http://support.microsoft.com/kb/317375) may help, as well as the other expert comments.
It is not urgent that the tool/database be functional, so I will contact Level Platforms support for help, considering that I don't have a lot of experience working with MS SQL.

Once solved I will post the results here and award points.

Thank you for the help so far.
To understand the core of the problem, you must execute the following commands on your server:

Step 1. Check what is prohibiting the log truncation:
SELECT * FROM sys.databases
and in the output look for the log_reuse_wait_desc column and post the result.

Step 2. Check if there is a long-running transaction that is consuming the log file:
USE <DBNAME>
GO
DBCC OPENTRAN
and post the result

Step 3. Check the contents of the VLF segments for advanced analysis (not needed for now, though):
DBCC LOG ( 'DBNAME' )
and post the result.

As for an immediate solution:
1. If your recovery model is FULL / BULK LOGGED (which you can check by right-clicking the database, selecting Properties, and then clicking Options in the left pane), you can change the recovery model from FULL to SIMPLE and click OK. After this, right-click the database again, click Shrink File, select the file type Log, type the size (which is given as the minimum), and click OK. Then change the recovery model back to FULL / BULK LOGGED and check the size again (a T-SQL equivalent is sketched below, after this post).

2. You may have to run this command multiple times
USE DBNAME
GO
DBCC SHRINKFILE ( 'log_file_name', TRUNCATEONLY )
-- Replace log_file_name with the appropriate name
-- Check the size again.

3. If the recovery model is not FULL or BULK LOGGED, i.e. it is SIMPLE, then check the output of DBCC OPENTRAN. If you see a long-running transaction, you may want to kill it, assuming it can be killed. But you must post the output of DBCC OPENTRAN here before going for the KILL command.


HTH

-Manu
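
For reference, a rough T-SQL equivalent of option 1 above (a sketch only, not vendor guidance; the database name comes from the question, and 'SCData_Default_log' is an assumed logical file name, so verify it first):

USE master;
GO
ALTER DATABASE SCData_Default SET RECOVERY SIMPLE;
GO
USE SCData_Default;
GO
-- The logical log file name may differ; check it with: SELECT name, type_desc FROM sys.database_files
DBCC SHRINKFILE ('SCData_Default_log', 100);  -- target size in MB, adjust as needed
GO
USE master;
GO
ALTER DATABASE SCData_Default SET RECOVERY FULL;
GO
-- After switching back to FULL, take a full database backup so the log backup chain starts again.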
In my previous post there were 3 possible solutions; you should try 1, 2, or 3, depending upon whether the previous one worked or not.

-Manu
OK, I have changed the DB to SIMPLE and done a shrink file. For now I have left the DB at SIMPLE, which will meet the needs of the application.

I'm monitoring the situation to see if it will happen again.
 
For step 1, here are the results... I'm working on the rest.

master	NOTHING
tempdb	ACTIVE_TRANSACTION
model	LOG_BACKUP
msdb	NOTHING
ReportServer$ServiceCenter	NOTHING
ReportServer$ServiceCenterTempDB	NOTHING
SCMaster	NOTHING
MWSessionState	NOTHING
SCData_Default	CHECKPOINT


SOLUTION
Yes I was able to reclaim the space and it looks normal now.

I have now changed the database back to FULL recovery mode and will monitor the database to see if it continues to function normally.

Thank you for the help!
Now my log file has grown again to 625 MB. The database size is 858 MB. I did a backup of the log file and now I have 95% available free space (before the backup I had 3%). The backup of the log file is 598 MB.

Should I shrink it again? How often should I back up the log file to reclaim available space? The documentation says once a week. Is that good?
If it is FULL, then you should be taking a log backup every 30 minutes / 1 hour, depending upon the criticality of the database and other factors.

Once you have taken the log backup, the size of the log file will be reduced automatically; that's how it works. However, for SIMPLE recovery the concept of T-log backup does not apply.

What is the recovery model of your DB?

-Manu
To correct what manukapoor wrote:
You should back up your transaction log with a frequency you can live with. That is usually a compromise between backup size, the maximum acceptable transaction log file size, and the period for which changes need to be retained if the DB crashes.
A transaction log backup will NOT shrink the log file, and you should NOT do that yourself too often; see my comment in http:#a34226040
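
To illustrate the point, a scheduled log backup under the FULL recovery model would look something like the sketch below (the backup path and file name are just examples, not taken from this thread):

BACKUP LOG SCData_Default TO DISK = 'E:\Backups\SCData_Default_log.trn';
-- This lets SQL Server reuse space inside the log (internal truncation),
-- but it does NOT shrink the physical .LDF file on disk.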
Qlemo:
1. The period of 30 minutes / 1 hour was proposed as a commonly used standard in the industry; of course there are plenty of other factors like RPO and RTO. In layman's terms it gives a fair idea.
2. SQL Server does cause the truncation of records from the log file once they have been backed up successfully. This is as good as saying that the log file size will be reduced as a result of this operation.

Though I do not recommend SHRINKFILE WITH TRUNCATEONLY as a practice, it can be used (and sometimes must be) if nothing else is workable, for example a 400 GB log file and no more space available to back up the log.

Any thoughts ?

-Manu
Update:
After consulting with the vendor I have decided to use simple mode for DR. We use StorageCraft for backups, and we can set it to back up every hour or more often as incrementals. I have done test SQL restores with StorageCraft and it works fine. This will meet our DR needs.

Since we have more than one SQL server, it is not efficient to use and maintain backup schedules within each server using SQL Server Management Studio.

Thank you for all the help and explanation. Points will be awarded.

I'm attaching a file with the vendor's explanation of the Recovery Model:
An-Explanation-of-Recovery-Model.txt
I have picked the most relevant post for points, but most of the information in this question is useful.