Solved

Error log exploded because transaction log became FULL

Posted on 2011-09-06
303 Views
Last Modified: 2012-05-12
The recovery model was FULL and the transaction log file filled up, so SQL Server started triggering this error:

LogDate      ProcessInfo      Text
2011-08-30 04:50:04.820      spid91      Error: 9002, Severity: 17, State: 2.
2011-08-30 04:50:04.820      spid91      The transaction log for database 'Test905' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

several times a second. The transaction log issue has since been fixed, but the error log file has grown to 30 GB.

I ran xp_readerrorlog 1; it is at 34 million records and has still only worked through 20 hours of such logging.

Is it possible to avoid this kind of logging (just warn once and leave it)? What is the best practice for handling this kind of error, which can blow up the error log file?
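
For reference, I believe the check that the error message itself points to is along these lines (database name taken from the log above):

-- Show why log space cannot be reused for this database
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'Test905';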
Question by:25112
19 Comments
 
LVL 2

Accepted Solution

by:John_Bon
John_Bon earned 100 total points
ID: 36493020
You can roll over the SQL Server error log more often by scheduling the following code to execute from SQL Server Agent:

USE [master]
GO
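-- Close the current error log and start a new one (undocumented equivalent of sp_cycle_errorlog)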
DBCC ERRORLOG
GO

The following links explain this in more detail:
http://archive.msdn.microsoft.com/SQLExamples/Wiki/View.aspx?title=ErrorLogMon&referringTitle=AutomatedDBA
http://support.microsoft.com/kb/196909
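
If you prefer a documented command, the system stored procedure sp_cycle_errorlog does the same thing:

USE [master]
GO
-- Documented system procedure that cycles the error log in the same way
EXEC sp_cycle_errorlog;
GO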
 
LVL 12

Assisted Solution

by:sachitjain
sachitjain earned 200 total points
ID: 36493906
Do you not have a backup strategy in place to back up your transaction logs regularly? If your DB undergoes a heavy volume of transactions, you should have a scheduled job backing up the transaction log every 5, 10, or 15 minutes, depending on your requirements.
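
A minimal sketch of such a backup step, assuming a local disk target (the database name and path are placeholders to adjust for your environment):

-- Back up the transaction log so the inactive portion can be reused
BACKUP LOG [Test905]
TO DISK = N'D:\Backups\Test905_log.trn';
GO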
 
LVL 5

Author Comment

by:25112
ID: 36494318
OK, I see what can be done to mitigate it.
But there is no way we can turn off the behavior where it gives an error message like that every second, right?

Just overnight, that alone can fill the disk (as in my case, millions of records in a few hours)?
 
LVL 12

Expert Comment

by:sachitjain
ID: 36494429
Millions of records in a few hours can happen for high-volume production DBs. For that reason alone you need to have an appropriate transaction log backup strategy in place.
 
LVL 5

Author Comment

by:25112
ID: 36494878
>>Millions of records in a few hours can happen for high-volume production DBs.

Our scenario is a test environment, and the file was not configured optimally to begin with. But if there are options to control the flow of the same error message being logged millions of times, that would help prevent further problems. That is the idea I am checking: is there a setting to configure that stops the same message from flooding the logs?
 
LVL 12

Expert Comment

by:sachitjain
ID: 36494944
What kind of transactions are causing this many log records? Are these inserts, updates, or deletes happening in bulk through some process? Could you please check whether these inserts, updates, or deletes are within a BEGIN TRANSACTION-COMMIT block? Enclosing them within transactions may also help to some extent, if that is not already done.
 
LVL 5

Author Comment

by:25112
ID: 36495903
sachitjain, the database was configured only recently, but the person implemented it on the wrong drives (with low space). That is why the 'LOG IS FULL' errors started coming. That was a one-moment event, but it has turned that one event into hundreds of millions of records in the error log file (bloated from 500 KB to 30 GB).

 
LVL 12

Expert Comment

by:sachitjain
ID: 36500712
What is that one-moment event? I believe once it starts happening, it keeps repeating itself until your log is full. I think you need to find its root cause and fix it.
 
LVL 23

Expert Comment

by:Racim BOUDJAKDJI
ID: 36502456
<<Is it possible to avoid this kind of logging (just warn once and leave it)? What is the best practice for handling this kind of error, which can blow up the error log file?>>
Transactional logging has nothing to do with Error Logging.
 
LVL 5

Author Comment

by:25112
ID: 36507539
sachitjain, the root cause was that the log became full. We hope it won't happen again, but is there really no way to avoid these millions of duplicate entries in the error log?
 
LVL 5

Author Comment

by:25112
ID: 36507549
Racimo,

these two lines:
"2011-08-30 04:50:04.820      spid91      Error: 9002, Severity: 17, State: 2.
2011-08-30 04:50:04.820      spid91      The transaction log for database 'Test905' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases"
have repeated hundreds of millions of times in the error log file (the current size of the error log file is 30 GB).

So my observation is that the transaction log trouble produces duplicate error logging, which makes things even worse. So I am checking whether we can make the error logging more efficient with any setting or option, if possible?
 
LVL 12

Expert Comment

by:sachitjain
ID: 36508139
In that case, take a regular backup (say, scheduled every 5-10 minutes) of your transaction log to avoid its unlimited growth. Each time the transaction log is backed up, SQL Server writes the committed transactions to the backup media and frees that portion of the log for reuse. That actually controls the size of your transaction log. See the following link:
http://databases.about.com/od/sqlserver/a/disaster_3.htm

By the way, you did not say which DB operation is causing millions of records in your transaction log.
 
LVL 5

Author Comment

by:25112
ID: 36511825
>>In that case, take a regular backup
Fully agreed. Still, if for whatever reason it should happen again, it would be better to have double protection, right? Also, the millions of lines of logging do not really help; they only hinder, don't they?

>>you did not say which DB operation is causing millions of records in your transaction log.
We believe it was an insert/update script that maxed out the log file.

 
LVL 12

Assisted Solution

by:sachitjain
sachitjain earned 200 total points
ID: 36515209
Yes, so a second measure you could take is to identify that particular insert/update mix and put it within a transaction block (BEGIN TRANSACTION-COMMIT), if it is not there already.
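
A rough sketch of the idea (the table and column names here are made up purely for illustration):

BEGIN TRANSACTION;
    -- the insert/update mix that was filling the log, grouped as one unit of work
    UPDATE dbo.SomeTable
    SET SomeColumn = 1
    WHERE SomeFilter = 0;

    INSERT INTO dbo.SomeHistory (SomeColumn)
    SELECT SomeColumn
    FROM dbo.SomeTable
    WHERE SomeFilter = 0;
COMMIT TRANSACTION;
GO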
 
LVL 75

Assisted Solution

by:Anthony Perkins
Anthony Perkins earned 100 total points
ID: 36516512
>>if for whatever reason it should happen again, it would be better to have double protection, right?<<
To my knowledge, this cannot be done. The best you can do, as suggested, is to recycle your error log more frequently. In other words, you cannot disable error logging in SQL Server, or even this particular error. Yes, there are hacks, but I would never recommend you do that, and this is certainly not the site for them.

Incidentally, if you think your SQL Server error log is bad, then you have not seen your Windows Event logs.
 
LVL 23

Assisted Solution

by:Racim BOUDJAKDJI
Racim BOUDJAKDJI earned 100 total points
ID: 36516656
Tip: temporarily double-check/disable all trace flags that write to the error log, just to see if there is an effect.
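
A quick way to check, sketched below (trace flag 3605, which redirects some DBCC output to the error log, is only an illustration):

-- List all globally enabled trace flags
DBCC TRACESTATUS(-1);
GO
-- Turn a specific trace flag off globally (the flag number is illustrative)
DBCC TRACEOFF(3605, -1);
GO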
 
LVL 5

Author Comment

by:25112
ID: 36517521
sachitjain/acperkins - thanks for that good idea and perspective, appreciate it.

Racimo, that won't be considered a 'hack', right? Just checking. In an emergency, how would I turn the traces off? Are these system traces you are referring to? If there won't be any 'side effects' I am very willing to test it. Can you please confirm?
 
LVL 23

Expert Comment

by:Racim BOUDJAKDJI
ID: 36518269
Disabling any extra trace flags should not have any side effect, apart from a service restart if they were set as startup parameters.
 
LVL 5

Author Comment

by:25112
ID: 36542764
thanks for confirming..
