Solved

High activity in tempdb and very large LDF files

Posted on 2004-10-18
16
8,480 Views
Last Modified: 2012-06-21
Does anyone know what causes high activity in tempdb, and why the LDF files for some databases can be 3-4 times larger than the MDF file?
0
Question by:Jakob Adeltoft
16 Comments
 
LVL 6

Expert Comment

by:JaffaKREE
ID: 12339584
Operations such as reindexing, DB auto-grow and auto-shrink can occupy a large amount of log space without adding any data to the database files.

Check your DB to see if these options are enabled.

You can use SQL Profiler to see the current operations running on the database.

Running DBCC LOG (DBNAME, 4) will give you some clues as to what's in the log file.
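For example, a quick way to see how full each log actually is, before digging into its contents (the database name below is a placeholder; DBCC LOG is undocumented and its output varies between builds):

```sql
-- Show log size and percent-used for every database on the server
DBCC SQLPERF(LOGSPACE);

-- Dump active log records for one database (undocumented command;
-- 'MyDatabase' is a placeholder for your real database name)
DBCC LOG ('MyDatabase', 4);
```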

0
 
LVL 50

Assisted Solution

by:Lowfatspread
Lowfatspread earned 100 total points
ID: 12340178
Are you backing up and shrinking your transaction logs?

High activity in tempdb indicates that you're either doing a lot of sorts or creating large temporary tables, and you should perhaps investigate giving SQL Server some more RAM...

0
 

Author Comment

by:Jakob Adeltoft
ID: 12341112
I have 78 databases on my SQL Server, running on 1 x P4 CPU 3 GHz and 2 GB RAM, of which 1 GB is allocated to SQL Server (and it also uses that 1 GB). The machine is also running a web server. When monitoring CPU and disk queue activity, these values are moderate (10-60%). But the tempdb database shows far more activity when monitoring Transactions/sec - about 60-80 times as much as any of the other 78 databases.

Only 5 databases have a large LDF file - the rest have an LDF file that is always smaller than the MDF file (which is 2-10 MB). The 5 large databases have MDF files of 100-500 MB, but their LDF files range from 500 MB to 6 GB! I can see a pattern: the databases with the most activity (both read and write) have the largest LDF files (5-6 GB).

Concerning reindexing, DB auto-grow and auto-shrink - what can I do about it? Isn't it nice to have auto-grow? My databases have the following settings:
1. DB auto-grow for MDF and LDF: automatically grow by 10% (with "unrestricted file growth" as the maximum)
2. These are ON: auto create statistics, auto update statistics, torn page detection and the Full recovery model - what do these settings do?

No, I'm not doing any backups, nor shrinking the databases. I have tried shrinking manually, but this didn't help.

As I mentioned, there is a link between the databases with high activity and large LDF files. Those databases also have 3 very large log tables with around 100,000-1,000,000 rows each - there are INSERTs into these tables every 30 seconds, because we have an application that polls at that interval.

1. Would more RAM or a second CPU decrease the LDF files? Would backups help, and why?
2. What should I look for when running the DBCC LOG command?

0
 
LVL 2

Expert Comment

by:dogsdieinhotcars
ID: 12341206
If you don't back up your log file, it will continue to expand until it takes over the known universe (or your hard drive :) )

So, do a backup of your log file now, and for your future sanity start regular backups.

To answer your other questions:

1.  More RAM or CPU would not help the LDF sizes.
2.  It's not DBCC --- you want to run

BACKUP LOG {database_name} TO <backup_device>
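A concrete sketch, with placeholder names and paths - note that under the full recovery model a full database backup must exist before the first log backup will run:

```sql
-- One-time full backup (required before the first log backup)
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_full.bak';

-- Regular log backup; each one empties the inactive part of the log
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.trn';
```

Scheduling the log backup every 15-30 minutes keeps the "used" portion of the log small.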
0
 

Author Comment

by:Jakob Adeltoft
ID: 12341262
Does this mean that if I did a backup of one of the worst databases, I would immediately see the LDF file shrink?

I assume that DBCC LOG is a command for getting a clue about what is filling up my LDF files? BACKUP LOG [...] is a different thing, right - doing a backup of my database?
0
 
LVL 2

Assisted Solution

by:dogsdieinhotcars
dogsdieinhotcars earned 150 total points
ID: 12341313
Good question, sorry I wasn't more complete in my answer.

Yes, you'll want to do the BACKUP LOG database TO backup_device (if you need help formulating that command with your real database info, let me know).

Then you type

DBCC SHRINKFILE(yourlogfilename, sizeyouwanttoshrinkitto)

so with the pubs database, you'd type

BACKUP LOG pubs TO pubslogbackup
DBCC SHRINKFILE(pubs_log, 2)

That would shrink the file down to only 2 MB...
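To check whether the backup actually freed space inside the log before shrinking, something like this should work (the logical file name is a placeholder - look up the real one with sp_helpfile):

```sql
-- See how much of each log is actually in use
DBCC SQLPERF(LOGSPACE);

-- Find the logical name of the log file for the current database
EXEC sp_helpfile;

-- Shrink the log file down to roughly 10 MB (target size is in MB)
DBCC SHRINKFILE (MyDatabase_log, 10);
```

SHRINKFILE can only release space past the last active log record, so the result may be larger than the target you ask for.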
0
 
LVL 6

Accepted Solution

by:
JaffaKREE earned 250 total points
ID: 12341335
Backing up the log will NOT shrink the .LDFs; it will empty the "used" log space.  The LDF file size stays fixed and won't change unless the "used" space grows beyond the total .LDF size, in which case the .LDF will grow.

DBCC LOG might help a bit.  You get to see the low-level commands that have been logged.   Look for commands that repeat several thousand times and post them, although this stuff is poorly documented and it'll be mostly guesswork.

BACKUP LOG backs up the transaction log, and will empty the "used" log space.

I have a feeling your 78-database server and its RAM are being crushed under massive usage, especially since the machine has other duties.
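If log backups alone still don't free internal space, SQL Server 2000 also has a heavier-handed route - this discards log records without backing them up and breaks the log backup chain, so a full backup should follow immediately (all names and paths below are placeholders):

```sql
-- Discard the inactive log records without backing them up
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;

-- Then shrink the physical file (logical name from sp_helpfile)
DBCC SHRINKFILE (MyDatabase_log, 100);

-- The log chain is now broken: take a full backup right away
BACKUP DATABASE MyDatabase
TO DISK = 'D:\Backups\MyDatabase_full.bak';
```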
0
 

Author Comment

by:Jakob Adeltoft
ID: 12341520
dogsdieinhotcars (yes, that is in fact a shame ;-) ):
DBCC SHRINKFILE will only shrink down to the used size, right? This means that if there are 500 MB of used space in the log file, then SHRINKFILE(dbname, 2) wouldn't be of much use?

jaffakree:
If the backup empties the used space, then it will actually empty the whole LDF, right? The opposite of "used" would be "unused", and it would be difficult to remove unused space... only if the LDF file does not auto-shrink, or what? Will the LDF then contain 1 GB of empty space, and will the backup remove this 1 GB of unused space?

The reason for my questions is that I'm in doubt whether the large LDF files will decrease performance, or whether they are completely independent and have absolutely no impact on performance. I thought there was a link between the large number of Transactions/sec in tempdb and the size of the LDF files? So if I decreased the size of the LDF files, would tempdb activity also drop significantly?

I'm actually about to get a new dedicated database server with 2 x 3 GHz XEON (512 KB L2 cache) and 2 GB RAM (4 would be nice, right?). I think I've read something on sql-server-performance.com about the L2 cache - the larger the better for SQL Server - but would I actually see any improvement in real life? I'm running SQL Server 2000 on Windows Server 2003 right now and will also do that on the new dedicated server - are there any known drawbacks to using Windows 2003 instead of 2000?
0

 
LVL 6

Expert Comment

by:JaffaKREE
ID: 12341691
Jakob - Your logic is right, but you have it backwards - The LDFs are large BECAUSE of the high activity.  The activity isn't high because the LDFs are too big.  You can have a 50GB log with 0 activity if there just isn't anything going on.

I don't know of any serious differences between 2003 and 2000.  I think you can use all the RAM you can get your hands on, though.

0
 

Author Comment

by:Jakob Adeltoft
ID: 12341727
Ok, so the size of the LDF files is really just my own concern, right? As long as I have plenty of disk space, I shouldn't worry? If that's the case, I should care much more about the activity in tempdb, right, because that is where I could gain some performance?

I'll take your advice and get all the RAM I can for my new server!
0
 
LVL 6

Expert Comment

by:JaffaKREE
ID: 12341781
You can set the overall size of the .LDFs through Enterprise Manager (Properties, Transaction Log).  The size you see here is the size the LDF takes up on disk, at all times.  If it's a 5000 MB .LDF, it always takes up 5000 MB, even if only 2 MB are used and 4998 MB are empty.  You can have the file auto-grow, and unless it goes wildly out of control, it will manage itself through backups and growth.

Ways to increase TempDB performance -
add more RAM
put the TempDB MDF and LDF files on a separate, fast RAID array (or at least their own separate drive)

You should try to figure out where all the activity is coming from.  Do you suspect a particular database/application?  tempdb doesn't just thrash about on its own - something is performing operations that are being executed in TempDB.
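Moving tempdb is done with ALTER DATABASE; a sketch, where the drive path is a placeholder and tempdev/templog are the standard logical names (verify yours with sp_helpfile against tempdb). The move takes effect after the SQL Server service restarts:

```sql
USE master;
GO
-- Point tempdb's data and log files at a dedicated drive;
-- takes effect on the next service restart
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\TempDB\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\TempDB\templog.ldf');
```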

0
 

Author Comment

by:Jakob Adeltoft
ID: 12342008
Ok, I think I will run Profiler against tempdb - that would give me a view of what's going on in tempdb?
0
 
LVL 6

Expert Comment

by:JaffaKREE
ID: 12342033
Yes, it should give you an idea of what's hitting it so hard.

Add the HostName column when you run the trace.
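As a lighter-weight alternative to a full trace, you can list the connections currently sitting in tempdb directly (SQL Server 2000 syntax; sysprocesses shows one row per connection):

```sql
-- Who is currently connected with tempdb as their active database?
SELECT hostname, program_name, loginame, cmd
FROM master..sysprocesses
WHERE dbid = DB_ID('tempdb');
```

Note this only catches sessions whose *current* database is tempdb; work that implicitly spills into tempdb (sorts, temp tables) still needs the Profiler trace to pin down.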

0
 

Author Comment

by:Jakob Adeltoft
ID: 12342055
Why the hostname - what does this tell me? Which database is causing the activity in tempdb?

BTW: Thank you all for your help - I have given you all points.
0
 
LVL 6

Expert Comment

by:JaffaKREE
ID: 12342060
Hostname tells you which machine is generating the request.

0
 

Author Comment

by:Jakob Adeltoft
ID: 12774766
In the meantime I've moved to a new, faster SQL Server machine with 2 CPUs and 2 GB RAM. tempdb is still showing a lot of activity (Transactions/sec) in the Windows 2003 Performance Monitor, but when monitoring only tempdb with SQL Profiler, there are just a few requests every 20-30 seconds.

Regarding the large LDF files, I haven't yet been able to make them smaller by shrinking or backing up. What can I do??? They take up a lot of space - much more than the actual MDF files.
0
