Solved

Sql Server 2000 - Log Shipping Jobs / Slowness

Posted on 2004-10-05
Medium Priority
382 Views | Last Modified: 2011-04-14
We have SQL Servers set up to log ship: one source, one destination, one monitor; the roles never change.
Sometimes the restore-log job fails because the log has not finished copying from the source to the destination server.

The logs are typically 25 KB-2 MB.  However, they sometimes jump to between 60 MB and 300 MB.  Is there a specific reason for this related to log shipping?  That amount of data isn't being loaded into the database, which is only about 2.2 GB total.  These size jumps also don't follow any set pattern, sometimes happening in the early morning hours.  I suspected a transaction-log-intensive reindex or something similar, but I don't see any scheduled jobs that match.

Even at 300 MB, does ~20 minutes seem very sluggish?  Copying a 300 MB file over manually only takes about 45 seconds.
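
In case it helps, here's roughly how I've been checking the log backup sizes (a sketch; 'MyAppDB' stands in for our database name, and it assumes the default backup history in msdb.dbo.backupset is intact):

    -- Transaction log backups with their sizes, newest first
    SELECT   backup_start_date,
             backup_finish_date,
             backup_size / 1024 / 1024 AS backup_size_mb
    FROM     msdb.dbo.backupset
    WHERE    database_name = 'MyAppDB'
      AND    type = 'L'               -- 'L' = transaction log backup
    ORDER BY backup_start_date DESC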

Thanks,

JK



 
Question by:JaffaKREE
14 Comments
 
LVL 34 | Accepted Solution by: arbert (earned 2000 total points) | ID: 12232819
The size of the log depends on the number of transactions that have taken place. I would run Profiler and see if you can find any transactions taking place that you weren't "aware" of.  Also, when do full backups take place (or do you run them at all)?

Personally, I think the built-in log shipping leaves a lot to be desired and is error-prone.  Even though maintenance plans can do it automatically, I still like to configure log shipping manually for total control (here is a how-to: http://www.sql-server-performance.com/sql_server_log_shipping.asp).  At its core, a manual cycle is just backup, copy, restore; see the sketch below.
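
A minimal sketch of one manual cycle (database name and paths are placeholders; WITH STANDBY vs. NORECOVERY depends on whether you want the secondary readable between restores):

    -- On the primary: back up the transaction log
    BACKUP LOG MyAppDB
        TO DISK = N'\\secondary\logship\MyAppDB_tlog.trn'

    -- Copy the .trn file to the secondary by any means (xcopy, robocopy, ...)

    -- On the secondary: apply it, leaving the database ready for the next log
    RESTORE LOG MyAppDB
        FROM DISK = N'D:\logship\MyAppDB_tlog.trn'
        WITH STANDBY = N'D:\logship\undo_MyAppDB.dat'   -- or WITH NORECOVERY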
LVL 43 | Expert Comment by: Eugene Z | ID: 12232889
1. Run SQL Profiler to catch the early-morning '300 MB' process(es).
2. See what SQL Agent jobs are running at that time (a query sketch is below).
3. 300 MB in 20 minutes vs. 300 MB in 1 minute is not good, but the slowness can be the result of another SQL process running at the time of the shipping (check the SQL processes), which can cause blocking/locks.
4. Collect info about what is going on, then analyze it.
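
One way to pull the Agent job history for a given window (a sketch; msdb stores run_date and run_time as integers, hence the numeric comparisons, and the date below is just an example):

    -- Job steps that ran on 2004-10-05 between 02:00 and 03:00
    SELECT   j.name,
             h.run_date,        -- yyyymmdd as an int
             h.run_time,        -- hhmmss as an int
             h.run_duration     -- hhmmss as an int
    FROM     msdb.dbo.sysjobhistory h
             JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
    WHERE    h.run_date = 20041005
      AND    h.run_time BETWEEN 20000 AND 30000
    ORDER BY h.run_time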
LVL 6 | Author Comment by: JaffaKREE | ID: 12240280
Thanks for the input.  I'm going to run SQL Profiler and save the traces in half-hour increments, since the shipped transaction logs are on a 30-minute schedule.
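
If the Profiler GUI gets too heavy for this, the same capture can be scripted as a server-side trace. A sketch, assuming SQL:BatchCompleted is enough to start with; the file path is a placeholder, and the standard trace event/column numbers are used (event 12; columns 1 = TextData, 3 = DatabaseID, 13 = Duration, 14 = StartTime, 17 = Writes):

    -- Create a trace that writes to a file and stops itself after 30 minutes
    DECLARE @tid INT, @maxsize BIGINT, @stop DATETIME, @on BIT
    SET @maxsize = 250                          -- MB per trace file
    SET @stop = DATEADD(minute, 30, GETDATE())
    SET @on = 1

    EXEC sp_trace_create @tid OUTPUT, 0, N'C:\traces\logship', @maxsize, @stop

    EXEC sp_trace_setevent @tid, 12, 1,  @on    -- TextData
    EXEC sp_trace_setevent @tid, 12, 3,  @on    -- DatabaseID
    EXEC sp_trace_setevent @tid, 12, 13, @on    -- Duration
    EXEC sp_trace_setevent @tid, 12, 14, @on    -- StartTime
    EXEC sp_trace_setevent @tid, 12, 17, @on    -- Writes

    EXEC sp_trace_setstatus @tid, 1             -- start the trace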

Are there any good sites with examples of gathered trace data and its interpretation?
LVL 34 | Expert Comment by: arbert | ID: 12240408
LVL 6 | Author Comment by: JaffaKREE | ID: 12241325
I ran traces against the database between 2:00 and 2:30, and between 2:30 and 3:00 PM.

The 2:00-2:30 PM log file was 1.5 MB; the 2:30-3:00 PM log file was 203 MB, well over a hundred times larger.  The trace files, though, were approximately the same size.

I used the 'standard trace' template and ran it against the database.  Most of the events I see are SELECT statements being executed by application users.  I didn't really see anything that looked like the culprit, but there are thousands of rows and very few inserts/updates/deletes.

Ideally, I would like the trace to capture ONLY events that write to the transaction log of this particular database.  Can that be specified?

Thanks,
JK

LVL 34 | Expert Comment by: arbert | ID: 12241426
"Ideally, I would like to have the trace take ONLY events that are writing to the transaction log of this particular database.  Can that be specified ?"

Not really, you would have to look at activity just for that database....Were there any DBCC jobs or SQL Agent transactions?  Also, like I asked above, "Also, when do full backups take place (or do you run them)?"
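
In Profiler that's an edit on the DatabaseID filter; on a scripted trace the equivalent would be something like this (a sketch; the trace id of 1 and the database name are placeholders; column 3 = DatabaseID, logical operator 0 = AND, comparison 0 = equal; the trace must be stopped while the filter is changed):

    -- Restrict an existing trace to a single database
    DECLARE @dbid INT
    SET @dbid = DB_ID('MyAppDB')
    EXEC sp_trace_setfilter @traceid = 1, @columnid = 3,
         @logical_operator = 0, @comparison_operator = 0, @value = @dbid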
LVL 6 | Author Comment by: JaffaKREE | ID: 12241663
Full backups occur once daily, at 3 AM.

The only DBCC jobs I see were between 2:29:30 and 2:30:00: DBCC SHOW_STATISTICS runs.  They seem to be surrounded by EXECs of various third-party application stored procedures.

I'll probably run a longer trace and see if this pattern precedes each of the large log creations.

Other suggestions?

Thanks,
JK
LVL 6 | Author Comment by: JaffaKREE | ID: 12249734
There is a "Transaction Log" trace item.  It seems like this should have a # of writes associated with it, since it refers to a transaction log hit, but the field is always blank.  Am I not understanding it correctly ?

Thanks,
JK
LVL 6 | Author Comment by: JaffaKREE | ID: 12253475
Using the DBCC LOG command, I carefully monitored the size of the database log:

4:27 PM - 32,133
4:34 PM - 33,169
4:36 PM - 33,500
4:42 PM - 35,143
4:45 PM - 2,008,661
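
For reference, the command form used here (per the follow-up below) is the undocumented:

    DBCC LOG('MyAppDB', 4)   -- 'MyAppDB' is a placeholder; 4 = detail level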

Seems like between 4:42 and 4:45, the log gets PACKED with the following row:

OPERATION= LOP_SHRINK_NOOP          CONTEXT=LCX_NULL

I expect to see the same behavior around 6:42, 8:42, and so forth.  The question is: what is LOP_SHRINK_NOOP?  Possibly an autoshrink or autogrow?
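
Checking whether autoshrink is even enabled is quick (a sketch; 'MyAppDB' is a placeholder):

    -- Returns 1 if autoshrink is on, 0 if off
    SELECT DATABASEPROPERTYEX('MyAppDB', 'IsAutoShrink')

    -- If it turns out to be on and unwanted:
    ALTER DATABASE MyAppDB SET AUTO_SHRINK OFF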

   


LVL 34 | Expert Comment by: arbert | ID: 12254239
What parameters did you use with DBCC LOG?  (3, 4, and -1 return the most info.)  As for the output, it's not documented and it's really difficult to find out what the information means.  Can you cross-reference it with a Profiler trace taken at the same time?  The trace should pick up autoshrink and autogrow (make sure you trace Warnings).
LVL 6 | Author Comment by: JaffaKREE | ID: 12257956
I used 4.  The description field is blank for those operations, though (it's blank for almost everything).  I'll run the trace and hopefully pick something up.  Thanks.

LVL 6 | Author Comment by: JaffaKREE | ID: 12258347
There are some events right around the time the log jumped (about 8:45:57 as far as I could make out) that don't appear anywhere else in the trace.  The EventClass is "Hash warning" with subclass "Hash recursion".  It happens 4 times in a row with a couple of single-pass sorts around it, then again a few seconds later.  Do you know what this is indicative of?  The TextData field is blank.  The application name is .Net SqlClient Data Provider.
LVL 34 | Expert Comment by: arbert | ID: 12258615
So there is a .NET application running somewhere that issues a query that causes hashing.  Did the event have a computer/host attached to it so you can tell where it's running from?
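
If the current trace doesn't include those columns, the Hash Warning event can be captured with the host attached. A sketch against a scripted trace (trace id 1 is a placeholder; event 55 = Hash Warning, columns 8 = HostName, 10 = ApplicationName, 12 = SPID; the trace has to be stopped while its events are changed):

    EXEC sp_trace_setstatus 1, 0                -- stop the trace
    DECLARE @on BIT
    SET @on = 1
    EXEC sp_trace_setevent 1, 55, 8,  @on       -- HostName
    EXEC sp_trace_setevent 1, 55, 10, @on       -- ApplicationName
    EXEC sp_trace_setevent 1, 55, 12, @on       -- SPID
    EXEC sp_trace_setstatus 1, 1                -- restart it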
LVL 6 | Author Comment by: JaffaKREE | ID: 12277149
I think the .NET thing was coincidental.  I'll close this and open a new question to see if anyone happens to know what that log operation is.