• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 393

SQL Server 2000 - Log Shipping Jobs / Slowness

We have SQL Servers set up to log ship: one source, one destination, one monitor; the roles never change.
Sometimes the restore-log job will fail because the source log has not finished copying to the destination server.

The logs are typically 25 KB-2 MB.  However, sometimes they jump to between 60 MB and 300 MB.  Is there a specific reason for this related to log shipping?  That amount of data isn't being loaded into the database, which is only about 2.2 GB total.  Plus, these size jumps don't follow any set pattern, sometimes happening in the early-morning hours.  I was thinking possibly a transaction-log-intensive reindex or similar, but I don't see any scheduled jobs that match this.
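One way to pin down when the oversized log backups happen is to query the backup history that SQL Server keeps in msdb. This is a sketch; the database name is a placeholder and the 50 MB threshold is an arbitrary cutoff:

```sql
-- Find unusually large log backups for this database (SQL Server 2000).
-- 'MyDatabase' is a placeholder; replace with the actual database name.
SELECT  bs.database_name,
        bs.backup_start_date,
        bs.backup_finish_date,
        bs.backup_size / 1048576.0 AS backup_size_mb
FROM    msdb.dbo.backupset AS bs
WHERE   bs.database_name = 'MyDatabase'
        AND bs.type = 'L'                  -- log backups only
        AND bs.backup_size > 50 * 1048576  -- larger than 50 MB
ORDER BY bs.backup_start_date DESC
```

If the large backups cluster around the same times each day, that points at a scheduled process rather than application load.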

Even at 300 MB, does ~20 minutes seem very sluggish?  Copying a 300 MB file manually takes only about 45 seconds.
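To separate copy time from restore time, the duration of each run of the log-shipping jobs can be pulled from the SQL Agent job history. This is a sketch; the `LIKE '%Restore%'` filter assumes the restore job's name contains that word, so adjust it to match your actual job names:

```sql
-- On the destination server: how long each run of the restore job took.
-- run_date is YYYYMMDD, run_time and run_duration are HHMMSS integers.
SELECT  j.name,
        h.run_date,
        h.run_time,
        h.run_duration
FROM    msdb.dbo.sysjobhistory AS h
        JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE   j.name LIKE '%Restore%'   -- assumption: job name contains 'Restore'
        AND h.step_id = 0         -- job-outcome rows only, not per-step rows
ORDER BY h.run_date DESC, h.run_time DESC
```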

Thanks,

JK



1 Solution
 
arbert commented:
The size of the log depends on the amount of transactions that have taken place.  I would run Profiler and see if you can find any transactions taking place that you weren't "aware" of.  Also, when do full backups take place (or do you run them)?

Personally, I think the built-in log shipping leaves a lot to be desired and is error-prone.  Even though maintenance plans do it automatically, I still like to configure log shipping manually for total control (here is a link to the how-to: http://www.sql-server-performance.com/sql_server_log_shipping.asp).
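A minimal manual log-shipping cycle boils down to three scheduled steps: back up the log on the source, copy the file across, and restore it on the destination. A sketch, with placeholder database names and paths:

```sql
-- On the source server, every N minutes:
BACKUP LOG MyDatabase
TO DISK = 'D:\LogShip\MyDatabase_tlog.bak'
WITH INIT

-- (copy the file to the destination server between backups,
--  e.g. with a scheduled xcopy or robocopy step)

-- On the destination server, keeping the copy readable between restores:
RESTORE LOG MyDatabase
FROM DISK = 'E:\LogShip\MyDatabase_tlog.bak'
WITH STANDBY = 'E:\LogShip\MyDatabase_undo.dat'
```

Doing it by hand like this also makes the "restore ran before the copy finished" failure easier to handle, since you control the job-step ordering and retry logic yourself.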
 
Eugene Z commented:
1. Run SQL Profiler to capture the early-morning "300 MB" process(es).
2. See which SQL Agent jobs are running at that time.
3. 300 MB in 20 minutes vs. 300 MB in 1 minute is not good,
but the slowness can be the result of a SQL process running at the time of shipping (check the SQL processes), which can cause blocking/locks.
4. Collect information about what is going on, then analyze it.
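The blocking check in step 3 can be done with a quick query against sysprocesses on the destination server while the restore job is running (a sketch):

```sql
-- Any session currently blocked by another session (SQL Server 2000).
SELECT  spid,
        blocked,        -- spid of the blocking session, 0 if not blocked
        waittype,
        waittime,
        lastwaittype,
        cmd
FROM    master.dbo.sysprocesses
WHERE   blocked <> 0
```

If rows come back during the 20-minute restore window, the `blocked` column identifies which session to investigate.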
 
JaffaKREE (Author) commented:
Thanks for the input.  I'm going to run SQL Profiler and save the traces in half-hour increments, since the shipped transaction logs are on a 30-minute schedule.

Are there some good sites with examples of gathered trace data and its interpretation?
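One way to interpret a saved trace without eyeballing it row by row is to load the .trc file back into a table with fn_trace_gettable and sort or aggregate in T-SQL. A sketch, with a placeholder file path; note that the columns shown are only populated if they were selected when the trace was defined:

```sql
-- Load a saved Profiler trace file and surface the heaviest statements.
-- 'default' tells fn_trace_gettable to read all rollover files too.
SELECT  TOP 50
        TextData,
        Duration,
        Reads,
        Writes,
        StartTime,
        ApplicationName
FROM    ::fn_trace_gettable('C:\Traces\log_spike.trc', default)
ORDER BY Writes DESC
```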
 
JaffaKREE (Author) commented:
I ran traces against the database between 2:00 and 2:30 PM, and between 2:30 and 3:00 PM.

The 2:00-2:30 PM log file was 1.5 MB; the 2:30-3:00 PM log file was 203 MB, well over a hundred times larger.  The trace files, though, were approximately the same size.

I used the 'Standard' trace template and ran that trace against the database.  Most of the events I see are SELECT statements executed by application users.  I didn't really see anything that made me think it was the culprit, but there are thousands of rows, and very few inserts/updates/deletes.

Ideally, I would like the trace to capture ONLY events that write to the transaction log of this particular database.  Can that be specified?

Thanks,
JK

 
arbert commented:
"Ideally, I would like the trace to capture ONLY events that write to the transaction log of this particular database.  Can that be specified?"

Not really; you would have to look at activity just for that database.  Were there any DBCC jobs or SQL Agent transactions?  Also, as I asked above, when do full backups take place (or do you run them)?
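An approximation of "only this database's writes" is to filter the saved trace by DatabaseID and keep rows that reported physical writes. A sketch (the file path and database name are placeholders, and it assumes the DatabaseID and Writes columns were included when the trace was defined):

```sql
-- Filter a saved trace to one database's write activity.
DECLARE @dbid int
SET @dbid = DB_ID('MyDatabase')   -- placeholder database name

SELECT  TextData,
        StartTime,
        Writes,
        ApplicationName,
        HostName
FROM    ::fn_trace_gettable('C:\Traces\log_spike.trc', default)
WHERE   DatabaseID = @dbid
        AND Writes > 0
ORDER BY Writes DESC
```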
 
JaffaKREE (Author) commented:
Full backups occur once daily, at 3 AM.

The only DBCC jobs I see were between 2:29:30 and 2:30:00: DBCC SHOW_STATISTICS jobs.  They seem to be surrounded by some EXEC calls to various third-party application stored procedures.
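Since statistics activity showed up right before the spike, it may be worth checking whether automatic statistics maintenance is switched on for the database. A sketch using the documented property function (database name is a placeholder):

```sql
-- Database-level statistics settings (SQL Server 2000).
-- 1 = on, 0 = off, NULL = invalid input.
SELECT  DATABASEPROPERTYEX('MyDatabase', 'IsAutoCreateStatistics') AS auto_create_stats,
        DATABASEPROPERTYEX('MyDatabase', 'IsAutoUpdateStatistics') AS auto_update_stats
```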

I'll probably run a longer trace and see if this pattern precedes each of the large log creations.

Other suggestions?

Thanks,
JK
 
JaffaKREE (Author) commented:
There is a "Transaction Log" trace event.  It seems like it should have a number of writes associated with it, since it refers to a transaction-log hit, but the field is always blank.  Am I misunderstanding it?

Thanks,
JK
 
JaffaKREE (Author) commented:
Using the DBCC LOG command, I carefully monitored the size of the database log (row counts from DBCC LOG):

4:27 PM - 32,133
4:34 PM - 33,169
4:36 PM - 33,500
4:42 PM - 35,143
4:45 PM - 2,008,661

It seems that between 4:42 and 4:45, the log gets PACKED with the following row:

OPERATION= LOP_SHRINK_NOOP          CONTEXT=LCX_NULL

I expect to see this same behavior around 6:42, 8:42, and so forth.  The question is: what is LOP_SHRINK_NOOP?  Possibly an autoshrink or autogrow?
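LOP_SHRINK_NOOP entries filling the log are consistent with the autoshrink theory, and that option is quick to check and, if needed, turn off. A sketch (database name is a placeholder):

```sql
-- 1 means autoshrink is enabled for the database.
SELECT DATABASEPROPERTYEX('MyDatabase', 'IsAutoShrink') AS is_auto_shrink

-- If it returns 1, autoshrink can be disabled with:
-- ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF
```

Autoshrink on a regular cycle would also fit the "every two hours" pattern observed above.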

   


 
arbert commented:
What parms did you use with DBCC LOG?  (3, 4, and -1 will return the most info.)  As far as the output goes, it's not documented and it's really difficult to find out what the information means.  Can you cross-reference it with a Profiler trace taken at the same time?  The trace should pick up autoshrink and autogrow (make sure you trace Warnings).
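For reference, DBCC LOG is undocumented and its output format is not guaranteed, but the usual invocation looks like this (database name is a placeholder):

```sql
-- Send DBCC output to the client session.
DBCC TRACEON (3604)

-- Second argument controls detail: 0 is minimal, 3 and 4 add more
-- columns, and -1 is commonly reported to return the most information.
DBCC LOG ('MyDatabase', 3)
```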
 
JaffaKREE (Author) commented:
I used 4.  The description field is blank for those operations, though (it's blank for almost everything).  I'll run the trace and hopefully pick something up.  Thanks.

 
JaffaKREE (Author) commented:
There are some events right around the time the log jumped (it was about 8:45:57 as far as I could make out) that don't appear anywhere else in the trace.  The EventClass is "Hash warning", subclass "Hash recursion".  It happens 4 times in a row with a couple of single-pass sorts around it, then again a few seconds later.  Do you know what this is indicative of?  The TextData field is blank; the Application Name is .Net SqlClient Data Provider.
 
arbert commented:
So there is a .NET application running somewhere that issues a query whose plan uses hashing.  Did the event have a computer/host name attached so you can tell where it's running from?
 
JaffaKREE (Author) commented:
I think the .NET thing was coincidental.  I'll close this and open a new question to see if anyone happens to know what that log operation is.