Solved

Diagnosing increase in SQL process running time

Posted on 2009-02-22
Medium Priority
662 Views
Last Modified: 2012-05-06
A SQL process has become very slow (it takes twice as long to run). What steps can be taken to diagnose why the run time has increased, and what could potentially be the cause?
Question by:Awk2009
4 Comments
 
LVL 75

Expert Comment

by:Anthony Perkins
ID: 23705235
>>What steps can be taken to diagnose why there has been an increase in run time and what could potentially be the cause?<<
I am afraid no one has a crystal ball here.  This is like calling up your auto mechanic and telling him your car is running slower.  You are going to have to provide far more detail than that.
 
LVL 1

Expert Comment

by:j8eyd
ID: 23705243

That's a pretty big question and one I doubt you'll get much response to as the whole subject of performance troubleshooting is vast.

When you say "a SQL process", what do you mean?
Have you run Profiler to see what is going on on your system?
Have you checked the Windows environment for CPU, RAM, and disk usage?
Have you looked at the event logs and SQL logs?

If you can provide more info I'm sure we'll help you get there.

Joe
 
LVL 19

Accepted Solution

by:grant300 (earned 2000 total points)
ID: 23705515
Well, you have not given us much to go on.  I will try to give you some generic ideas but it would be nice to have a bit more information.  Can you tell us the database version and perhaps provide a sample of the SQL code that is running slowly?

I will assume Sybase ASE since you have made a tie-in with MS SQL Server, and figure it is v12.5.x for right now.

In general when one or more queries get slower over time, it is the result of a short list of things:
 - increased data volumes.  Your tables have just plain gotten bigger
 - statistics that have not been kept current
 - table page chain disorganization and/or fragmentation
 - stored procedures that have not been recompiled as the data volumes have increased or the statistics have been updated.
 - increased load on the system in terms of number of users and/or query rates

First, pick one or more queries that you know have gotten slower, run them, and keep track of the timings.  Pick queries that are repeatable and do it at a time of day when you can predict and repeat the load on the system.  The best time is when little or nothing else is going on in the system, so tell the wife now that you are going to be working some late evenings.  If your problems are only with updates and inserts, you will have a harder time benchmarking, since you will want to run the same SQL under the same conditions repeatedly.  For extra credit, run your benchmark queries and capture the query plan as a baseline.
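For that baseline, one way to capture the timings and the plan from an isql session (a hedged sketch assuming Sybase ASE; the benchmark query and the orders table are hypothetical stand-ins for your own slow query):

-- Turn on plan and timing output for the session.
set showplan on          -- print the optimizer's chosen query plan
set statistics io on     -- logical/physical reads per table
set statistics time on   -- parse/compile and execution times
go

-- Placeholder query: substitute one of the queries that has gotten slower.
select count(*) from orders where order_date >= '20090101'
go

set showplan off
set statistics io off
set statistics time off
go

Save the output from each run so you can compare plans and I/O counts after every change you make.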

The very next thing to do is run an UPDATE INDEX STATISTICS followed by an sp_recompile on all of the tables in your database.  If you don't already have a stored procedure to do that automatically, write one or find one and download it.  It is fairly easy to create a proc that reads the sysobjects table, finds just the user tables, and runs update index statistics and sp_recompile against each one; a rough sketch follows.  Unless your database is very large and/or it has outgrown the hardware, this should not take more than a few hours to run.
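A rough, hedged sketch of such a proc (assumes Sybase ASE 12.5-style T-SQL; the name maint_update_stats is made up):

create procedure maint_update_stats
as
begin
    declare @tname varchar(255)
    declare @cmd   varchar(512)

    -- walk every user table in the current database
    declare tab_cur cursor for
        select name from sysobjects where type = 'U'

    open tab_cur
    fetch tab_cur into @tname

    while (@@sqlstatus = 0)
    begin
        -- refresh statistics on all columns of all indexes on the table
        select @cmd = 'update index statistics ' + @tname
        execute (@cmd)

        -- flag dependent stored procedures for recompilation on next execution
        execute sp_recompile @tname

        fetch tab_cur into @tname
    end

    close tab_cur
    deallocate cursor tab_cur
end
go

Schedule it to run in a quiet window, e.g. from cron or whatever job scheduler you already use.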

Now rerun your benchmark queries and see what you get.  Unless you have been running an Update Stats job on a regular basis (at least weekly) you will probably see an improvement.  It may not fix the problem but it rarely hurts things.

There is always the possibility that Update Statistics will fix things and life will be good.  You just have to schedule a weekly batch job to run an Update Stats / sp_recompile and you're good to go.  The other extreme is that you are now worse off, in which case you can bet that a large part of the problem is query tuning, and you can start looking at query plans, index usage, etc.  I am not going to write the 100-page manifesto on tuning here, but we can help you with a specific query if you post it and the supporting information, e.g. table cardinalities and available indexes.

How much has your data volume grown over time?  If it has doubled or tripled since the application was written, you could be looking at one or more symptoms of that expansion.  As tables get larger, queries that could sneak by with a table scan here or there, or which fit in the cache when they were smaller, begin to unmask underlying problems.  These could be missing or poorly conceived indexes, queries that were hinted or force-planned for much smaller data sets, or just plain poor data model design.

Increased data volumes may also simply be stressing the available hardware.  32-bit systems with a couple of gigabytes of default cache can begin to strain as the tables and finally the indexes get so large that they won't fit in memory any more.  The system becomes more and more I/O bound as a result.  Of course, this may all be occurring prematurely because the buffer caching was never changed from the installation defaults and/or the hardware platform has a bad I/O bottleneck to begin with.  You would be surprised at how many systems I have seen where the data server is configured to use 2GB of RAM but the default buffer cache is still sized at 48MB.
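To see whether you are in that boat, a couple of standard ASE checks (hedged example; the 1536M figure below is purely illustrative):

sp_configure 'max memory'            -- total memory the server is allowed to use
go
sp_cacheconfig 'default data cache'  -- current size of the default buffer cache
go

-- If the cache is still at its install-time size, grow it.  On older 12.5
-- releases the change may only take effect after a restart.
sp_cacheconfig 'default data cache', '1536M'
go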

You would also be surprised how many really bad hardware configurations people try to run databases on.  If I had a dollar for every 1U Dell server with three disk drives in a RAID-5 configuration that someone was having trouble running a database on, I could retire to my well-financed hobbies ;-)

If your data volume has not grown that much but your system is update intensive, you may have one or more badly fragmented tables.  You can do a REORG on them and see if that helps.  Depending on the nature of the application, you may find you have to make REORG a regular part of your maintenance routine.
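A hedged example of what that looks like (the orders table name is hypothetical; the compact/forwarded-row variants apply to data-only-locked tables):

reorg compact orders     -- reclaim space and clean up forwarded rows
go
-- Heavier option: full rebuild of the table and all of its indexes.
-- It takes an exclusive table lock, so run it in a maintenance window.
reorg rebuild orders
go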

I had one client system that thrashed the two largest tables in the system very hard.  In addition to batch loads and updates of tens of thousands of records at a time, there was also an interactive application that could generate a single transaction that modified tens of thousands of rows at a time.  They had gotten to the point where they were updating statistics four times a day because it seemed to help for a little while.  The real problem was that the tables were hopelessly fragmented.  Regular REORGs helped immensely, but the ultimate solution was to put a clustered index on the table.  We got away with it without too much performance penalty because the data was inserted and manipulated in naturally occurring groups keyed by the first field in the primary key.  We put the clustered index on the identity column, which had the effect of keeping the related data well organized yet did not require the nasty overhead traditionally associated with clustered index maintenance.
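For reference, the shape of that fix (table and column names are made up): clustering on the identity column keeps rows roughly in insert order, so rows in the same natural group stay physically close without the page-split overhead of clustering on a busy business key.

create clustered index batch_detail_cidx
    on batch_detail (row_id)   -- row_id is the identity column
go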

Even if you are doing everything else right, if your system uses stored procedures that are not being recompiled when the statistics are updated, you are stuck with the old query plans.  Even in cases where the optimizer would be smart enough to compensate for the increased table cardinality, if you don't recompile, you never get a new query plan.

Finally, you need to consider that there may simply be more load on the system than there has been in the past.  Particularly in cases where the hardware you are running on is marginally configured, the database can be sensitive to an increase in traffic.  Going from a load where only a couple of operations are being performed simultaneously to an environment where there are truly a number of simultaneous users all pressing the <enter> key at the same time can have a startling effect on performance.  Again, a lack of memory, poorly configured cache, I/O bottlenecks, or just a plain old-fashioned shortage of CPU can become painfully apparent.

This gives you a few things to think about and look at.  With more detailed information about your situation, we can, of course, give you more specific advice.   Depending on what is going on, we may talk about sp_sysmon or MDA tables or some of the more advanced topics in database tuning but for now, stick with the basics.
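When you do get to that point, the usual first step is an sp_sysmon sample taken during a busy period and compared against one from a quiet period, for example:

sp_sysmon '00:10:00'   -- collect engine, cache, and disk counters for ten minutes
go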

Regards,
Bill
 
LVL 75

Expert Comment

by:Anthony Perkins
ID: 24025322
I think this is a no-brainer:  Award points to grant300.
