Solved

Database performance not so randomly slow all of a sudden

Posted on 2007-08-03
Medium Priority
494 Views
Last Modified: 2013-11-05
Experts, I am in need of help after pulling my hair out for two days solid on this.

Environment:
ASP web application on a dedicated hosted server running Windows Server 2003 SE 64-bit, with SQL 2005 Express as the DB. The server is a single-core AMD Athlon 64 3500+ (2.2 GHz) with 1 GB RAM. Although there is a 20 GB system drive plus 2 x 160 GB (RAID 1) drives for data, I now notice that the RAID drives are empty, despite the fact the system was supposed to be configured with the DB data on the RAID drives. All this means the OS, ASP application, DB data (150 MB) and DB logs are on one 20 GB system drive.

Problem:
I am getting complaints from the users of the app (at various locations) about speed problems, but only when accessing particular parts of the app (parts which I know rely on the biggest table in the DB, with just under 200k records). It started a couple of days ago, but the problem is I can't 'experience' the same speed problems myself. The perfmon counters for CPU, disk and memory all seem fine, and whenever I access the app as a user I have no problems at all... it's as fast as ever.

I had been doing some slightly more unusual work on the DB recently, including scheduling backups of data and logs followed by FTP to other locations, and also table dumps to CSV for user download. Some of this involved messing around with sp_configure values, but I have undone all of this work with no improvement.

It's true I am hardly an expert at this stuff, but I am learning quickly. Here are some of the things I have tried...

I have checked indexes and feel confident they cannot be the problem. I did make some changes, and in fact CPU utilisation went down, but the users still experienced slow service.

I cannot run SQL Profiler directly on the machine as it is the Express edition. I have managed to copy the 'blackbox' trace results onto my dev machine and pored over the trace logs... it's clear I need to put some attention in here, as I have Sort Errors, Hash Errors and Missing Join Predicates on what must be the most popular series of stored procedures in the app. The problem is I am stuck here... I cannot figure out how to run custom traces on the server so I can then get the trace file onto my dev machine for analysis.
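From what I've read, the sp_trace_* stored procedures are still available in Express even though the Profiler GUI is not, so I'm guessing something like this would let me script a custom trace (the file path is just a placeholder and the event/column IDs are only a starting point):

DECLARE @TraceID int
DECLARE @on bit
SET @on = 1
-- Create the trace; SQL Server appends the .trc extension itself
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\AppTrace', 50, NULL
-- Capture RPC:Completed (event 10) and SQL:BatchCompleted (event 12)
-- with TextData (column 1), SPID (12) and Duration (13)
EXEC sp_trace_setevent @TraceID, 10, 1, @on
EXEC sp_trace_setevent @TraceID, 10, 12, @on
EXEC sp_trace_setevent @TraceID, 10, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 12, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
-- Start the trace (1 = start, 0 = stop, 2 = close and delete the definition)
EXEC sp_trace_setstatus @TraceID, 1

I could then stop the trace, copy the .trc file to my dev machine and open it in Profiler there, or query it with sys.fn_trace_gettable. Can anyone confirm this is the right approach?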

As for perfmon counters, I do seem to have a lot of lock requests and index reads for the size of the DB. There are SQL user and SQL info errors visible in perfmon, but I guess I cannot get any more detail on these without sorting out the trace?

Please help. I am sure if I could just narrow the possibilities then I can fix it from there.

Thanks
Question by:davie455
7 Comments
 
LVL 51

Assisted Solution

by:Ted Bouskill
Ted Bouskill earned 400 total points
ID: 19627827
Ah, database design is a black art. You are describing a common problem whereby a developer says it's fine when he is working on the system by himself, but as the number of users increases the system gets exponentially slower. Unfortunately there are far too many ways to go wrong.

First off, by default SQL 2005 installs new databases in C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data.
When you create a database you have to specify a different drive (like your RAID drive), or reconfigure SQL to use a different default folder for new databases. (You can also change the location when you install it.)
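For example, something along these lines at creation time puts the files where you want them from the start (the drive letter, names and sizes below are placeholders for your setup):

-- Sketch: create the database with its files on the RAID volume instead of the default folder
CREATE DATABASE MyAppDB
ON PRIMARY
    ( NAME = MyAppDB_data, FILENAME = 'D:\SQLData\MyAppDB.mdf', SIZE = 500MB, FILEGROWTH = 100MB )
LOG ON
    ( NAME = MyAppDB_log, FILENAME = 'D:\SQLData\MyAppDB_log.ldf', SIZE = 100MB, FILEGROWTH = 50MB )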

www.sql-server-performance.com has many good articles on analyzing or building good quality DB applications.  I suggest you start there.
 
LVL 27

Accepted Solution

by:ptjcb
ptjcb earned 1600 total points
ID: 19628125
One of the first steps is to create benchmarks. You must have a reliable guide to measure against. Users will complain that "it's running slow", but you are chasing your tail, especially since you "cannot experience the speed problems." Perception is all.

Are these users remote? How do they access the system? Do you access the system in the same way? Do you access the system as a user or as an administrator? Do you have an account with the same rights and privileges? What is your QA strategy?

>> know rely on the biggest table in the DB with just under 200k records
That's nice, but in that particular part of the app:
Do you use queries with many joins or complicated WHERE statements?
Are all of the tables indexed correctly?
Are the users updating/inserting? Are they selecting?
What are the transaction levels?
Are you running inline queries or stored procedures?
Are you using views or base tables?
How large is the tempDB?

>>I have checked indexes and feel confident they cannot be the problem. I did make some changes, and in fact CPU utilisation went down, but the users still experienced slow service.

How many indexes per table? How many indexes in the heavily-transacted tables? Are you using the same database for transactions and reports?

Is this a production system? You have found one of the many reasons that you should invest in another version of SQL Server instead of making do with SQL Express. SQL Profiler is one of the best tools available to check the health of a database.

1GB of RAM makes a good test machine but a poor production machine.

Database design, as tedbilly says, is a black art. Managing indexes is also a black art, and it is a constant task, not something you do once and forget. How often do you rebuild indexes? How often do you rebuild indexes on the heavily-transacted tables?
 

Author Comment

by:davie455
ID: 19628268
Thanks for the comments so far, lots to think about here, but by way of reply...

>One of the first steps is to create benchmarks. You must have a reliable guide to measure against. Users will complain that "it's running slow", but you are chasing your tail, especially since you "cannot experience the speed problems." Perception is all.

I agree with your view. The users in this case have 1-3 years' experience of the application at its different stages; this, combined with certain 'red flags' (the parts of the application which they claim are struggling are the parts that I worry about anyway), convinces me there is a problem.

>Are these users remote? How do they access the system? Do you access the system in the same way? Do you access the system as a user or as an administrator? Do you have an account with the same rights and privileges? What is your QA strategy?

Yes, all remote, as am I. We all access the private web interface via broadband, the only difference being they have the same service provider for all sites and I have a different one, although it does not seem likely (unfortunately) that this is the problem. The web app connects to the DB using the same credentials; policy is enforced on the web app side.

>> know rely on the biggest table in the DB with just under 200k records
>That's nice, but in that particular part of the app:
>Do you use queries with many joins or complicated WHERE statements?
Yes, I'll prepare some examples to post.

>Are all of the tables indexed correctly?
I believe so; the primary key has a clustered index, and other regular criteria columns have indexes. I did earlier add an index combining columns which are all commonly searched together, if that makes sense.
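For reference, the combined index was along these lines (the table and column names are changed for posting):

-- Hypothetical composite index over columns that are commonly searched together
CREATE NONCLUSTERED INDEX IX_BigTable_CommonSearch
ON dbo.BigTable (SiteID, RecordDate, StatusCode)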

>Are the users updating/inserting? Are they selecting?
Both, but not as frequently as they are selecting. All operations are affected, apparently.

>What are the transaction levels?
I'm not sure which measure will be most helpful here.

>Are you running inline queries or stored procedures?
A mixture, but all the heavy queries are SPs, although possibly not of the greatest design.

>Are you using views or base tables?  
I have added some views recently, but they tend to be used for simple inline queries; the stored procedures mostly access base tables.

>How large is the tempDB?
less than 20gb

>>I have checked indexes and feel confident they cannot be the problem. I did make some changes, and in fact CPU utilisation went down, but the users still experienced slow service.

>How many indexes per table? How many indexes in the heavily-transacted tables?
The main 'big' table has about 20 columns and 6 indexes; the others usually have no more than 3.

>Are you using the same database for transactions and reports?
Yes I am...could this be problematic then?


>Is this a production system? You have found one of the many reasons that you should invest in another version of SQL Server instead of making do with SQL Express. SQL Profiler is one of the best tools available to check the health of a database.
>1GB of RAM makes a good test machine but a poor production machine.

I agree with these points, but at this stage I'm talking about fewer than 10 client machines simultaneously and usually no more than 2 concurrent, on what I thought was a small database. Full versions seemed overkill at this stage... although I am starting to understand the downside of having no Profiler.

>Database design, as tedbilly says, is a black art. Managing indexes is also a black art, and it is a constant task, not something you do once and forget. How often do you rebuild indexes? How often do you rebuild indexes on the heavily-transacted tables?

This server has been in use since early June when we migrated. I have reviewed indexes once since then, and then again earlier today.


 
LVL 27

Assisted Solution

by:ptjcb
ptjcb earned 1600 total points
ID: 19628443
Cool.

tempDB is less than 20GB? The drive you are using is 20GB, yes?

>>OS, ASP application, DB data (150 MB) and DB logs are on one 20 GB system drive

The tempDB is used as the scratch database for queries and stored procedures. For example, let's say that you have a tempDB that started at 100MB, with unlimited growth at 10%. If your users are continually running queries that use the tempDB, then it is constantly growing. How often do you reboot the server or stop and restart SQL Server (because the tempDB is rebuilt each time SQL starts)?
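You can see what the tempDB files are actually doing with something like this (standard catalog views, nothing specific to your setup):

-- Current size and growth settings for tempDB; size is stored in 8KB pages
USE tempdb
EXEC sp_spaceused
SELECT name, size/128 AS size_mb, growth, is_percent_growth
FROM sys.database_files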

If everything is crammed onto this drive, and you have a large page file (because you have only 1GB of RAM), you could have a computer that is running out of room and constantly thrashing. If you moved your databases to the RAID you may have solved your problem, for a while.

You should set up a schedule to reboot the server at certain intervals.

You should rebuild indexes. Start with once a week, or after a busy day. For example, if the users dump a lot of information into the tables at the end of the month, you should rebuild the indexes that night.
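Something along these lines will do it; the table name is a placeholder, and the DMV lets you check fragmentation first so you only rebuild what needs it:

-- Check fragmentation on one table (LIMITED is the cheapest scan mode)
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'LIMITED')

-- Rebuild every index on the table (offline on Express, so run it off-hours)
ALTER INDEX ALL ON dbo.BigTable REBUILD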

Having a transaction and reporting database combined can cause all kinds of problems. For example, if one user decides to run a large report while other users are updating the same tables, then you are going to see many locking problems. Reports should be moved to a reporting database, where they can run without disturbing production.
 

Author Comment

by:davie455
ID: 19628664
Thanks again for your input

Sorry, sorry, sorry. As if it wasn't confusing enough, I meant 20MB for the tempDB... however, for accuracy I have just checked again and it is 9.75MB, with space for 7MB more. As I was checking I thought I would check the main DB again, and wow... its size is over 450MB (and yet I'm sure the file size was only about 150MB), and more concerning, the space available is 35MB. Now in the limited trace logs I could analyse there were 'data file auto grow' entries, which even I know should be avoided... but they all look like they relate to tempDB or msdb.

The server has been rebooted a couple of times in the last 36 hours in a clutching-at-straws moment. I rebooted the old one every night, but I read a lot about SQL Server caching execution plans and other such things, and probably stupidly thought all the experience SQL Server builds up from running gets lost on a reboot.

I can believe a large pagefile, although free space on the drive is still 8GB, so nearly half. I would like to move the DBs to the RAID drives. I should point out that since my first post I now realise I have two 160 GB drives that are RAID 1 and look like this...

Phys. Drive 0:   C:\ System 20GB (simple)             AND    Data [no letter] 130GB (mirrored)
Phys. Drive 1:   Data [no letter] 130GB (mirrored)    AND    Unpartitioned 20GB


I'm not sure how to go about moving the data over to the mirrored drives...I have no RAID experience and clearly don't want to risk mistakes.

I think I will schedule regular reboots again, and I believe you can script index rebuilds?

>Having a transaction and reporting database combined can cause all kinds of problems. For example, if one user decides to run a large report while other users are updating the same tables, then you are going to see many locking problems. Reports should be moved to a reporting database, where they can run without disturbing production.

This I think is good advice, but I am concerned about ensuring up-to-date reports... in fact, most of the reporting involves simple select statements on simple tables... there are some others, though, where I could picture an improvement if moved to a separate DB.

 
LVL 51

Assisted Solution

by:Ted Bouskill
Ted Bouskill earned 400 total points
ID: 19629913
First off you need to format the Data partition. Right-click My Computer and select Manage; it has a Storage branch containing Disk Management. Use that tool to prepare the partition.

Moving the databases is achieved by detaching them, copying the files to the new location, then reattaching them. Sorry, but you will have to take down your website while doing this.
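Roughly like this (the database name and target paths are placeholders):

-- Detach, copy the .mdf/.ldf files to the RAID volume, then re-attach from the new path
USE master
EXEC sp_detach_db @dbname = N'MyAppDB'
-- (copy MyAppDB.mdf and MyAppDB_log.ldf to D:\SQLData, then:)
EXEC sp_attach_db @dbname = N'MyAppDB',
    @filename1 = N'D:\SQLData\MyAppDB.mdf',
    @filename2 = N'D:\SQLData\MyAppDB_log.ldf'

Make sure nothing is connected to the database before you detach it.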
 

Author Comment

by:davie455
ID: 19633783
Okay, things are under control again. Some good generic advice offered, guys, which shaped my thinking as I went about easing the problem. I could not be entirely sure, but I think two actions had the greatest impact...

Although I was able to rebuild indexes throughout the day, *reorganising* the indexes at the end of the day seemed to really help, as did increasing the DB and log file sizes to *prevent auto-grow*.
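For anyone finding this later, the commands involved were along these lines (object names are placeholders):

-- Reorganise indexes (lighter than a rebuild, safe to run during the day)
ALTER INDEX ALL ON dbo.BigTable REORGANIZE

-- Pre-size the data and log files so auto-grow doesn't fire during busy hours
-- (SIZE must be larger than the current file size)
ALTER DATABASE MyAppDB MODIFY FILE ( NAME = MyAppDB_data, SIZE = 1000MB )
ALTER DATABASE MyAppDB MODIFY FILE ( NAME = MyAppDB_log, SIZE = 200MB )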

I still need to move the DB data & logs to the mirrored drives, work even more on my indexing, and somehow track down some SQL User Errors/Sec and full scans, but it seems I have plenty of breathing space again, at least for the time being.

Thanks
