peturgretars (Finland) asked:

Performance problem in SQL Server 2005, what to do?

Hi,

I am investigating what seems to be a rather messed-up system here: the SQL Server 2005 64-bit database machine has 14 GB of memory, four dual-core CPUs, and a SAN disk setup configured with RAID 5. The database is around 52 GB in size. CPU usage seems to be fairly low on average. The database has one data file and one log file.

I am not the most knowledgeable person when it comes to SQL Server 2005 so any advice/recommendation is highly appreciated.

I suspect that the slowness in the system is caused mainly by internal and external fragmentation of tables/indexes. Running the following query, I see that many tables/indexes show 60%-98% average fragmentation.

SELECT object_name(i.object_id) AS tablename,
       i.name                   AS tableindexname,
       phystat.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(db_id(),NULL,NULL,NULL,'DETAILED') phystat
       INNER JOIN sys.indexes i
         ON i.object_id = phystat.object_id
            AND i.index_id = phystat.index_id
WHERE  phystat.avg_fragmentation_in_percent > 30

I ran SQL Profiler for a few hours while the database was performing at its worst, with a large number of users concurrently connected. We then analyzed the trace file with the Database Engine Tuning Advisor and got hardly any recommendations. Taking some of the individual queries from the trace file that showed bad response times and running them through the Database Engine Tuning Advisor, I got recommendations promising about 90% improvement in response time, but after applying those indexes and statistics the response time only improved by about 10% or less.

The disk is also reported as 98% fragmented.

I would appreciate it if someone could help me come up with an action plan or a better way of investigating:

- Should we rebuild or reorganize indexes? If both, which first?
- Since the server has 4 CPUs, should we recreate the database with 4 data files, or does that only apply when you have 4 disks (http://blogs.technet.com/mat_stephen/archive/2005/02/02/365370.aspx)?
- If the database were recreated with 4 data files and 4 log files, what would be the best way to move the data from the old database to the new one?
- Knowing that statistics have never been gathered for this database, is there a rule of thumb for how, and how often, statistics should be gathered?
- Is there a better way to tackle/analyze a performance problem in SQL Server 2005, assuming the application should perform fine with all its indexes in place and the current number of concurrent users?
- Does anyone have a good script that could show whether fragmentation is causing a lot of the slowness in the database, if that is indeed the case?

Best regards and thanks in advance!
Pesi
momi_sabag (United States) replied:

1) Yes - you can rebuild indexes online, so I don't see a reason not to (if it's true that your CPU is not overloaded).
2) I don't think that creating 4 files on the same disk will benefit you.
3)
4) That is not true - if auto-update statistics is set to ON, then statistics are gathered automatically. Otherwise you can turn it on and SQL Server will collect statistics for you.
5) There is a DMV I like called sys.dm_exec_query_stats, which shows, for every SQL statement, how much resource it consumes. I would use that DMV (read about it in Books Online) to find the top 10 consuming queries and start with them - see the sketch below.
6) Check out sys.dm_db_index_operational_stats.
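A minimal sketch of that kind of sys.dm_exec_query_stats query (ordering by total CPU time is just one reasonable choice - total_logical_reads is another; note it only covers plans still in the cache):

SELECT TOP 10
       qs.execution_count,
       qs.total_worker_time / qs.execution_count   AS avg_cpu_microseconds,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM   sys.dm_exec_query_stats qs
       CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER  BY qs.total_worker_time DESC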
Mark Wills replied:
Well, rebuilding indexes online does need the Enterprise edition, I think...
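If you do have Enterprise, the online rebuild is just the ONLINE option on ALTER INDEX - a quick sketch, with dbo.BigTable standing in for one of your heavily fragmented tables:

-- Online rebuild of every index on one table (Enterprise edition only)
ALTER INDEX ALL ON dbo.BigTable
    REBUILD WITH (ONLINE = ON);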

If there is that much fragmentation, then yes, it is going to cause a problem. I am surprised to see a SAN reporting so much fragmentation; part of the benefit of having a SAN is significantly more advanced disk management.

RAID 5 is not the best for write performance, and if it is configured as the only "logical" array of 4 disks for that server, even on a SAN, then there is a bit more competing for IO than should be necessary. Creating separate tempdb files per processor really means per physical processor, and given your disk scenario, I would not be doing it.

What you can do is use the Maintenance Plans... set up at least three:

1) Full (nightly) backup
2) Transaction log backups (every quarter-hour, half-hour or hour)
3) Maintenance - index rebuilds, statistics, page reorgs, database sizing checks - probably weekly, when there is an opportunity to rebuild indexes - it can take a while.

With a database your size, I would suggest using the maintenance plans rather than relying on auto-stats alone - try to beat the beast into your control, running the way you want it to. For example:
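If you would rather script that maintenance step than use the wizard, a minimal sketch might look like this - dbo.Orders is a placeholder, and in practice you would loop over the tables your fragmentation query flagged:

-- Rebuild all indexes on a badly fragmented table
-- (use REORGANIZE instead for moderate fragmentation - it stays online on any edition)
ALTER INDEX ALL ON dbo.Orders REBUILD WITH (FILLFACTOR = 90);

-- Refresh statistics across the whole database
EXEC sp_updatestats;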

Before you tackle internal SQL fragmentation, you need to address that disk fragmentation, and also make sure that caching is turned on for the SAN / RAID array. Then you can tackle index fragmentation.

As a general tool, you will need to gather some statistics with perfmon. Probably the biggies will be Cache Hit Ratio: SQL Plans should be better than 70%, Buffer Cache Hit Ratio should be better than 95%, and Page Life Expectancy should be better than 300; then there are the disk queue lengths and processor queue lengths. So there are a few stats worth gathering...
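Most of those SQL counters can also be read from inside the server via sys.dm_os_performance_counters - a rough sketch (Buffer cache hit ratio has to be divided by its base counter to get a percentage):

-- Page life expectancy: seconds a page stays in the buffer pool (want > 300)
SELECT cntr_value AS page_life_expectancy
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%Buffer Manager%'
       AND counter_name = 'Page life expectancy'

-- Buffer cache hit ratio as a percentage (want > 95)
SELECT 100.0 * r.cntr_value / b.cntr_value AS buffer_cache_hit_ratio
FROM   sys.dm_os_performance_counters r
       JOIN sys.dm_os_performance_counters b
         ON b.object_name = r.object_name
WHERE  r.counter_name = 'Buffer cache hit ratio'
       AND b.counter_name = 'Buffer cache hit ratio base'
       AND r.object_name LIKE '%Buffer Manager%'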

Have a read of : http://technet.microsoft.com/en-au/library/cc966540.aspx

After reading the above, you might see that this is potentially a big, big topic, so is there something more specific you would like to know about ?
You might want to look at contig.exe; free download from sysinternals.com. Contig.exe makes a file contiguous, so you can defragment a .mdf, etc.
peturgretars (asker) replied:

Thanks to all comments so far.

I will have a thorough read through all those answers and hopefully come up with a good action plan based on them. The system is so slow that some applications are crashing from time to time because of the poor database performance, and I am trying to figure out where and how best to start tuning. I need some improved performance soon. How can I find the tables/indexes that should be rebuilt first, for example, or is there something else you would start with? I have already taken one of our biggest tables and rebuilt its indexes; it took some time and hasn't shown a significant change.

I would appreciate hearing how and where you would start tuning poor database performance in a 24/7 system, knowing that you can't change the code/queries.


Well, I would start with indexes and statistics, and it will be hard to physically defrag the disk if you cannot take the database offline - the files will be locked.

I would look into some of the more popular queries if at all possible, and try to identify any slow-running or heavily locking queries... Then, using those as a basis, look at http://msdn.microsoft.com/en-us/library/ms191227(SQL.90).aspx and walk through the first topic area (the checklist)...
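For spotting the heavy lockers in real time, sys.dm_exec_requests shows what is running and who is blocking whom - a rough sketch (session_id > 50 just filters out system sessions):

SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.total_elapsed_time,
       st.text AS running_statement
FROM   sys.dm_exec_requests r
       CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
WHERE  r.session_id > 50
ORDER  BY r.total_elapsed_time DESC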

But you will need those statistics - they will help identify IO bottlenecks versus CPU bottlenecks versus memory bottlenecks.
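The cumulative wait statistics are a quick first cut at that IO-versus-CPU-versus-memory question - something like the sketch below (the excluded wait types are just common idle/system waits):

SELECT TOP 10
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type NOT IN ('LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
                         'SLEEP_TASK', 'BROKER_TASK_STOP', 'WAITFOR')
ORDER  BY wait_time_ms DESC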

Don't forget, it could also be hardware / network configuration issues as well (even things like byte-level virus checking can kill a machine).

Consider the following chart, and also have a read of the more detailed Word documents from Microsoft :

Performance Waits and Queues : http://download.microsoft.com/download/4/7/a/47a548b9-249e-484c-abd7-29f31282b04d/Performance_Tuning_Waits_Queues.doc
Troubleshooting Performance : http://download.microsoft.com/download/1/3/4/134644fd-05ad-4ee8-8b5a-0aed1c18a31e/TShootPerfProbs.doc

(attached chart: performance-gains.jpg)
A few things I would like to get clarified:

1. This is a multi-processor server (4 x Intel(R) Xeon(R) X5355 @ 2.66GHz), and from what I have read on the internet I understand that it would be best to create a new database with as many data files (all the same size) as you have processors, and transfer the data from the old database into the new one. See for example http://blogs.technet.com/mat_stephen/archive/2005/02/02/365325.aspx

momi_sabag, you say that this won't benefit me - why is that? And what about tempdb, does the same thing apply there as well?

2. I should have mentioned that one of the first things we noticed was that database autogrowth was configured as "By 1 MB, unrestricted growth". Isn't this a sure indication that the database could be very heavily fragmented, as we have already seen, and could it be that rebuilding indexes isn't really enough to fix that situation?

3. If rebuilding indexes isn't enough, what do you recommend? Is there a way to rebuild the data inside the data file?
1. Yes, that is a school of thought for getting some parallelism happening with the processors, with a view to multi-tasking the IO. But it really does depend on so many things, and it is really getting towards the last tactical move; there are a few considerations like hyperthreading to take into account, and playing with the affinity mask. And really, are you seeing big CPU problems, or any one CPU being thrashed? Do you have the IO bandwidth to manage more computing power throwing itself at the array?

2. The database and transaction logs should be examined and adjusted to an optimal size that takes "normal" organic growth into account. There will always be the odd occasion where some significant event throws out the size, but those are the exception and should be treated as such. A reasonable growth setting is 10 percent - assuming you do have the disk (a sketch of changing the growth setting is at the end of this post).

3. De-fragmenting heap data is challenging. Heap tables are those without a clustered index. The easiest way to defragment one is to add an identity column as a clustered primary key; that will keep it "clean" without having to change too much of the rest of the database (i.e. it should be all but transparent - you might need to change some INSERT statements if you have any that don't specify column names).
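A rough sketch of that last point - dbo.AuditLog and the column/constraint names are placeholders only:

-- Add a surrogate identity column to the heap
ALTER TABLE dbo.AuditLog
    ADD AuditLogID INT IDENTITY(1,1) NOT NULL;

-- Making it the clustered primary key rebuilds the heap into a b-tree,
-- which removes the accumulated heap fragmentation
ALTER TABLE dbo.AuditLog
    ADD CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED (AuditLogID);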

You must gather some statistics to ascertain where the real problem is.

It could end up being the RAID controller queuing disk IO - are there any error messages in the SAN / RAID logs?
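On the autogrowth setting from point 2, the growth increment can be changed in place with ALTER DATABASE - the logical file names below (PRODDB_Data, PRODDB_Log) are placeholders, so check sys.database_files for the real ones:

-- Replace the 1 MB autogrow increments (a fixed size such as 512MB is another common choice)
ALTER DATABASE PRODDB
    MODIFY FILE (NAME = PRODDB_Data, FILEGROWTH = 10%);

ALTER DATABASE PRODDB
    MODIFY FILE (NAME = PRODDB_Log, FILEGROWTH = 10%);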

Attached is the result of the following query; the 9th (I) column shows avg_fragmentation_in_percent:

---These indexes should be rebuilt, not reorganized
--- Alter Index ... Rebuild
DECLARE @dbname VARCHAR(20)
SET @dbname = 'PRODDB'

select *
from sys.dm_db_index_physical_stats (db_id(@dbname),null,null,null,'DETAILED')
where avg_page_space_used_in_percent  > 50

Would be interesting to hear your opinion on this result.
Thanks so much!


(attached: Fragmentation.xls)
ASKER CERTIFIED SOLUTION
Mark Wills (Australia)

This solution is only available to Experts Exchange members.
Thanks so much for these excellent answers you provided!