Solved

SQL Performance Question

Posted on 2013-06-10
267 Views
Last Modified: 2013-06-11
I was recently brought in to evaluate the slow performance of an ERP solution. They are running SQL Server 2008 R2; the database is about 90GB and the server has 16GB of RAM. It was immediately apparent that the server is grossly underpowered, and the prime suspect is the drive array: no one knows exactly what drives are in there or how they are configured, and one of the lights on the array is blinking red. Given this, we have them ordering a new server built to the specs of the ERP solution.

My question right now is: what can I do with the old server to increase its speed while we wait for the new one? They rebooted the server on Friday, and by Monday morning tempdb had grown to over 3GB; I cannot spot any process they perform that would require such growth. We will be running DBCC CHECKDB and DBCC CHECKCATALOG tonight, and perhaps a reindex. What other top SQL tuning steps should I look at to get this server to be marginally acceptable? That blinking red light is most concerning, but there is nothing I can do about that.
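For reference, here is roughly what we plan to run tonight (the database name is a placeholder for their ERP database):

    -- Integrity checks. CHECKDB already includes the checks that
    -- DBCC CHECKCATALOG performs, but we will run both to be thorough.
    DBCC CHECKDB ('ERPDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    DBCC CHECKCATALOG ('ERPDatabase');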
Question by:rwheeler23
15 Comments
 
LVL 9

Expert Comment

by:MattSQL
ID: 39236691
Yep, given a flashing red light I'd want to ensure that all databases pass a clean DBCC CHECKDB. Follow this with a full backup.

-Run a reindex against the database. If you have a limited window you can prioritise the largest and most fragmented indexes.
-If you rebuild an index, its statistics are updated automatically; if you don't, a manual UPDATE STATISTICS would be a good move.
-Are there any other processes running on the server? What is max server memory set to?
-How many CPUs are there?
-How many data files does tempdb have? Where is tempdb located? If there is heavy tempdb use, relocating this database to a separate drive could give a quick performance boost. (A sketch of these checks is below.)
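A quick sketch of those configuration checks in T-SQL (read-only; nothing here changes a setting):

    -- Memory settings and CPU count
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name IN ('max server memory (MB)', 'min server memory (MB)');

    SELECT cpu_count FROM sys.dm_os_sys_info;

    -- tempdb layout: how many data files, and on which drive they live
    SELECT name, physical_name, size / 128 AS size_mb
    FROM tempdb.sys.database_files;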
 

Author Comment

by:rwheeler23
ID: 39237511
CHECKDB took over an hour to run and returned no inconsistencies. This server runs only SQL Server and SSRS. It actually has 32GB of RAM and SQL was set to use only 16GB; I changed that to 29GB. The tempdb.mdf was 6GB and the log file for the database was 5GB. I restarted the SQL service so tempdb shrank down to next to nothing, and I also shrank the log file of the database down to 20MB. Whoever built this server just made one big E drive: it has 14TB of space and only 100GB is being used, and the drives appear to be 7,500 RPM SATA drives. Task Manager reports only 4 CPUs, and it is disk I/O that is flatlined all day long. I could try to move tempdb to the C drive, as there is just the mdf and one ldf. I will see how it runs today and decide.
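For the record, this is roughly how I made the memory change (the value is in MB, so 29GB is 29696):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 29696;  -- 29 GB
    RECONFIGURE;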
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 39238769
>> The tempdb.mdf was 6GB and the log file for the database was 5GB. <<

That could be normal for the processing being done.


>> I reset the SQL service so the tempdb shrunk down to next to nothing. I also shrunk the log file of the database down to 20MB. <<

BAD ideas. They will just grow again, and less efficiently than if you simply pre-allocated them to the proper size.


Given that you have (literally) terabytes of unused disk space, why shrink tempdb down to nothing?
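If tempdb really needs ~6GB for the daily workload, pre-size it once instead. A sketch, assuming the default logical file names and illustrative sizes (check tempdb.sys.database_files for the actual names):

    -- Pre-allocate tempdb so it never has to autogrow during the day
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 6GB);
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2GB);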
 

Author Comment

by:rwheeler23
ID: 39239091
The databases I work with very rarely expand, so seeing them at that size was highly irregular - these people are manually entering data. Now I have a starting point where I know the size of the log files. If in a week they have not grown, I know something unusual was going on; if I see them grow in a burst, I can isolate the day and investigate what may be causing the expansion. Since no one there will admit to what may be going on, I needed to start somewhere.
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 39239178
tempdb is not used for permanent data, but it is used for lots of other things, including processing ORDER BY and GROUP BY clauses.  Therefore, tempdb can grow beyond the size you'd normally expect it to, and even beyond the size of the original data, since many people may all be doing different ORDER BY, groupings, etc., all at the same time.
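If you want to see what is actually consuming tempdb at a given moment, a query along these lines breaks it down (the counts are 8 KB pages):

    SELECT
        SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,     -- temp tables, table variables
        SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb, -- sorts, hashes, spools
        SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb
    FROM tempdb.sys.dm_db_file_space_usage;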
 

Author Comment

by:rwheeler23
ID: 39239230
That is true. I was told on Friday that the server had been rebooted that night, so tempdb would not have grown to over 6GB on just one day's work. If it did, it should have done so again by today, and now I have a benchmark to compare against. This server has the worst performance I have ever seen for this ERP solution - I watched them print an invoice yesterday and it took 2 minutes. I, and they, will be glad when the new server comes online.
 
LVL 69

Accepted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 39239246
1) Is IFI (instant file initialization) turned on?

2) Be sure to pre-allocate sufficient log space on all dbs, or you will get significant waits as log files are dynamically extended.
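A sketch of both. SQL 2008 R2 has no DMV that reports IFI directly; one rough test is to turn on trace flags 3004/3605, grow a data file, and check the error log: if you see "Zeroing" messages for data files, IFI is off. Database and file names below are illustrative.

    -- 1) Rough IFI check: write file-zeroing activity to the error log
    DBCC TRACEON(3004, 3605, -1);
    -- ...grow a data file here, then search the current error log
    -- (xp_readerrorlog is undocumented but widely used):
    EXEC xp_readerrorlog 0, 1, N'Zeroing';
    DBCC TRACEOFF(3004, 3605, -1);

    -- 2) Pre-allocate log space so it is not grown on the fly
    ALTER DATABASE ERPDatabase
        MODIFY FILE (NAME = ERPDatabase_log, SIZE = 8GB);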
 
LVL 9

Assisted Solution

by:MattSQL
MattSQL earned 250 total points
ID: 39239381
I understand your thinking with tempdb and wanting to isolate times when issues occur. But do so with the knowledge that if tempdb needs to grow it will - and this file allocation takes time and resources. Instant File Initialization will speed up data file growth but log files can't use this mechanism.

The thing with tempdb - if this is indeed the bottleneck - is not so much about space but about the fact that it is on the same drive as the other data and log files. Best practice would be to have it isolated on its own drive - but I'm not sure I'd put it on the C drive. That would be asking for trouble if it suddenly grows...

For a server that hosts a 90GB database, 6GB tempdb seems very modest to me.

Good that the DBCCs returned clean.

Have you done any index rebuilds? Defrags? Updated statistics? These might be a quick win.

If you want to look at fragmentation levels you could look at:

http://msdn.microsoft.com/en-us/library/ms188917(v=sql.90).aspx

But be aware that this can take a while on large databases, and because it reads pages into the buffer pool it can cause some cache churn. Run it during a quiet time.
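For example, something like this run in the ERP database (LIMITED mode reads the fewest pages, so it causes the least churn):

    SELECT
        OBJECT_NAME(ips.[object_id])       AS table_name,
        i.name                             AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.[object_id] = ips.[object_id]
       AND i.index_id = ips.index_id
    WHERE ips.page_count > 1000   -- ignore trivially small indexes
    ORDER BY ips.avg_fragmentation_in_percent DESC;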
 

Author Comment

by:rwheeler23
ID: 39239391
Part of what I noticed was that the Network Service account was used to start SQL. Once the new server gets here I will have them create a dedicated service account and then, under Windows User Rights Assignment, grant it "Perform Volume Maintenance Tasks" (which is what enables instant file initialization).
Good point about pre-allocating space; I had forgotten about that one.
 
LVL 9

Expert Comment

by:MattSQL
ID: 39239400
That's the one. And if you have max server memory set appropriately (and on the surface your setting seems reasonable), I would also give the service account Lock Pages in Memory.
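After the restart you can confirm it took effect; the easiest check I know is the startup message in the error log (xp_readerrorlog is undocumented but widely used, and the exact message text varies slightly by version):

    -- Look for a "Using locked pages..." message near the top of the log
    EXEC xp_readerrorlog 0, 1, N'locked pages';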
 

Author Comment

by:rwheeler23
ID: 39239456
Lock Pages in Memory was the other right to grant. I will be checking the growth as soon as they let me back onto the server. I am reluctant to rebuild anything on this server given its sluggishness: they run 16 hours a day, and a rebuild may not be finished in time for the start of the next day. I will ask about starting it on a Friday night, however.
 
LVL 9

Expert Comment

by:MattSQL
ID: 39239466
If you can't have outages for rebuilding indexes, reorganising is a possibility. It's not as effective as a rebuild, but much lighter weight in terms of locks and blocking.

Another option is to rebuild just one or two of the bigger, more fragmented indexes per night during the week. Rather than trying to rebuild everything in one window, which could take several hours, spread the load across multiple smaller windows.
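A sketch, with illustrative table and index names; the usual rule of thumb is to reorganise between roughly 5% and 30% fragmentation and rebuild above that:

    -- Lightweight, online, and can be interrupted at any time:
    ALTER INDEX ALL ON dbo.YourLargeTable REORGANIZE;

    -- Heavier but more thorough; offline on Standard Edition:
    ALTER INDEX IX_YourLargeTable_SomeCol ON dbo.YourLargeTable REBUILD;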
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 39239510
If it's Enterprise Edition, you can do full rebuilds online.
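For example (table name illustrative; note that in 2008 R2, indexes containing LOB columns still can't be rebuilt online):

    ALTER INDEX ALL ON dbo.YourLargeTable
        REBUILD WITH (ONLINE = ON);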
 

Author Comment

by:rwheeler23
ID: 39239578
Another good idea. You folks are the best. I will run sys.dm_db_index_physical_stats to check on the health of the tables and then decide what to do.
 

Author Closing Comment

by:rwheeler23
ID: 39239790
Great tips, all of them!