Solved

How best to allow SQL Server 2005 to utilise multiple processor cores?

Posted on 2008-06-20
11
1,955 Views
Last Modified: 2010-05-18
A client has migrated a database to a new multi-processor, multi-core server and is seeing a degradation of performance on intensive procedures that write to the database.
We think we have pinned this down to SQL Server using only one processor core at a time.
We think this is caused by there being only one datafile, and the architecture (Windows Server 2003) allowing only one thread to write to any one file at once, so we are effectively choking SQL Server to using one core at a time when writing.
This seems fairly fundamental in these days of multi-core, multi-processor systems but we can find surprisingly little information about it or others who have had this problem.
The database is large and complex so that creating a new database with multiple files and simply copying the data across is not a trivial task and before we embark on it I wanted to ask...
1) is our assumption correct, in that for SQL Server to utilise multiple cores/processors when writing, the database must be spread across (or utilise) multiple files?
2) is there any guidance on planning the file use in order to maximise the core/processor usage in future? (i.e. we want to do this only once and get maximum benefit)
3) what is the best way to perform the copying of the data so that
   a) it gets spread across the files and
   b) it is done as quickly as possible
Thanks for any help on this.
0
Comment
Question by:keithfearnley
11 Comments
 
LVL 18

Expert Comment

by:jcoehoorn
If it's a performance issue only while writing on a new system, I'd take a look at your disks first.  Especially if the new system uses raid differently than the old system.
0
 
LVL 1

Author Comment

by:keithfearnley
Thanks jcoehoorn, the bottleneck really does seem to be the processor - we have profiled and run tests. We can see processor time maxing out at 16% (i.e. one of the eight processor cores available), but if we run the same trial query concurrently on three copies of the database (i.e. each with one datafile) we utilise three processor cores' worth. If we run three concurrent queries on the single database the processor again peaks and stays at 16%, but the whole process takes three times as long.
Maybe this doesn't show what we think, but it looked to us like we were processor-bound rather than disk-bound and that using multiple files (although here we are using multiple databases too) allows multiple threads and hence more of our available processor power.
It could be that our interpretation is wrong - what would you suggest we do to check the disk usage, please?
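For reference, one quick way to sanity-check disk pressure versus CPU pressure is the wait-statistics DMV. This is only a sketch, assuming the instance is SQL Server 2005 or later (so sys.dm_os_wait_stats is available):

    -- Compare I/O-related waits against scheduler (CPU) waits.
    -- Large PAGEIOLATCH_* / WRITELOG times point at the disks;
    -- large SOS_SCHEDULER_YIELD and signal wait times point at CPU pressure.
    SELECT wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms   -- time spent queued for a CPU after the wait ended
    FROM   sys.dm_os_wait_stats
    WHERE  wait_type IN ('PAGEIOLATCH_SH', 'PAGEIOLATCH_EX', 'WRITELOG',
                         'IO_COMPLETION', 'SOS_SCHEDULER_YIELD')
    ORDER BY wait_time_ms DESC;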
0
 
LVL 18

Expert Comment

by:jcoehoorn
What version of SQL Server are you running?  Some of the lower editions will limit you.
0
 
LVL 1

Author Comment

by:keithfearnley
Hmm, it is SQL Server 2005 but I'm not exactly certain which edition - standard I think. I can confirm this on Monday.
Unless the limitation is on processors/cores per database, we can show that the instance of SQL Server will use more than one core/processor. I guess we could make a test database with multiple files, fill it and then query it to check that the system will use multiple cores/processors on a single database too, but I'd hoped to avoid the time that would take if possible.
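For reference, the edition and the schedulers (CPUs) the instance can actually use can be read straight from the server; a minimal sketch, assuming a SQL Server 2005 instance:

    -- Which edition/version is this instance?
    SELECT SERVERPROPERTY('Edition')        AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion;

    -- Schedulers marked VISIBLE ONLINE are the ones SQL Server can run queries on.
    SELECT scheduler_id, status
    FROM   sys.dm_os_schedulers
    WHERE  status = 'VISIBLE ONLINE';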

Thanks again for the attention
0
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
usually, when you see a performance degradation after changing from a single-CPU to a multi-CPU machine, it's BECAUSE, by default, each query will try to use all the CPUs. please note the difference between 2 queries each running on 1 of the 2 (or more) CPUs vs 1 query running on all (i.e. 2) CPUs at once.
the first is controlled by the CPU affinity mask, while the second is controlled by 2 parameters: "max degree of parallelism" and a threshold value ("cost threshold for parallelism") telling from which "estimated cost" onwards sql server will use the number of processors specified by "max degree of parallelism".
the CPU affinity mask tells which CPUs can be used by SQL Server globally, which usually should be all of them (unless you have another cpu-intensive application that must be given specific CPUs...)

however, in most cases, the default value of 0 for "max degree of parallelism" (meaning use all CPUs for each query) creates too much overhead: even simple queries get parallelized and their results merged afterwards, so I usually change that setting to 1 (takes effect after RECONFIGURE, no restart needed)
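For reference, the settings described above map onto two sp_configure options; a minimal sketch of forcing single-CPU query plans, assuming sysadmin rights on the instance:

    -- Both are advanced options, so expose them first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- 1 = each individual query uses only one CPU
    -- (other queries can still run concurrently on the other CPUs).
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;   -- takes effect immediately, no instance restart required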
0
 
LVL 1

Author Comment

by:keithfearnley
angelIII,
  That sounds very useful, many thanks.
  Unfortunately I now won't be able to try that out until Monday, but I will do then and post back results.
Regards,
Keith
0
 
LVL 1

Author Comment

by:keithfearnley
angelIII, having had a bit of time to check and think about this and to consult with a colleague, I'm not sure whether there is a misunderstanding here. The database processes are effectively the only things running on the server, and it is these queries, running in isolation, that are suffering timeouts (at the client) while there is spare core/processor capacity available. We are effectively in a test situation where we run only this one query at a time, so there are no interaction effects.
These are older queries that are marked by the optimiser as being suitable for parallel running - whilst we could rewrite them to be more efficient, there are too many for this to be a short-term or cost-effective solution. The problem seems to be the opposite of what you have experienced... it isn't that SQL Server is parallelising and causing overhead in merging results, but that it seems only to be able to use one core and is maxing out that core. This seems (from googling around) to stem from the database being created with only one datafile.
Where you have had queries successfully using multiple cores, have you had multiple data files?
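For reference, should the multi-file route prove necessary, files can be added to the existing database rather than building a new one; a rough sketch with purely hypothetical database, file and path names:

    -- Hypothetical names and drive letters, for illustration only.
    -- New allocations are striped proportionally across the files in a filegroup;
    -- existing data only moves when objects are rebuilt (e.g. clustered index rebuilds).
    ALTER DATABASE MyDatabase
    ADD FILE
        (NAME = MyDatabase_Data2, FILENAME = 'E:\Data\MyDatabase_Data2.ndf', SIZE = 10GB),
        (NAME = MyDatabase_Data3, FILENAME = 'F:\Data\MyDatabase_Data3.ndf', SIZE = 10GB)
    TO FILEGROUP [PRIMARY];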
0
 
LVL 142

Accepted Solution

by:
Guy Hengel [angelIII / a3] earned 500 total points
>Where you have had queries successfully using multiple cores, have you had multiple data files?
not always, but even if there were multiple data files (including dedicated filegroups for indexes), I noticed this behaviour.

let me clarify that for data warehouse systems, this will be different, because the amount of data processed will be large/huge for most queries, and having them processed with parallel execution will indeed be the best option.

for OLTP systems however, the option is simply "deadly", at least with the threshold setting at its default value of "5". you could increase that to 100 and leave parallel query execution enabled if you have a mixed-usage database.
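For reference, the mixed-usage compromise described above corresponds to the 'cost threshold for parallelism' option; a minimal sketch, assuming sysadmin rights on the instance:

    -- Only plans whose estimated cost exceeds the threshold (default 5) may go parallel.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    EXEC sp_configure 'cost threshold for parallelism', 100;
    RECONFIGURE;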

0
 
LVL 1

Author Comment

by:keithfearnley
OK guys, many apologies for leaving this so long (overlapping holidays and ensuing panic on return).
We have had no success with this following the suggestions given, but I am grateful for the time spent on it, particularly by angelIII.
I'd like to leave this open but don't want to clog up the board, so I'm going to close it and award points in gratitude for the effort, despite us not having got to a conclusion - it's not angelIII's fault, after all.
0
 
LVL 1

Author Closing Comment

by:keithfearnley
Thanks for your effort!
0
 

Expert Comment

by:alvareztg
I believe you need SQL Server Enterprise Edition to utilize all physical CPUs - we're in the same boat right now.
0
