Solved

How best to allow SQL Server 2005 to utilise multiple processor cores?

Posted on 2008-06-20
1,959 Views
Last Modified: 2010-05-18
A client has migrated a database to a new multi-processor, multi-core server and is seeing a degradation of performance on intensive procedures that write to the database.
We think we have pinned this down to SQL Server using only one processor core at a time.
We think this is caused by there being only one data file, and by the architecture (Windows Server 2003) allowing only one thread to write to any one file at a time, so we are effectively choking SQL Server down to one core when writing.
This seems fairly fundamental in these days of multi-core, multi-processor systems but we can find surprisingly little information about it or others who have had this problem.
The database is large and complex, so creating a new database with multiple files and simply copying the data across is not a trivial task, and before we embark on it I wanted to ask...
1) Is our assumption correct, in that for SQL Server to utilise multiple cores/processors when writing, the database must be spread across (or utilise) multiple files?
2) Is there any guidance on planning the file use in order to maximise the core/processor usage in future? (i.e. we want to do this only once and get maximum benefit)
3) What is the best way to perform the copying of the data so that
   a) it gets spread across the files and
   b) it is done as quickly as possible (a rough sketch of the sort of layout we have in mind is below)
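To make that concrete, the sort of thing we're considering looks roughly like this; the database name, file names, paths, sizes and table/index names are all placeholders and this is untested:

-- add a new filegroup with two data files (paths and sizes are placeholders)
ALTER DATABASE OurDb ADD FILEGROUP FG_Data2;
ALTER DATABASE OurDb ADD FILE
    (NAME = OurDb_Data2a, FILENAME = 'D:\SQLData\OurDb_Data2a.ndf', SIZE = 10GB, FILEGROWTH = 1GB)
    TO FILEGROUP FG_Data2;
ALTER DATABASE OurDb ADD FILE
    (NAME = OurDb_Data2b, FILENAME = 'E:\SQLData\OurDb_Data2b.ndf', SIZE = 10GB, FILEGROWTH = 1GB)
    TO FILEGROUP FG_Data2;

-- move a large table onto the new filegroup by rebuilding its existing
-- clustered index there (index name and key column are stand-ins for the real ones)
CREATE CLUSTERED INDEX CX_BigTable ON dbo.BigTable (BigTableID)
    WITH (DROP_EXISTING = ON) ON FG_Data2;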
Thanks for any help on this.
Question by:keithfearnley
11 Comments
 
LVL 18

Expert Comment

by:jcoehoorn
ID: 21832205
If it's a performance issue only while writing on the new system, I'd take a look at your disks first, especially if the new system uses RAID differently than the old one.
 
LVL 1

Author Comment

by:keithfearnley
ID: 21832685
Thanks jcoehoorn, the performance really does seem to be the processor - we have profiled and done tests. We can see the processor time maxing out at 16% (i.e. one of the eight processor cores available), but if we run the same trial query concurrently on three copies of the database (i.e. each with one datafile) we utilise three processor-cores' worth. If we run three concurrent queries on the single database the processor again peaks and stays at 16%, but the whole process takes three times as long.
Maybe this doesn't show what we think, but it looked to us like we were processor-bound rather than disk-bound and that using multiple files (although here we are using multiple databases too) allows multiple threads and hence more of our available processor power.
It could be that our interpretation is wrong - what would you suggest we do to check the disk usage, please?
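For what it's worth, a rough check we have been using to see whether the trial query is actually getting a parallel plan (the session filter and the columns chosen are just our guess at what's relevant; SQL 2005 DMVs):

-- run in a second connection while the slow query is executing;
-- a serial plan shows a single task, a parallel plan shows several
-- tasks spread over several schedulers
SELECT r.session_id,
       r.cpu_time,
       COUNT(t.task_address)          AS task_count,
       COUNT(DISTINCT t.scheduler_id) AS schedulers_used
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_tasks      AS t ON t.session_id = r.session_id
WHERE r.session_id > 50   -- skip system sessions
GROUP BY r.session_id, r.cpu_time;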
 
LVL 18

Expert Comment

by:jcoehoorn
ID: 21832815
What version of SQL Server are you running? Some of the lower editions will limit you.
 
LVL 1

Author Comment

by:keithfearnley
ID: 21832857
Hmm, it is SQL Server 2005 but I'm not exactly certain which edition - standard I think. I can confirm this on Monday.
Unless the limitation is on processors/cores per database, we can show that the instance of SQL Server will use more than one core/processor. I guess we could make a test database with multiple files, fill it, and then query it to check that the system will use multiple cores/processors on a single database too, but I'd hoped to avoid the time that would take if possible.
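I won't be at the server until Monday, but I assume something like this will confirm the edition and patch level (untested from here):

-- report the edition and version of the running instance
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;   -- e.g. RTM, SP2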

Thanks again for the attention
 
LVL 142

Expert Comment

by:Guy Hengel [angelIII / a3]
ID: 21832931
Usually, when you see a performance degradation when moving from a single-CPU to a multi-CPU machine, it's BECAUSE, by default, each query will try to use all the CPUs. Please note the difference between 2 queries each running on 1 of the 2 (or more) CPUs vs 1 query running on all (i.e. 2) CPUs at once.
The first is controlled by the CPU affinity mask, while the second is controlled by 2 settings: "max degree of parallelism" and a cost threshold telling SQL Server from which estimated cost onwards it may use the number of processors allowed by "max degree of parallelism".
The CPU affinity mask tells which CPUs can be used by SQL Server globally; usually all CPUs should be allowed (unless you have another CPU-intensive application that should be given specific CPUs...).

However, in most cases the default value of 0 for parallelism (meaning use all CPUs for each query) creates too much overhead, because even simple queries get parallelised and the partial results then have to be merged, so I usually change that setting to 1 (apply with sp_configure and RECONFIGURE).
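As a sketch (option names from memory; please test on a non-production instance first):

-- show the current parallelism-related settings
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask';
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';

-- limit each individual query to one CPU; different queries can still
-- run on different CPUs at the same time
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;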
 
LVL 1

Author Comment

by:keithfearnley
ID: 21832997
angelIII,
  That sounds very useful, many thanks.
  Unfortunately I now won't be able to try that out until Monday, but I will do then and post back results.
Regards,
Keith
 
LVL 1

Author Comment

by:keithfearnley
ID: 21844452
angelIII, having had a bit of time to check and think about this and to consult with a colleague, I'm not sure whether we have a misunderstanding. The database processes are effectively the only things running on the CPU, and it is these queries, running in isolation, that are suffering timeouts (at the client); there is spare core/processor capacity available. We are effectively in a test situation where we can run only this one query at a time, so there are no interaction effects.
These are older queries that are marked by the optimiser as being suitable for parallel running - whilst we could rewrite them to be more efficient, there are too many for this to be a short-term or cost-effective solution. The problem seems to be the opposite of what you have experienced... it isn't that SQL Server is parallelising and causing overhead in merging results, but that it seems only to be able to use one core and is maxing out that core. This seems (from googling around) to stem from the database being created with only one datafile.
Where you have had queries successfully using multiple cores, have you had multiple data files?
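One test we are planning, to separate the two explanations, is to run the same statement with an explicit MAXDOP hint on the single-file database and see whether it ever gets a parallel plan. Roughly like this (the table and query are stand-ins for our real workload):

-- allow this one statement to use up to 8 schedulers regardless of the
-- server-wide setting; if it still runs serially on the single-file
-- database, the data file layout may not be the whole story
SELECT SomeColumn, COUNT(*) AS Cnt
FROM dbo.BigTable
GROUP BY SomeColumn
OPTION (MAXDOP 8);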
 
LVL 142

Accepted Solution

by:
Guy Hengel [angelIII / a3] earned 500 total points
ID: 21844675
>Where you have had queries successfully using multiple cores, have you had multiple data files?
Not always, but even when there were multiple data files (including dedicated filegroups for indexes), I noticed this behaviour.

Let me clarify that for data warehouse systems this will be different, because the amount of data processed will be large for most queries, and having them processed with parallel execution will indeed be the best option.

For OLTP systems, however, the option is simply "deadly", at least with the cost threshold at the default value of 5. You could increase that to 100 and leave parallel query execution enabled if you have a mixed-usage database.
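For the mixed case, something like this (again, test before applying in production):

-- keep parallel plans available but only for genuinely expensive queries
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 100;
RECONFIGURE;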

 
LVL 1

Author Comment

by:keithfearnley
ID: 22358251
OK guys, many apologies for leaving this so long (overlapping holidays and ensuing panic on return).
We have had no success with this following the suggestions given, but I am grateful for the time spent on it, particularly by angelIII.
I'd like to leave this open but don't want to clog up the board, so I'm going to close it and award points in gratitude for the effort, despite us not having got to a conclusion - it's not angelIII's fault, after all.
 
LVL 1

Author Closing Comment

by:keithfearnley
ID: 31469201
Thanks for your effort!
 

Expert Comment

by:alvareztg
ID: 31419441
I believe you need SQL Server Enterprise Edition to utilize all physical CPUs; we're in the same boat right now.
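A quick way to see how many CPUs the instance can actually use for queries (a rough check; I believe this DMV is available on 2005):

-- one VISIBLE ONLINE scheduler per CPU available to the query engine
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';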
