A client has migrated a database to a new multi-processor, multi-core server and is seeing a degradation of performance on intensive procedures that write to the database.
We think we have pinned this down to SQL Server using only one processor core at a time.
We think this is caused by there being only one datafile, and the architecture (Windows Server 2003) allowing only one thread to write to any one file at once, so we are effectively choking SQL Server to using one core at a time when writing.
This seems fairly fundamental in these days of multi-core, multi-processor systems, but we can find surprisingly little information about it, or about others who have hit this problem.
The database is large and complex, so creating a new database with multiple files and simply copying the data across is not a trivial task. Before we embark on it, I wanted to ask:
1) Is our assumption correct, i.e. for SQL Server to utilise multiple cores/processors when writing, must the database be spread across (or utilise) multiple files?
2) Is there any guidance on planning the file layout so as to maximise core/processor usage in future? (We want to do this only once and get the maximum benefit.)
3) What is the best way to perform the copying of the data so that
a) it gets spread across the files, and
b) it is done as quickly as possible?
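For context, the kind of change we are considering looks roughly like the sketch below. This is only an outline of our thinking, not something we have run: the database, filegroup, file names, paths, and sizes are all placeholders, and the table/index at the end is a made-up example of moving one table onto the new filegroup.

```sql
-- Sketch only: add a filegroup with one data file per core
-- (names, paths, and sizes below are placeholders)
ALTER DATABASE OurDb ADD FILEGROUP DataFG;

ALTER DATABASE OurDb
ADD FILE (NAME = DataFG_1, FILENAME = 'D:\Data\OurDb_1.ndf',
          SIZE = 10GB, FILEGROWTH = 1GB)
TO FILEGROUP DataFG;
-- ...repeated for DataFG_2 .. DataFG_N, one per core...

-- Then rebuild a table's clustered index onto the new filegroup,
-- which should redistribute its pages across the files
-- (hypothetical table/index names):
CREATE CLUSTERED INDEX PK_BigTable ON dbo.BigTable (Id)
WITH (DROP_EXISTING = ON)
ON DataFG;
```

Is this the right general approach, or is a fresh database plus a bulk copy a better route?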
Thanks for any help on this.