Solved

How to make DBReindex use multiple cores?

Posted on 2008-10-29
436 Views
Last Modified: 2010-08-05
The related question was mine also.

A large table took almost 15 hours to complete a rebuild of the indexes. While it was running, I noticed that it was only using one core out of 8. Is there a way to make the REINDEX use multiple cores, or will it only run on one core since it is a single process (query)?
Question by:dgerler
3 Comments
 
LVL 1

Expert Comment

by:timberbt
ID: 22831484
According to MS it is supposed to use multiple CPUs...

After reading the excerpt below, you may want to watch some performance counters while the reindex is running to see whether disk I/O is the bottleneck (a quick and easy check is the Current Disk Queue Length counter on the drives containing the DB). Since DBREINDEX will actually re-sort the data by your primary key, that may be the major slowdown.

Just for kicks, also try running DBCC INDEXDEFRAG and UPDATE STATISTICS and see whether they take a comparable amount of time or not.

Is this database really fragmented too?  What is the output of ShowContig?
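
A rough sketch of those commands, using placeholder table and database names (substitute your own):

-- check the fragmentation of one table
DBCC SHOWCONTIG (YourBigTable)

-- defragment the clustered index (index ID 1) online, then refresh its statistics
DBCC INDEXDEFRAG (YourDatabase, YourBigTable, 1)
UPDATE STATISTICS YourBigTable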

http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx

There are two distinct advantages of running DBCC DBREINDEX over DBCC INDEXDEFRAG:

- DBCC DBREINDEX rebuilds statistics automatically during the rebuild of the indexes; this can have dramatic improvements on workload performance.
- DBCC DBREINDEX can take advantage of multiple-processor computers and can be significantly faster when rebuilding large or heavily fragmented indexes.
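
For reference, the rebuild the article is describing looks roughly like this (the table name is a placeholder, the empty string means "all indexes on the table", and 90 is just an example fill factor):

-- rebuild every index on the table; statistics are recomputed as part of the rebuild
DBCC DBREINDEX ('YourBigTable', '', 90)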

 

Author Comment

by:dgerler
ID: 22837567
It did not appear to be using more than one core: seven cores were at about 5-10% usage while one core was at about 100%. I checked the max degree of parallelism setting and it shows config and run values of 4.
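
For anyone following along, that setting can be checked with sp_configure ('max degree of parallelism' is an advanced option, so advanced options have to be shown first); a run value of 4 means SQL Server may use up to four CPUs for a single parallel operation:

-- expose the advanced options, then read the current config and run values
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max degree of parallelism'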

That article certainly confirmed what I already knew about several of my larger tables: they definitely need to be reindexed. Here are the results of SHOWCONTIG for two of the tables. The table I reindexed in the other article also had a scan density of about 25% before the reindex, and 99.09% after.

DBCC SHOWCONTIG scanning 'TicketImage' table...
Table: 'TicketImage' (1537856991); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 2777259
- Extents Scanned..............................: 348340
- Extent Switches..............................: 1368527
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 25.37% [347158:1368528]
- Logical Scan Fragmentation ..................: 39.49%
- Extent Scan Fragmentation ...................: 40.52%
- Avg. Bytes Free per Page.....................: 1510.7
- Avg. Page Density (full).....................: 81.34%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.


DBCC SHOWCONTIG scanning 'TicketHeader' table...
Table: 'TicketHeader' (1137855566); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 145315
- Extents Scanned..............................: 18237
- Extent Switches..............................: 72728
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 24.98% [18165:72729]
- Logical Scan Fragmentation ..................: 28.14%
- Extent Scan Fragmentation ...................: 48.43%
- Avg. Bytes Free per Page.....................: 1704.0
- Avg. Page Density (full).....................: 78.95%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
 
LVL 1

Accepted Solution

by:
timberbt earned 500 total points
ID: 22840058
Yeah, no doubt they need to be reindexed. According to that MS document it sounds like it *SHOULD* be using multiple CPUs to do the reindexing. The max degree of parallelism setting is the only switch I know of inside MSSQL that could really affect that. I would still take the time to examine the disk I/O as well, like I said, but I don't think that is the real issue.

I'd probably contact MS and submit a ticket for it. According to them it should be working, and it doesn't appear that it is.

If you have a comparable table (like the ones above) I'd still go ahead and give INDEXDEFRAG and UPDATE STATISTICS a chance, just to see how long they take. I use those quite often in my own maintenance and upkeep since they are online operations, so I run them on all my tables throughout the week.
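
A rough sketch of that kind of weekly routine, using the two tables above and a placeholder database name (sp_updatestats refreshes statistics on every table in the current database):

-- defragment the clustered index (index ID 1) of each large table; these run online
DBCC INDEXDEFRAG (YourDatabase, TicketImage, 1)
DBCC INDEXDEFRAG (YourDatabase, TicketHeader, 1)

-- refresh statistics on all tables in the current database
EXEC sp_updatestats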
