Table with very low page count and very high fragmentation causing performance issue. Please help.

We have 3 tables in a database with single-digit page counts and very high fragmentation of 70-90%.  When I run a script to do an offline rebuild of indexes with fragmentation higher than 30%, we end up with errors and extreme performance issues when accessing data in these tables.

When I right-click the table and do a manual rebuild of the index, performance is back to instantaneous, but the fragmentation level on the table stays quite high.

Why am I experiencing such poor performance accessing data in a table with such a low, single-digit page count?

Kevin CrossChief Technology OfficerCommented:
What is the actual table structure (especially data type of the fields in the index)?
What kind of index is it?
NBFAuthor Commented:
I attached a screenshot of the 3 tables and their structures, along with the indexes.  Does this help?
Kevin CrossChief Technology OfficerCommented:
It does help some.  Which of the indexes is giving you a problem?  For the non-clustered indexes, does the name contain the included columns?  What is the fill factor of each index?

In general, we will need more exact information on the challenge.

EDIT: To help you understand my line of questioning, I have seen a similar issue in my environment with UNIQUEIDENTIFIER fields.  Because the values do not arrive sequentially, the system may have to do page splits to insert a new value into the index.  Therefore, if your indexed values are not sequential like an integer identity, the fragmentation can arise from the constant splits.  One fix is to lower the fill factor from 100% to something appropriate, like 65-70%.

P.S. Did you look at physical fragmentation yet?  This and other resource issues can contribute to the fragmentation here.
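As a rough sketch, the fill factor change described above could be applied like this (the table and index names are placeholders, not from your screenshots):

```sql
-- Hypothetical names; substitute your actual table and index.
-- A lower fill factor leaves free space on each page for
-- out-of-order inserts, reducing page splits.
ALTER INDEX IX_ItemSKUs_Code ON dbo.ItemSKUs
REBUILD WITH (FILLFACTOR = 70);

-- Verify the setting took effect.
SELECT name, fill_factor
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.ItemSKUs');
```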

NBFAuthor Commented:
I have included more screenshots on 2 of the table indexes.  

The fill factor checkbox is unchecked on both of these indexes.  Are you suggesting we check the fill factor box and set it to 65-70?  Would we do this on both clustered and non-clustered indexes?
Scott PletcherSenior DBACommented:
If the table has fewer than 8 or 9 pages, SQL will use "mixed extents" to store it.  Therefore, the table could be scattered over as many as one physical extent for every page in the table.  You can't directly prevent SQL from doing this.  MS's official response to this is "just ignore it, it doesn't matter for tables that small."

One workaround is to add dummy rows to the table to force it bigger than 9 pages -- that will cause SQL to use "uniform extents" for that table.  Once that's done, I don't think SQL ever goes back to mixed extents as long as the table is not dropped.  Thus, you can then safely delete the dummy rows.  Unfortunately, it's not always practical to add dummy rows, but it's worth the effort sometimes to get the tables more compact and reduce I/O (no matter how MS rationalizes the issue).
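A minimal sketch of that dummy-row workaround, assuming a hypothetical small table (all names and the filler row count are illustrative):

```sql
-- Pad the table past ~9 pages so SQL Server switches the table
-- to uniform extents, then delete the filler rows afterwards.
INSERT INTO dbo.SmallTable (Code, Description)
SELECT 'ZZ' + RIGHT('0000' + CAST(n AS varchar(4)), 4), 'dummy row'
FROM (SELECT TOP (2000)
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS x;

-- Once the table has crossed the threshold, the dummy rows
-- can be removed safely.
DELETE FROM dbo.SmallTable WHERE Description = 'dummy row';
```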
Kevin CrossChief Technology OfficerCommented:
If you have ruled out other issues, such as the physical fragmentation, the fill factor can help if inserted data is random relative to existing data.

For example, if ItemSKUs is your item master table and you are not using auto-numbered part numbers, the following scenario can happen.

Create item '12345'
Create item '13456'
Create item '12346'

The index must insert the third record between the first two.  Moreover, you can have perfectly fine pages and then remove a bunch of records, leaving empty spots.  This can be seen more with self-referencing tables, as you appear to have with the Vendors table.  When you first create a record, you may not define ParentVendorID, so the record inserts based on the Code-TypeId-VendorID combination order.  Later, the user links the vendor to a parent vendor.  The index can logically maintain the order of the records without changing their physical placement, just by updating the next- or previous-record links.  However, this leaves fragmented pages from a physical storage standpoint.

I hope those two examples are clear.

Before you waste a good deal of time, though, how much are these indexes used?  Make sure they are the right ones for your queries (table usage) before investing too much time.

NBFAuthor Commented:
These tables are in fact between 4 and 7 pages.

So are you saying we can temporarily add dummy rows and then remove the data afterwards using T-SQL, and the table will keep the larger allocation, avoiding this issue in the future?

After we add the dummy rows and remove the data, will we have a larger page count?  How many rows would we need to add for this to be sufficient?

We seem to have lots of tables that have low page count and high fragmentation in our database design.
Kevin CrossChief Technology OfficerCommented:
Good point, Scott!  I also agree, that is not rational.
NBFAuthor Commented:
I have been told the dummy row approach is not possible because we would have to add fake products, etc.  Any other way around this?  It seems crazy that we have to manually rebuild the index using the GUI each morning to keep our application from hanging when accessing data from these tables.

Would the fill factor possibly help?
Kevin CrossChief Technology OfficerCommented:
It helped us; however, we do not have the low page count problem.  I glossed over that part, so I am not sure if it will help, given Scott's comment on the internals of how SQL handles low page counts.

>> You can't directly prevent SQL from doing this.  MS's official response to this is "just ignore it, it doesn't matter for tables that small." <<
Given the above, it may be worth your time to explore which queries are hanging rather than worrying about the fragmentation.  In other words, you may be able to fix this by optimizing the queries.  You likely cannot make drastic design changes, but you can at least attempt some.  For example, the ParentVendorId in the index for Vendors keeps jumping out at me as odd.  Because this column can be NULL, I am not sure what value it adds to the index (at least thinking about how most ERP systems access vendor information), and it shows the table is not fully normalized.  It could improve performance to have an associative table that links a vendor to its parent, with both IDs as the primary key.  That way, you only have rows for records where a relationship exists, and you can join this table efficiently to your main vendor table.
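A minimal sketch of such an associative table, assuming Vendors has an integer VendorID key (the names here are illustrative, not from your schema):

```sql
-- Link table holding rows only for vendors that actually have a parent,
-- instead of a nullable ParentVendorID on every Vendors row.
CREATE TABLE dbo.VendorParent (
    VendorID       int NOT NULL,
    ParentVendorID int NOT NULL,
    CONSTRAINT PK_VendorParent PRIMARY KEY (VendorID, ParentVendorID),
    CONSTRAINT FK_VendorParent_Vendor FOREIGN KEY (VendorID)
        REFERENCES dbo.Vendors (VendorID),
    CONSTRAINT FK_VendorParent_Parent FOREIGN KEY (ParentVendorID)
        REFERENCES dbo.Vendors (VendorID)
);
```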
Scott PletcherSenior DBACommented:
>> I have been told the dummy row thing is not possible due to the fact we would have to add fake products, etc <<

Yep, that's quite common, as I mentioned in my original post ["Unfortunately, it's not always practical to add dummy rows."].

I'd suggest doing two clustered index rebuilds, one immediately after the other, for each table with fewer than 8 pages.  Since the tables are so small, the rebuilds should happen virtually instantly anyway.

In the first rebuild, specify an extremely low fill factor, say 10 to 20.  That will force the table to have at least 5 to 10 times as many pages, and thus exceed 9 pages total, which in turn will cause SQL to use only uniform extents for that table.

Then rebuild with your normal fillfactor, say 90%.  SQL should continue to use uniform extents, even when the table shrinks back below 9 pages.
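That two-step rebuild might look like this (the table and clustered index names are placeholders):

```sql
-- Step 1: a very low fill factor inflates the table past 9 pages,
-- pushing SQL Server onto uniform extents.
ALTER INDEX PK_SmallTable ON dbo.SmallTable
REBUILD WITH (FILLFACTOR = 10);

-- Step 2: rebuild again at the normal fill factor; the uniform
-- extents should persist even as the table shrinks back down.
ALTER INDEX PK_SmallTable ON dbo.SmallTable
REBUILD WITH (FILLFACTOR = 90);
```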
Kevin CrossChief Technology OfficerCommented:
Nice, Scott!  I always assumed "the table shrinks back below 9 pages" meant it re-inherits the same issues.  The crazy things DBAs have to do to work around database design quirks. *smile*

>> If you create an index on an existing table that has enough rows to generate eight pages in the index, all allocations to the index are in uniform extents. <<
They really should add the word "subsequent" to the last sentence, so it is clear that "all [subsequent] allocations...are in uniform extents."
Scott PletcherSenior DBACommented:
If the rebuild works, when you check the table again, you will see less than 1%, or even 0%, fragmentation.
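One way to check is the standard DMV for index fragmentation (the table name here is a placeholder):

```sql
-- Report fragmentation and page count per index for one table.
SELECT i.name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.SmallTable'),
         NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id
 AND i.index_id  = ps.index_id;
```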
NBFAuthor Commented:
They didn't want to create a special job just to rebuild these tables twice a night.  They ended up adding another index and adjusting the currently indexed columns, and that has apparently resolved the issue for the past two days.  I wish I could be more specific.  Thanks for the suggestions; I will keep this in mind for the future.
Scott PletcherSenior DBACommented:
Good luck with it.
Kevin CrossChief Technology OfficerCommented:
Yes, I hope it works out.  Scott, good information on the small table internals.

Best regards,

Scott PletcherSenior DBACommented:
Not good enough for any points, though, I guess :-)
Microsoft SQL Server 2008