NBF asked:
Table with very low page count and very high fragmentation causing performance issue. Please help.

We have 3 tables in a database with single-digit page counts and very high fragmentation of 70-90%.  When I run a script to do an offline rebuild of all indexes with fragmentation higher than 30%, we end up with errors and extreme performance problems when accessing data in these tables.

When I right-click the table and do a manual rebuild of the index, the performance is back to instantaneous, but the fragmentation level on the table stays quite high.

Why am I experiencing such poor performance accessing data in a table with such a low, single-digit page count?
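For reference, a rebuild script of the sort described usually drives off the `sys.dm_db_index_physical_stats` DMV. A minimal sketch of such a script, with the 30% threshold from the question (table names and the final `ALTER INDEX` are illustrative, not the asker's actual script):

```sql
-- Hypothetical sketch: list indexes fragmented above 30% in the current database.
SELECT OBJECT_NAME(s.object_id)        AS table_name,
       i.name                          AS index_name,
       s.avg_fragmentation_in_percent,
       s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;  -- skip heaps

-- For each row returned, a rebuild would then be issued, e.g.:
-- ALTER INDEX [index_name] ON [table_name] REBUILD;
```

Note that `page_count` is returned alongside the fragmentation figure, which is relevant here: the tables in question are only a few pages in size.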
Kevin Cross:
What is the actual table structure (especially data type of the fields in the index)?
What kind of index is it?
NBF (ASKER):
I attached screenshots of the 3 tables and their data structures, along with the indexes.  Does this help?
capture1.JPG
Capture2.JPG
Capture3.JPG
It does help some.  Which of the indexes is giving you a problem?  For the non-clustered indexes, does the name contain the included columns?  What is the fill factor of each index?

In general, we will need more exact information on the challenge.

EDIT: To help you understand my line of questioning: I have seen a similar issue in my environment with UNIQUEIDENTIFIER columns.  Because the values do not arrive sequentially, the system may have to do page splits to insert a new value into the index.  Therefore, if your indexed values are not sequential like an integer identity, the fragmentation can arise from the constant splits.  One fix is to lower the fill factor from 100% to something appropriate, say 65-70%.

P.S. Did you look at physical fragmentation yet?  This and other resource issues can contribute to the fragmentation here.
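Adjusting the fill factor as suggested is done as part of an index rebuild. A minimal sketch, where the index and table names are placeholders:

```sql
-- Hypothetical example: rebuild an index leaving 30% free space per leaf page,
-- so non-sequential inserts (e.g. UNIQUEIDENTIFIER values) have room to land
-- without forcing page splits.
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
REBUILD WITH (FILLFACTOR = 70);

-- Or apply the same fill factor to every index on the table at once:
ALTER INDEX ALL ON dbo.SomeTable
REBUILD WITH (FILLFACTOR = 70);
```

The trade-off is that a lower fill factor makes the index physically larger, so it is usually reserved for indexes that actually suffer from split-driven fragmentation.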
NBF (ASKER):
I have included more screenshots on 2 of the table indexes.  

The fill factor checkbox is unchecked on both of these indexes.  Are you suggesting we check the fill factor box and set it to 65-70?  Would we do this on both clustered and non-clustered indexes?
capture4.JPG
Capture5.JPG
Capture6.JPG
Capture7.JPG
If the table has fewer than 8 or 9 pages, SQL will use "mixed extents" to store it.  Therefore, the table could be scattered over as many as one physical extent for every page in the table.  You can't directly prevent SQL from doing this.  MS's official response to this is "just ignore it, it doesn't matter for tables that small."

One workaround is to add dummy rows to the table to force it bigger than 9 pages -- that will cause SQL to use "uniform extents" for that table.  Once that's done, I don't think SQL ever goes back to mixed extents as long as the table is not dropped.  Thus, you can then safely delete the dummy rows.  Unfortunately, it's not always practical to add dummy rows, but it's worth the effort sometimes to get the tables more compact and reduce I/O (no matter how MS rationalizes the issue).
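The dummy-row workaround could be sketched roughly as follows. All table and column names here are invented for illustration; a real table would need valid values for every constraint it carries:

```sql
-- Hypothetical sketch of the dummy-row workaround.

-- 1. Pad the table well past ~9 pages so SQL Server switches the table
--    from mixed extents to uniform extents.
INSERT INTO dbo.SmallTable (Name, SomeValue)
SELECT 'dummy-' + CAST(n AS varchar(10)), 0
FROM (SELECT TOP (2000)
             ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS nums;

-- 2. Rebuild so the table is laid out on the uniform extents.
ALTER INDEX ALL ON dbo.SmallTable REBUILD;

-- 3. Remove the padding; the uniform-extent allocation is retained
--    as long as the table is not dropped.
DELETE FROM dbo.SmallTable
WHERE Name LIKE 'dummy-%';
```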
ASKER CERTIFIED SOLUTION
Kevin Cross
(solution text available to members only)
NBF (ASKER):
These tables are in fact between 4 and 7 pages.

So are you saying we can temporarily add dummy rows using T-SQL and then remove them afterwards, and the table will keep its larger allocation, avoiding this issue in the future?

After we add the dummy rows and remove the data, will we have a larger page count?  How many rows would we need to add for this to be sufficient?

We seem to have lots of tables that have low page count and high fragmentation in our database design.
Good point, Scott!  I also agree, that is not rational.
NBF (ASKER):
I have been told the dummy row approach is not possible, because we would have to add fake products, etc...  Any other way around this?  It seems crazy that we have to manually rebuild the index using the GUI each morning to keep our application from hanging when accessing data in these tables.

Would the fill factor suggestion possibly help?
It helped us; however, we do not have the low page count problem.  I glossed over that part, so I am not sure it will help given Scott's comment on SQL's internal handling of low page counts.

>> You can't directly prevent SQL from doing this.  MS's official response to this is "just ignore it, it doesn't matter for tables that small." <<
Given the above, it may be worth your time to explore which queries are hanging rather than worrying about the fragmentation.  In other words, you may be able to fix this by optimizing queries.  You likely cannot make drastic design changes, but you can at least attempt some.  For example, the ParentVendorId in the index on Vendors keeps jumping out at me as odd.  Because it can be NULL, I am not sure what value it adds to the index (at least thinking of how most ERP systems access vendor information), and it also shows the table is not fully normalized.  It could improve performance to have an associative table that links a vendor to its parent, with both Ids as the primary key.  That way you only have rows where a relationship actually exists, and you can join this table efficiently to your main vendor table.
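A sketch of that associative table; all object and column names are assumptions, since the real schema is only visible in the screenshots:

```sql
-- Hypothetical associative table linking a vendor to its parent vendor.
-- Rows exist only where a parent relationship exists, so no NULLable
-- ParentVendorId is needed on the Vendors table itself.
CREATE TABLE dbo.VendorParent
(
    VendorId       int NOT NULL,
    ParentVendorId int NOT NULL,
    CONSTRAINT PK_VendorParent PRIMARY KEY (VendorId, ParentVendorId),
    CONSTRAINT FK_VendorParent_Vendor
        FOREIGN KEY (VendorId) REFERENCES dbo.Vendors (VendorId),
    CONSTRAINT FK_VendorParent_Parent
        FOREIGN KEY (ParentVendorId) REFERENCES dbo.Vendors (VendorId)
);

-- Joining back to the main vendor table:
-- SELECT v.*, p.ParentVendorId
-- FROM dbo.Vendors AS v
-- JOIN dbo.VendorParent AS p ON p.VendorId = v.VendorId;
```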
>> I have been told the dummy row thing is not possible due to the fact we would have to add fake products, etc <<

Yep, that's quite common, as I mentioned in my original post ["Unfortunately, it's not always practical to add dummy rows."].


I'd suggest doing two clustered index rebuilds, one immediately after the other, for each table under 8 pages.  Since the tables are so small, the rebuilds should happen virtually instantly anyway.

In the first rebuild, specify an extremely low fillfactor, say, 10 to 20.  That will force the table to be at least 5 to 10 times as many pages, and thus exceed 9 pages total, which in turn will cause SQL to do only uniform extents for that table.

Then rebuild with your normal fillfactor, say 90%.  SQL should continue to use uniform extents, even when the table shrinks back below 9 pages.
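The two-step rebuild above could look like this; the table name and exact fill factor values are illustrative:

```sql
-- Hypothetical sketch of the double-rebuild trick for a sub-8-page table.

-- Step 1: a very low fill factor inflates the table past 9 pages,
-- forcing SQL Server onto uniform extents for its allocations.
ALTER INDEX ALL ON dbo.SmallTable
REBUILD WITH (FILLFACTOR = 10);

-- Step 2: immediately rebuild at the normal fill factor; the table shrinks
-- back below 9 pages, but allocations remain on uniform extents.
ALTER INDEX ALL ON dbo.SmallTable
REBUILD WITH (FILLFACTOR = 90);
```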
Nice, Scott!  I guess I always took "the table shrinks back below 9 pages" to mean it re-inherits the same issues.  The crazy things DBAs have to do to work around database design quirks. *smile*

If you create an index on an existing table that has enough rows to generate eight pages in the index, all allocations to the index are in uniform extents.
~Technet
They really should add the word "subsequent" to the last sentence, so it is clear that "all [subsequent] allocations...are in uniform extents."
If the rebuild works, when you check the table again, you will see less than 1%, or even 0%, fragmentation.
NBF (ASKER):
They didn't want to do a special job just to rebuild this table twice a night.  They ended up adding another index and adjusting the current indexed columns and that apparently resolved the issue the past two days.  I wish I could be more specific.  Thanks for the suggestions and I will keep this in mind for the future.
Good luck with it.
Yes, I hope it works out.  Scott, good information on the small table internals.

Best regards,

Kevin
Not good enough for any points, though, I guess :-)