Marianne VAN WYK asked on

SQL Page-Splits, Shrinking of Database and how to resolve fragmentation

We are looking for a way to deal with page splits and to resolve the fragmentation caused by shrinking the database and by deleting or inserting records.

Currently, we are experiencing performance-related issues and would like to find a way of resolving them.

From some research I have done, I was advised to look at introducing routine jobs to manage the page splits and help with better indexing.

I would like to collaborate on this to find a suitable, best-practice approach to resolving our current issues.

Tags: Databases, Microsoft SQL Server, Page split, Fragmentation

Last comment: 8/22/2022 - Mon
Scott Pletcher

First review the indexes to make sure you have the right indexes in place.  It's critical that you have the best clustered index key column(s) on every table (of any significant size), because that is the single biggest overall performance factor for the table.  (Hint: for most tables, the best clus key is NOT an identity column).  

But after the clustered index selection/verification, be sure to also review the non-clustered index(es).  And include SQL Server's stats about missing indexes and current index usage in your review.  After all, what's the point of rebuilding indexes that aren't correct to begin with, i.e. that don't best address all your data access needs?
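As a starting point for that review, the missing-index DMVs can be queried directly. A minimal sketch, assuming nothing beyond the standard system views (the TOP count and the ordering heuristic are my own choices, not from this thread):

```sql
-- Top missing-index suggestions for the current database, ordered by a rough
-- impact score (seeks that would have used the index * estimated improvement).
SELECT TOP (25)
    mid.statement          AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats migs
    ON migs.group_handle = mig.index_group_handle
WHERE mid.database_id = DB_ID()
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```

Treat these rows as suggestions only; they are reset on instance restart and do not account for index consolidation or maintenance overhead.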

Second, determine whether data compression (row/page) is available to you or not (for SQL 2016+, it is; for SQL 2014, 2012, 2008, you must have Enterprise Edition for it to be available).  If so, check to see which tables would benefit from it and get it applied.  In general, nearly all very large tables benefit greatly from page compression.
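To check which tables would benefit, SQL Server can estimate the savings per table before you commit. A sketch, where the schema and table names are placeholders for your own objects:

```sql
-- Estimate space savings from PAGE compression for one table.
-- 'dbo' / 'YourLargeTable' are hypothetical; substitute a real table.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'YourLargeTable',
     @index_id         = NULL,    -- NULL = all indexes on the table
     @partition_number = NULL,    -- NULL = all partitions
     @data_compression = 'PAGE';
```

Compare the current and estimated sizes in the output; if the reduction is substantial, apply it with ALTER TABLE/ALTER INDEX ... REBUILD WITH (DATA_COMPRESSION = PAGE).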

After you've done all that, you're ready to set up an index maintenance plan.
Marianne VAN WYK

Thanks for all the information provided above. After further investigation, I noticed we are using SQL Server 2017 Express, and its limitation is that we cannot use SQL Server Agent, which prevents us from setting up routine/scheduled jobs.

Additional information gathered:

I also discovered that several of my clustered indexes had average fragmentation above 35%, with some averages in the 90% range and page counts greater than 1000.


1. I understand that this can have a big impact on the performance of the database and how each query is handled. What would be best practice for manually defragmenting each table and improving its performance?

2. We have multiple clients connecting to the database (both server and client machines are up to specification, hardware-wise) that are running into "out of memory" exceptions within our VB.net application, and this is pushing us to improve how we maintain our database. Could this be a symptom of fragmentation?

While you cannot use SQL Server Agent, you can use Windows scheduled tasks with PowerShell to run things on the SQL Server.
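A sketch of that approach, assuming sqlcmd is installed and a maintenance script already exists; the script path, instance name, and schedule below are all placeholders:

```powershell
# Register a daily 02:00 Windows scheduled task that runs an
# index-maintenance script via sqlcmd (requires the ScheduledTasks module,
# i.e. Windows 8 / Server 2012 or later, run as an elevated user).
$action  = New-ScheduledTaskAction -Execute 'sqlcmd.exe' `
           -Argument '-S .\SQLEXPRESS -d DatabaseName -i "C:\Scripts\IndexMaintenance.sql"'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName 'SQL Index Maintenance' `
                       -Action $action -Trigger $trigger
```

The same pattern works for scheduling backups, which Express also cannot schedule on its own.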

Since you are using an Express edition, there is no Agent to schedule backups, so if the recovery model is FULL and the log is never backed up, space within the log file is never released.
Marianne VAN WYK

Thanks, I will really look into that suggestion and see if it will work for us as an alternative.

The database recovery model is set to Simple and the log file is not that big; am I missing something regarding the above-mentioned comment?
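For what it's worth, both points can be verified directly, using only standard system views and commands:

```sql
-- Confirm the recovery model of the current database
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Show log file size and percentage in use for every database
DBCC SQLPERF(LOGSPACE);
```

If recovery_model_desc is SIMPLE, log space is reclaimed automatically at checkpoints and the missing-log-backup concern does not apply.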

Additionally, I came across the following code that gives me a report on fragmentation:
USE DatabaseName;  --database of choice
SELECT
	dbschemas.[name] as [Schema],
	dbtables.[name] as [Table],
	dbindexes.[name] as [Index],
	indexstats.avg_fragmentation_in_percent,
	indexstats.page_count
from sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) as indexstats
join sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
join sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
	--and dbschemas.[name] = 'dbo'
join sys.indexes as dbindexes on dbindexes.[object_id] = indexstats.[object_id]
	and indexstats.index_id = dbindexes.index_id
	and dbindexes.[name] is not null
where indexstats.database_id = DB_ID()
	and indexstats.page_count > 500
order by indexstats.avg_fragmentation_in_percent desc;


and that rebuilding the indexes identified above will improve performance. Does this make sense?

The auto rebuild script will look like this:
---Auto rebuild or reorganize indexes based on fragmentation

--Step 1
	--Create a global temp table to hold the fragmentation data
USE DatabaseName;
IF OBJECT_ID('tempdb..##FragIndexes') IS NOT NULL
	DROP TABLE ##FragIndexes;
CREATE TABLE ##FragIndexes
(
	 DatabaseName		SYSNAME NULL
	,SchemaName			SYSNAME NULL
	,TableName			SYSNAME NULL
	,IndexName			SYSNAME NULL
	,[Fragmentation%]	FLOAT NULL
	,page_count			INT NULL
);

--Step 2
	--Insert index info into the table
INSERT INTO ##FragIndexes
	(DatabaseName, SchemaName, TableName, IndexName, [Fragmentation%], page_count)
SELECT
	 DB_NAME(DB_ID()) AS DatabaseName
	,ss.name AS SchemaName
	,OBJECT_NAME(s.[object_id]) AS TableName
	,i.name AS IndexName
	,s.avg_fragmentation_in_percent AS [Fragmentation%]
	,s.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') s
INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id]
	AND s.index_id = i.index_id
	AND i.[name] IS NOT NULL
INNER JOIN sys.objects o ON s.[object_id] = o.[object_id]
INNER JOIN sys.schemas ss ON ss.[schema_id] = o.[schema_id]
	--AND ss.[name] = 'dbo'
WHERE s.database_id = DB_ID()
	AND s.page_count > 500;

--Step 3
	--Check the table
SELECT *
FROM ##FragIndexes;

--Step 4
	--Build the command string: rebuild above 35% fragmentation, reorganize above 5%
DECLARE @TableIndexes NVARCHAR(MAX);
SET @TableIndexes = N'';
SELECT @TableIndexes = @TableIndexes +
	CASE
		WHEN [Fragmentation%] > 35
			THEN CHAR(10) + 'ALTER INDEX ' + QUOTENAME(IndexName) + ' ON '
				+ QUOTENAME(SchemaName) + '.'
				+ QUOTENAME(TableName) + ' REBUILD;'
		WHEN [Fragmentation%] > 5
			THEN CHAR(10) + 'ALTER INDEX ' + QUOTENAME(IndexName) + ' ON '
				+ QUOTENAME(SchemaName) + '.'
				+ QUOTENAME(TableName) + ' REORGANIZE;'
	END
FROM ##FragIndexes
WHERE [Fragmentation%] > 5;

--Step 5
	--Print the command string in 4000-character chunks
	--(PRINT truncates long strings; useful for large data warehouses)
DECLARE @StartOffset INT = 0;
DECLARE @Length INT = 4000;
WHILE @StartOffset < LEN(@TableIndexes)
BEGIN
	PRINT SUBSTRING(@TableIndexes, @StartOffset + 1, @Length);
	SET @StartOffset = @StartOffset + @Length;
END;

--SELECT @TableIndexes

--Step 6
	--Execute the command string
EXECUTE sp_executesql @TableIndexes;

--Step 7
	--Clean up
DROP TABLE ##FragIndexes;



If you need backups that can be restored to a point in time, transaction log backups are needed, meaning the recovery model needs to be FULL.

The link includes different example queries for pulling index fragmentation data.

The example I would consider is the one in item D of the MS link.

You can comment out the execute line (-- exec @command) and run the SQL; it will output each index, its fragmentation level, and what should be run: a REORGANIZE or a REBUILD of the index.