Solved

Slow performing database

Posted on 2016-07-17
81 Views
Last Modified: 2016-08-15
I investigated a client's database today and the situation looks alarming. It has a size of 15,617 MB and space available of only 4.59 MB, with autogrowth of only 1 MB (obviously far too small). I obtained this information by clicking on Properties in SSMS. This looks to me like the reason why they are having performance problems, and yet there is 22.1 GB of free space available on that drive. Am I interpreting this information correctly, and if so, how do you recommend that this problem be handled?

Also, they have a maintenance plan in which a 'Shrink database' task follows immediately after a 'Rebuild index' task. I am informed that this is bad practice, as the shrink will undo the good work done by the rebuild and cause more fragmentation. Please confirm.
Question by:Philip van Gass
27 Comments
 
LVL 7

Expert Comment

by:Jason Clark
ID: 41716330
Yes, it is very important to monitor the storage system to see whether storage has become the cause of slow database performance. Read here: http://www.sqlmvp.org/fixing-sql-server-database-performance-issues/
0
 
LVL 7

Expert Comment

by:Dung Dinh
ID: 41716338
A 'Shrink database' task is not recommended in a PRODUCTION environment. Basically, the mechanism of Shrink Database is to move data pages around and release unused/empty pages, so it causes another issue: fragmentation.

That renders the preceding 'Rebuild index' task meaningless.
1
 

Author Comment

by:Philip van Gass
ID: 41716393
There are issues about the database size that I am not clear about. When I right-click on the database and click on 'Properties' I see the following:
(1) At 'General' it tells me that the database size is 15,617 MB and that there is 1.79 MB of space available.
(2) At 'Files', under the heading 'Initial size', it tells me that the data file is 15,151 MB and the log file is 468 MB.
(3) Autogrowth for the data file is set at 1 MB, while for the log file it is set at 10%.
(4) Checking the hard drive where the database is located, I see that there is over 20 GB of free space available, which means there should be plenty of space for the database to grow into. But is it expanding into that space, or are the configuration settings preventing it?

Does the 1.79 MB of space available apply only before autogrowth kicks in, or not?
Why is autogrowth set at 1 MB for the data file but 10% for the log file? Does this make for good database administration?
0
 
LVL 8

Expert Comment

by:Jose Torres
ID: 41717367
Does the 1.79 MB of space available apply only before autogrowth kicks in, or not?
No, this reports how much free space is available in your current data/log files.
So across the 15,617 MB of your data and log files, there is only 1.79 MB of free space.

Why is autogrowth set at 1 MB for the data file but 10% for the log file?
1. The person who set it up did not know what they were doing.
2. There was no space at the beginning.
3. It was a test system that was then made into a production system.
4. Somebody set up model with those values.

Either way, it is totally inefficient.
As a general rule "I" do not use percentages for growth: as the files grow, each percentage increment will be greater than the last, and the performance problems will get worse as time goes on.
With 1 MB growth, I would guess SQL Server will be spending most of its time allocating space.

I would suggest starting small for now and setting it to 100 MB each.
I assume that you are not capturing database size statistics to evaluate your actual growth rate.
You could use the msdb backup system views to get an idea (note this is not 100% accurate, but it will give you a ballpark figure for your growth).

Once you have some growth statistics you can adjust for better management.

Additionally, check the growth settings of model, as any database created will inherit its values.
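
For example, a minimal sketch (replace <db_name> with the actual database name; backup sizes reflect used pages rather than allocated file size, so treat the numbers as a ballpark only):

-- Ballpark growth trend from full-backup sizes over time.
SELECT CONVERT(date, bs.backup_start_date) AS backup_date,
       CAST(bs.backup_size / 1048576.0 AS decimal(12, 1)) AS full_backup_mb
FROM msdb.dbo.backupset bs
WHERE bs.database_name = '<db_name>'
  AND bs.type = 'D'   -- 'D' = full database backup
ORDER BY bs.backup_start_date;

-- Growth settings that new databases will inherit from model.
-- growth is in 8 KB pages unless is_percent_growth = 1.
SELECT name, growth, is_percent_growth
FROM model.sys.database_files;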
1
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41717414
Why is autogrowth set at 1 MB for the data file but 10% for the log file? Does this make for good database administration?

Those are SQL Server's defaults for the model db. The model db is the "shell" from which new databases are created; thus, when the client db was created, it inherited those settings, as awful as they both are. 1 MB is far too small, and % growth should never be used.

As to performance:

1) Verify that IFI (Instant File Initialization) is enabled; if not, enable it.  Do this before you increase the data file growth amount.  Then set the data growth amount to 100MB or so.  
Btw, use a SQL command for this, NOT the GUI!
I know you're probably not familiar with those commands, but they're not that difficult. And using a command (A) gives you an accurate record of the change, (B) avoids any "flakiness" that you sometimes get from the GUI, and (C) makes future changes much easier.

ALTER DATABASE <db_name> MODIFY FILE ( NAME = <logical_data_file_name>, GROWTH = 100MB )

You can run "EXEC sp_helpfile" to get a list of the logical file names (first column in the sp_helpfile result).
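
If the instance is SQL Server 2016 SP1 or later (an assumption; older builds do not expose this column), you can also verify IFI straight from a DMV, e.g.:

-- 'Y' means the service account holds the "Perform volume maintenance
-- tasks" right, i.e. IFI is in effect for data files.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';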
1
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41717422
2) Shrink and re-grow the log, while setting a more appropriate growth size for it.  The reason for doing this is to make sure you don't have too many VLFs.  You can Google "SQL Server log VLF" if you really want to know the details.

CHECKPOINT;
DBCC SHRINKFILE(2); --shrink the log as much as possible
--increase file size in two steps so that VLFs are not too large
ALTER DATABASE <db_name> MODIFY FILE ( NAME = <logical_log_file_name>, SIZE = 256MB, GROWTH = 30MB )
ALTER DATABASE <db_name> MODIFY FILE ( NAME = <logical_log_file_name>, SIZE = 512MB )
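
To sanity-check the VLF count before and after, a quick sketch:

DBCC LOGINFO;  -- one result row per VLF, so the row count is the VLF count
-- DMV alternative on SQL Server 2016 SP2 / 2017 and later:
-- SELECT COUNT(*) AS vlf_count FROM sys.dm_db_log_info(DB_ID());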
1
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41717427
3) Never shrink the data files / database unless you have a very specific reason for it, such as a major purge of data. Rebuilding indexes is good when needed, wasteful when not. The problem with maintenance plans (MPs) is that they rebuild every index every time (afaik; it's been a long time since I've seen an MP). Minion Reindex, for example, is free and rebuilds only those indexes that need it.
1
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41717429
4) Real performance gain comes from tuning indexes, in particular first getting the best clustered index on every (significant) table.  That most often is NOT an identity column, no matter what some people say.  

That is, designing indexes does not consist merely of slapping an identity column on every table as the clustering key and then building a gazillion covering indexes, one for every major query, again, no matter what some people say/imply.

You'll need to review at least missing index stats, index usage stats and index operational stats to do proper index tuning.  Cardinality of data is necessary for even better tuning.  Unfortunately, there's no easy or magic way to do this, although naturally specific queries can give you a good overview to do the tuning from.
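
For a quick first look before any heavier analysis, something like this minimal sketch reads the usage stats (the counters reset at each SQL Server restart, so judge them against uptime):

-- Per-index read/write counts for user tables in the current database.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats ius
       ON ius.database_id = DB_ID()
      AND ius.object_id = i.object_id
      AND ius.index_id = i.index_id
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY ISNULL(ius.user_seeks, 0) + ISNULL(ius.user_scans, 0) DESC;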
1
 
LVL 8

Assisted Solution

by:Jose Torres
Jose Torres earned 125 total points
ID: 41717502
Scott,

One correction: before SQL 2016, the default growth setting for model was 10% for both the data and log files.
Since his database autogrowth settings are 1 MB and 10%, either someone configured it that way or model was re-configured with those settings.
1
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41717532
CORRECTION: the param in the ALTER command is "FILEGROWTH", not just "GROWTH":

ALTER DATABASE <db_name> MODIFY FILE ( NAME = <logical_data_file_name>, FILEGROWTH = 100MB )
1
 
LVL 7

Expert Comment

by:Dung Dinh
ID: 41717960
The other experts have already given you a lot of information on database configuration. One point I want to mention is that when you create a database in SQL Server, it copies all the configuration of the model db to your new database if you don't define other values.
By default, 1 MB for the data file and 10% for the log file are the model db's configuration. So if you create a db with just CREATE DATABASE test, it will have the 1 MB and 10% configuration.

Actually, I am not sure what your problem is exactly. Could you provide more detail?
- If queries run very slowly, then we need to capture information such as the execution plan, wait types, or even blocking that happened in your system, to identify the bottlenecks and performance issues.
1
 

Author Comment

by:Philip van Gass
ID: 41719291
Thanks everyone for your advice.
I did some research into VLFs and found that the way to determine whether there is an excessive number of these is to issue the DBCC LOGINFO command. I read that if the number of records is in excess of 10,000 then action should be taken. In this case there were 195, so I decided not to bother shrinking the transaction log. Currently it is using 4.3% of 468 MB, which amounts to 20.124 MB.

I have recommended to the client that they change the autogrowth setting to 100 MB as you advised, and I shall investigate tuning the indexes at some point in the future.
0
 
LVL 45

Expert Comment

by:Vitor Montalvão
ID: 41741920
Philip, you have kept this question open. Do you still need any input from the Experts?
0

 

Author Comment

by:Philip van Gass
ID: 41742194
Thanks for reminding me Vitor.
Since I last posted I have increased the autogrowth for both data and log files to 100 MB as recommended, and I removed the 'Shrink Database' task from the maintenance plan; there appears to have been an improvement in response times.
But since I first posted this problem the database size has grown from 15,617 MB to 17,815 MB, and this is worrying the client as it seems rather large. I do not know how much data they are inputting each day, so I shall have to make further enquiries, and also find out whether there is data that can be deleted.
I found that the 'History Cleanup' task made no difference. Maybe shrinking and regrowing the log file as described by Scott will make a difference?
Also, I did not enable IFI as mentioned by Scott. Is this important?
0
 
LVL 45

Assisted Solution

by:Vitor Montalvão
Vitor Montalvão earned 125 total points
ID: 41742620
But since I first posted this problem the database size has grown from 15,617 MB to 17,815 MB, and this is worrying the client as it seems rather large.
Why? A transactional database isn't a static object so modifications will always happen. And you said the disk has 20GB free space so just keep an eye on it.

I do not know how much data they are inputting each day, so I shall have to make further enquiries, and also find out whether there is data that can be deleted.
If differential backups are being performed daily on the database, then you'll know how much daily change is happening in the database.
Purge is always good to have if old data isn't needed. Otherwise ask the client if old data can be archived in any other location.
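
For instance, a rough sketch against the backup history (this assumes differential backups are actually being taken; replace <db_name> with the actual database name):

-- Differential backup size approximates the data changed since the
-- last full backup.
SELECT CONVERT(date, backup_start_date) AS backup_date,
       CAST(backup_size / 1048576.0 AS decimal(12, 1)) AS diff_mb
FROM msdb.dbo.backupset
WHERE database_name = '<db_name>'
  AND type = 'I'   -- 'I' = differential database backup
ORDER BY backup_start_date;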

I found that the 'History Cleanup' task made no difference.
That task only deletes job history from the msdb database. It doesn't have any impact on your case.


Maybe shrinking and regrowing the log file as described by Scott will make a difference?
Only for performance, but you already said there were very few VLFs, so there would be almost no impact.


Also, I did not enable IFI as mentioned by Scott. Is this important?
What's IFI?
1
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41742731
IFI is "Instant File Initialization" (as I explained earlier).  It prevents SQL from having to pre-format all data space with binary zeros before any of the space can be used (i.e., db activity must pause while SQL pre-writes the new entire new data allocation first with binary zeros, before any actual data can be written).  Because it prevents that pre-write, IFI greatly helps performance: the more file growth, the more it helps.

The next step (as, again, I noted above) would be to look at indexes, first and above all getting the best clustered index on every table.
No matter what some "experts" claim, NEVER assume that clustering on identity is "always" right or even generally preferable.  That is the single biggest myth in index design.  From my tuning experience, most often the best clus index does NOT use identity as the first column.
1
 

Author Comment

by:Philip van Gass
ID: 41743695
Thanks Scott and Vitor. I shall examine the indexes and determine how many of them are using the identity field as an index.
Regarding IFI, can we still do this after I have changed the autogrowth value, or should I do it first and then change autogrowth again?
0
 
LVL 45

Assisted Solution

by:Vitor Montalvão
Vitor Montalvão earned 125 total points
ID: 41743722
Regarding the IFI, can we still do this after I have changed the autogrowth value, or should I do it and then change autogrowth again ?
This is a Windows setting, so you can change it whenever you want; it will impact only future file growth.
1
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41744295
Of course there are cases where an identity column is the best clustered index.  The key :) thing to remember is to base it on how the table is actually searched, not just on some generic rule about "always use an identity" or some nursery-rhyme-like saying of "narrow, ever-increasing, etc.".
1
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 41744296
If you would like, I can post a script to give you some data on which to base index changes.
0
 

Author Comment

by:Philip van Gass
ID: 41744523
Hi Scott. Yes please send. I'm sure it will be valuable.
0
 
LVL 69

Accepted Solution

by:
ScottPletcher earned 250 total points
ID: 41744802
Code is below.  Adjust any table inclusions/exclusions if/as you prefer before you run.

For the initial run, be sure to leave @list_missing_indexes set to 1. Once you capture that one time, you can set it to 0 for future runs if it takes too long.

It still takes knowledge to use the query results, but it contains most of what you need to do index tuning.  You might also need additional cardinality (how unique is each value) info, but that's case by case so I don't include it in the initial, generic list.  I'd be happy to give you an overall approach for a sample critical table or two if you'd like, if/when/as I have time of course.

[Btw, this code could probably be streamlined somewhat at this point -- particularly the long list of JOINs/APPLYs on the final query -- but there's not enough payback in me doing that at this time.  That query still runs very quickly overall anyway.]

SET NOCOUNT ON;
SET DEADLOCK_PRIORITY -8;

IF OBJECT_ID('tempdb.dbo.#index_specs') IS NOT NULL
    DROP TABLE dbo.#index_specs
IF OBJECT_ID('tempdb.dbo.#index_missing') IS NOT NULL
    DROP TABLE dbo.#index_missing
IF OBJECT_ID('tempdb.dbo.#index_usage') IS NOT NULL
    DROP TABLE dbo.#index_usage

CREATE TABLE dbo.#index_specs (
    object_id int NOT NULL,
    index_id int NOT NULL,
    min_compression int NULL,
    max_compression int NULL,
    drive char(1) NULL,
    alloc_mb decimal(9, 1) NOT NULL,
    alloc_gb AS CAST(alloc_mb / 1024.0 AS decimal(9, 3)),
    used_mb decimal(9, 1) NOT NULL,
    used_gb AS CAST(used_mb / 1024.0 AS decimal(9, 3)),
    rows bigint NULL,
    table_mb decimal(9, 1) NULL,
    table_gb AS CAST(table_mb / 1024.0 AS decimal(9, 2)),
    size_rank int NULL,
    approx_max_data_width bigint NULL,
    UNIQUE CLUSTERED ( object_id, index_id )
    ) --SELECT * FROM #index_specs

DECLARE @list_missing_indexes bit
DECLARE @list_missing_indexes_summary bit
DECLARE @include_schema_in_table_names bit
DECLARE @table_name_pattern sysname
DECLARE @order_by smallint --1=table_name; 2=size; -2=size DESC;.
DECLARE @format_counts smallint --1=with commas, no decimals; 2/3=with K=1000s,M=1000000s, with 2=0 dec. or 3=1 dec. places;.
DECLARE @debug smallint

--NOTE: showing missing indexes can take some time; set to 0 if you don't want to wait.
SET @list_missing_indexes = 1
SET @list_missing_indexes_summary = 0
SET @include_schema_in_table_names = 0
SET @table_name_pattern = '%'
SET @order_by = -2
SET @format_counts = 3
SET @debug = 0

PRINT 'Started @ ' + CONVERT(varchar(30), GETDATE(), 120)

DECLARE @is_compression_available bit
DECLARE @sql varchar(8000)

IF CAST(SERVERPROPERTY('ProductVersion') AS varchar(30)) LIKE '9%'
OR (CAST(SERVERPROPERTY('Edition') AS varchar(40)) NOT LIKE '%Developer%' AND 
    CAST(SERVERPROPERTY('Edition') AS varchar(40)) NOT LIKE '%Enterprise%')
    SET @is_compression_available = 0
ELSE
    SET @is_compression_available = 1

SET @sql = '
INSERT INTO #index_specs ( object_id, index_id,' +
    CASE WHEN @is_compression_available = 0 THEN '' ELSE '
    min_compression, max_compression,' END + '
    alloc_mb, used_mb, rows )
SELECT 
    base_size.object_id, 
    base_size.index_id, ' +
    CASE WHEN @is_compression_available = 0 THEN '' ELSE '
    base_size.min_compression,
    base_size.max_compression,' END + '
    (base_size.total_pages + ISNULL(internal_size.total_pages, 0)) / 128.0 AS alloc_mb,
    (base_size.used_pages + ISNULL(internal_size.used_pages, 0)) / 128.0 AS used_mb,
    base_size.row_count AS rows
FROM (
    SELECT
        dps.object_id,
        dps.index_id, ' +   
        CASE WHEN @is_compression_available = 0 THEN '' ELSE '
        ISNULL(MIN(p.data_compression), 0) AS min_compression,
        ISNULL(MAX(p.data_compression), 0) AS max_compression,' END + '    
        SUM(dps.reserved_page_count) AS total_pages,
	    SUM(dps.used_page_count) AS used_pages,
	    SUM(CASE WHEN index_id IN (0, 1) THEN dps.row_count ELSE 0 END) AS row_count
    FROM sys.dm_db_partition_stats dps ' +
    CASE WHEN @is_compression_available = 0 THEN '' ELSE '
    INNER JOIN sys.partitions p ON
        p.partition_id = dps.partition_id ' END + '
    WHERE dps.object_id > 100
    GROUP BY 
        dps.object_id,
        dps.index_id
) AS base_size
LEFT OUTER JOIN (
    SELECT 
        it.parent_id,
        SUM(dps.reserved_page_count) AS total_pages,
	    SUM(dps.used_page_count) AS used_pages
    FROM sys.internal_tables it 
    INNER JOIN sys.dm_db_partition_stats dps ON 
        dps.object_id = it.parent_id
    WHERE it.internal_type IN ( ''202'', ''204'', ''211'', ''212'', ''213'', ''214'', ''215'', ''216'' )
    GROUP BY
        it.parent_id
) AS internal_size ON base_size.index_id IN (0, 1) AND internal_size.parent_id = base_size.object_id
'
IF @debug >= 1
    PRINT @sql
EXEC(@sql)

UPDATE [is]
SET approx_max_data_width = index_cols.approx_max_data_width
FROM #index_specs [is]
INNER JOIN (
    SELECT index_col_ids.object_id, index_col_ids.index_id, 
        SUM(CASE WHEN c.max_length = -1 THEN 16 ELSE c.max_length END) AS approx_max_data_width
    FROM (
        SELECT ic.object_id, ic.index_id, ic.column_id
        FROM sys.index_columns ic
        WHERE
            ic.object_id > 100
        UNION
        SELECT i_nonclus.object_id, i_nonclus.index_id, ic_clus.column_id
        FROM sys.indexes i_nonclus
        CROSS APPLY (
            SELECT ic_clus2.column_id
            FROM sys.index_columns ic_clus2
            WHERE
                ic_clus2.object_id = i_nonclus.object_id AND
                ic_clus2.index_id = 1 AND
                ic_clus2.key_ordinal > 0 --technically superfluous, since clus index can't have include'd cols anyway
        ) AS ic_clus
        WHERE
            i_nonclus.object_id > 100 AND
            i_nonclus.index_id > 1
    ) AS index_col_ids
    INNER JOIN sys.columns c ON c.object_id = index_col_ids.object_id AND c.column_id = index_col_ids.column_id
    GROUP BY index_col_ids.object_id, index_col_ids.index_id    
) AS index_cols ON index_cols.object_id = [is].object_id AND index_cols.index_id = [is].index_id

UPDATE ispec
SET table_mb = ispec_ranking.table_mb,
    size_rank = ispec_ranking.size_rank
FROM #index_specs ispec
INNER JOIN (
    SELECT *, ROW_NUMBER() OVER(ORDER BY table_mb DESC, rows DESC, OBJECT_NAME(object_id)) AS size_rank
    FROM (
        SELECT object_id, SUM(alloc_mb) AS table_mb, MAX(rows) AS rows
        FROM #index_specs
        GROUP BY object_id
    ) AS ispec_allocs        
) AS ispec_ranking ON
    ispec_ranking.object_id = ispec.object_id

IF @list_missing_indexes = 1
BEGIN
    SELECT
        IDENTITY(int, 1, 1) AS ident,
        DB_NAME(mid.database_id) AS Db_Name,
        CONVERT(varchar(10), GETDATE(), 120) AS capture_date,
        ispec.size_rank, ispec.table_mb,
        CASE WHEN @format_counts = 1 THEN REPLACE(CONVERT(varchar(20), CAST(dps.row_count AS money), 1), '.00', '')
             WHEN @format_counts = 2 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
                 WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS int) AS varchar(20)) + ca1.row_count_suffix
             WHEN @format_counts = 3 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
                 WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS decimal(14, 1)) AS varchar(20)) + ca1.row_count_suffix
             ELSE CAST(dps.row_count AS varchar(20)) END AS row_count,
        CASE WHEN @include_schema_in_table_names = 1 THEN OBJECT_SCHEMA_NAME(mid.object_id /*, mid.database_id*/) + '.' 
             ELSE '' END + OBJECT_NAME(mid.object_id /*, mid.database_id*/) AS Table_Name,
        mid.equality_columns, mid.inequality_columns, mid.included_columns,        
        user_seeks, user_scans, cj1.max_days_active, unique_compiles, 
        last_user_seek, last_user_scan, 
        CAST(avg_total_user_cost AS decimal(9, 2)) AS avg_total_user_cost,
        CAST(avg_user_impact AS decimal(9, 2)) AS [avg_user_impact%],
        system_seeks, system_scans, last_system_seek, last_system_scan,
        CAST(avg_total_system_cost AS decimal(9, 2)) AS avg_total_system_cost,
        CAST(avg_system_impact AS decimal(9, 2)) AS [avg_system_impact%],
        mid.statement, mid.object_id, mid.index_handle
    INTO #index_missing
    FROM sys.dm_db_missing_index_details mid /*WITH (NOLOCK)*/
    CROSS JOIN (
        SELECT DATEDIFF(DAY, create_date, GETDATE()) AS max_days_active FROM sys.databases /*WITH (NOLOCK)*/ WHERE name = 'tempdb'
    ) AS cj1
    LEFT OUTER JOIN sys.dm_db_missing_index_groups mig /*WITH (NOLOCK)*/ ON
        mig.index_handle = mid.index_handle
    LEFT OUTER JOIN sys.dm_db_missing_index_group_stats migs /*WITH (NOLOCK)*/ ON
        migs.group_handle = mig.index_group_handle
    LEFT OUTER JOIN sys.dm_db_partition_stats dps /*WITH (NOLOCK)*/ ON
        dps.object_id = mid.object_id AND
        dps.index_id IN (0, 1)
    CROSS APPLY (
        SELECT CASE WHEN dps.row_count >= 1000000 THEN 'M' WHEN dps.row_count >= 1000 THEN 'K' ELSE '' END AS row_count_suffix
    ) AS ca1
    OUTER APPLY (
        SELECT ispec.table_mb, ispec.size_rank
        FROM dbo.#index_specs ispec
        WHERE
            ispec.object_id = mid.object_id AND
            ispec.index_id IN (0, 1)
    ) AS ispec
    WHERE
        1 = 1 
        AND mid.database_id = DB_ID()
        AND OBJECT_NAME(mid.object_id) LIKE @table_name_pattern
    ORDER BY
        --avg_total_user_cost * (user_seeks + user_scans) DESC,
        Db_Name,
        CASE WHEN @order_by IN (-2, 2) THEN ispec.size_rank * -SIGN(@order_by) ELSE 0 END,
        Table_Name,
        equality_columns, inequality_columns,
        user_seeks DESC
    SELECT *
    FROM #index_missing
    ORDER BY ident
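    --NOTE: the summary below uses a string splitter, DBA.dbo.DelimitedSplit8K;
    --leave @list_missing_indexes_summary = 0 if that function isn't installed.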
    IF @list_missing_indexes_summary = 1
    BEGIN
        SELECT 
            derived.Size_Rank, derived.table_mb,
            derived.Table_Name, derived.Equality_Column, derived.Equality#, derived.User_Seeks, 
            ISNULL((SELECT SUM(user_seeks)
             FROM #index_missing im2
             CROSS APPLY DBA.dbo.DelimitedSplit8K (inequality_columns, ',') ds
             WHERE im2.Size_Rank = derived.Size_Rank AND
                 LTRIM(RTRIM(ds.Item)) = derived.Equality_Column
            ), 0) AS Inequality_Seeks,
            derived.User_Scans, derived.Last_User_Seek, derived.Last_User_Scan,
            derived.Max_Days_Active, derived.Avg_Total_User_Cost, derived.Approx_Total_Cost
        FROM (
            SELECT 
                Size_Rank, MAX(table_mb) AS table_mb, Table_Name, LTRIM(RTRIM(ds.Item)) AS Equality_Column, 
                SUM(user_seeks) AS User_Seeks, SUM(user_scans) AS User_Scans,
                MAX(last_user_seek) AS Last_User_Seek, MAX(last_user_scan) AS Last_User_Scan,
                MIN(max_days_active) AS Max_Days_Active,
                MAX(avg_total_user_cost) AS Avg_Total_User_Cost,
                (SUM(user_seeks) + SUM(user_scans)) * MAX(avg_total_user_cost) AS Approx_Total_Cost,
                MAX(ds.ItemNumber) AS Equality#
            FROM #index_missing
            CROSS APPLY DBA.dbo.DelimitedSplit8K (equality_columns, ',') ds
            WHERE equality_columns IS NOT NULL
            GROUP BY size_rank, Table_Name, LTRIM(RTRIM(ds.Item))
        ) AS derived
        ORDER BY Size_Rank, Table_Name, Approx_Total_Cost DESC        
    END --IF
END --IF


PRINT 'Index Usage Stats @ ' + CONVERT(varchar(30), GETDATE(), 120)

-- list index usage stats (seeks, scans, etc.)
SELECT 
    IDENTITY(int, 1, 1) AS ident,
    DB_NAME() AS db_name,
    --ispec.drive AS drv,
    ispec.size_rank, ispec.alloc_mb - ispec.used_mb AS unused_mb, 
    ispec.table_gb, ispec.alloc_gb AS index_gb,
    SUBSTRING('NY', CAST(i.is_primary_key AS int) + CAST(i.is_unique_constraint AS int) + 1, 1) +
    CASE WHEN i.is_unique = CAST(i.is_primary_key AS int) + CAST(i.is_unique_constraint AS int) THEN '' 
         ELSE '.' + SUBSTRING('NY', CAST(i.is_unique AS int) + 1, 1) END AS [Uniq?],
    REPLACE(i.name, oa1.table_name, '~') AS index_name,
    CASE WHEN @include_schema_in_table_names = 1 THEN OBJECT_SCHEMA_NAME(i.object_id /*, DB_ID()*/) + '.' 
         ELSE '' END + OBJECT_NAME(i.object_id /*, i.database_id*/) AS Table_Name,
    CASE WHEN @format_counts = 1 THEN REPLACE(CONVERT(varchar(20), CAST(dps.row_count AS money), 1), '.00', '')
         WHEN @format_counts = 2 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
             WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS int) AS varchar(20)) + ca1.row_count_suffix
         WHEN @format_counts = 3 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
             WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS decimal(14, 1)) AS varchar(20)) + ca1.row_count_suffix
         ELSE CAST(dps.row_count AS varchar(20)) END AS row_count,
    --fc_row_count.formatted_value AS row_count,
    i.index_id,
    ispec.approx_max_data_width AS [data_width],
    SUBSTRING(key_cols, 3, 8000) AS key_cols, SUBSTRING(nonkey_cols, 3, 8000) AS nonkey_cols,
    ius.user_seeks, ius.user_scans, --ius.user_seeks + ius.user_scans AS total_reads,
    ius.user_lookups, ius.user_updates,
    dios.leaf_delete_count + dios.leaf_insert_count + dios.leaf_update_count as leaf_mod_count,
    dios.range_scan_count, dios.singleton_lookup_count,
    DATEDIFF(DAY, STATS_DATE ( i.object_id , i.index_id ), GETDATE()) AS stats_days_old,
    DATEDIFF(DAY, CASE 
        WHEN o.create_date > cj1.sql_startup_date AND o.create_date > o.modify_date THEN o.create_date 
        WHEN o.modify_date > cj1.sql_startup_date AND o.modify_date > o.create_date THEN o.modify_date 
        ELSE cj1.sql_startup_date END, GETDATE()) AS max_days_active,
    dios.row_lock_count, dios.row_lock_wait_in_ms,
    dios.page_lock_count, dios.page_lock_wait_in_ms,    
    ius.last_user_seek, ius.last_user_scan,
    ius.last_user_lookup, ius.last_user_update,
    fk.Reference_Count AS fk_ref_count,
    i.fill_factor,
    ius2.row_num,
    CASE 
        WHEN ispec.max_compression IS NULL THEN '(Not applicable)'
        WHEN ispec.max_compression = 2 THEN 'Page'
        WHEN ispec.max_compression = 1 THEN 'Row'
        WHEN ispec.max_compression = 0 THEN ''
        ELSE '(Unknown)' END AS max_compression,
    FILEGROUP_NAME(i.data_space_id) AS main_fg_name,
    ius.system_seeks, ius.system_scans, ius.system_lookups, ius.system_updates,
    ius.last_system_seek, ius.last_system_scan, ius.last_system_lookup, ius.last_system_update,
    GETDATE() AS capture_date
INTO #index_usage
FROM sys.indexes i /*WITH (NOLOCK)*/
INNER JOIN sys.objects o /*WITH (NOLOCK)*/ ON
    o.object_id = i.object_id
CROSS JOIN (
    SELECT create_date AS sql_startup_date FROM sys.databases /*WITH (NOLOCK)*/ WHERE name = 'tempdb'
) AS cj1
OUTER APPLY (
    SELECT CASE WHEN EXISTS(SELECT 1 FROM #index_specs [is] WHERE [is].object_id = i.object_id AND [is].index_id = 1)
        THEN 1 ELSE 0 END AS has_clustered_index
) AS cj2
LEFT OUTER JOIN dbo.#index_specs ispec ON
    ispec.object_id = i.object_id AND
    ispec.index_id = i.index_id
OUTER APPLY (
    SELECT
        ', ' + COL_NAME(ic.object_id, ic.column_id)
    FROM sys.index_columns ic /*WITH (NOLOCK)*/
    WHERE
        ic.key_ordinal > 0 AND
        ic.object_id = i.object_id AND
        ic.index_id = i.index_id
    ORDER BY
        ic.key_ordinal
    FOR XML PATH('')
) AS key_cols (key_cols)
OUTER APPLY (
    SELECT
        ', ' + COL_NAME(ic.object_id, ic.column_id)
    FROM sys.index_columns ic /*WITH (NOLOCK)*/
    WHERE
        ic.key_ordinal = 0 AND
        ic.object_id = i.object_id AND
        ic.index_id = i.index_id
    ORDER BY
        COL_NAME(ic.object_id, ic.column_id)
    FOR XML PATH('') 
) AS nonkey_cols (nonkey_cols)
LEFT OUTER JOIN sys.dm_db_partition_stats dps /*WITH (NOLOCK)*/ ON
    dps.object_id = i.object_id AND
    dps.index_id = i.index_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats ius /*WITH (NOLOCK)*/ ON
    ius.database_id = DB_ID() AND
    ius.object_id = i.object_id AND
    ius.index_id = i.index_id
LEFT OUTER JOIN (
    SELECT
        database_id, object_id, MAX(user_scans) AS user_scans,
        ROW_NUMBER() OVER (ORDER BY MAX(user_scans) DESC) AS row_num --user_scans|user_seeks+user_scans
    FROM sys.dm_db_index_usage_stats /*WITH (NOLOCK)*/
    WHERE
        database_id = DB_ID()
        --AND index_id > 0
    GROUP BY
        database_id, object_id
) AS ius2 ON
    ius2.database_id = DB_ID() AND
    ius2.object_id = i.object_id
LEFT OUTER JOIN (
    SELECT
        referenced_object_id, COUNT(*) AS Reference_Count
    FROM sys.foreign_keys /*WITH (NOLOCK)*/
    WHERE
        is_disabled = 0
    GROUP BY
        referenced_object_id
) AS fk ON
    fk.referenced_object_id = i.object_id
LEFT OUTER JOIN (
    SELECT *
    FROM sys.dm_db_index_operational_stats ( DB_ID(), NULL, NULL, NULL )
) AS dios ON
    dios.object_id = i.object_id AND
    dios.index_id = i.index_id
OUTER APPLY (
    SELECT OBJECT_NAME(i.object_id/*, DB_ID()*/) AS table_name
) AS oa1
CROSS APPLY (
    SELECT CASE WHEN dps.row_count >= 1000000 THEN 'M' WHEN dps.row_count >= 1000 THEN 'K' ELSE '' END AS row_count_suffix
) AS ca1
WHERE
    i.object_id > 100 AND
    i.is_hypothetical = 0 AND
    i.type IN (0, 1, 2) AND
    o.type NOT IN ( 'IF', 'IT', 'TF', 'TT' ) AND
    (
     o.name LIKE @table_name_pattern AND
     o.name NOT LIKE 'dtprop%' AND
     o.name NOT LIKE 'filestream[_]' AND
     o.name NOT LIKE 'MSpeer%' AND
     o.name NOT LIKE 'MSpub%' AND
     --o.name NOT LIKE 'queue[_]%' AND 
     o.name NOT LIKE 'sys%' 
    )
ORDER BY
    db_name,
    --i.index_id,
    --ius.user_seeks - ius.user_scans,
    CASE WHEN @order_by IN (-2, 2) THEN ispec.size_rank * -SIGN(@order_by) ELSE 0 END,
    --ius.user_scans DESC,
    --ius2.row_num, --user_scans&|user_seeks
    table_name, 
    -- list heap/clustered index first, then other index(es)
    CASE WHEN i.index_id IN (0, 1) THEN 1 ELSE 2 END, 
    key_cols

SELECT *
FROM #index_usage
ORDER BY ident

PRINT 'Ended @ ' + CONVERT(varchar(30), GETDATE(), 120)

SET DEADLOCK_PRIORITY NORMAL


0
 
LVL 69

Expert Comment

by:ScottPletcher
ID: 41750445
As I said, if you would like to post the results of the queries above, I can help you with a critical table or two.
0
 

Author Comment

by:Philip van Gass
ID: 41751911
Hi Scott. Yes, it would be appreciated if you could help me with the interpretation of the results. I am going to run your query tonight when the users are offline and post the result in my next comment.
0
 

Author Comment

by:Philip van Gass
ID: 41752473
Scott, I ran your query against the customer database. Please view the attachment and explain how you would analyse the results.
Scott-s-query-results.xlsx
0
 
LVL 69

Assisted Solution

by:ScottPletcher
ScottPletcher earned 250 total points
ID: 41752512
Wow, overall the indexes look excellent.  That's unusual!  I'll have a few minor recommendations, and some explanation of what we're looking at.  But, those index stats tell me that any performance issues are far more likely to be related to RAM and I/O than to index definitions.
0
 

Author Closing Comment

by:Philip van Gass
ID: 41756173
I have to close this long-running question now. Thanks to you all. The client has been very quiet and has not complained about performance issues for a long time. I shall be studying Scott's very complicated query with a view to the future, and may raise another question if I need some clarification.
0
