Indexes - is EXEC sp_updatestats sufficient?


I have a couple of large tables which are constantly growing.
I need to increase performance, as data access is slow.

I try to run EXEC sp_updatestats whenever I can.
Does this create indexes, and will it help with performance?

If not, please can someone help me with the code to run to create indexes?
Version: SQL Server 2012 Business Intelligence edition.
The specs are currently quite low and we plan to upgrade in future, but it's a dedicated DB server VM with 1 GB RAM (I know!) and an Intel Xeon quad-core 2.5 GHz CPU.
Running on Windows Server 2008 R2.
I need to squeeze as much performance as possible out of it.
Scott Pletcher (Senior DBA) commented:
With so little RAM(!), you must get the correct clustered indexes on the table.  I suggest waiting to rebuild indexes until you're sure what the clus index should be, since a new clus index would cause a rebuild anyway.  Since you query mostly by datetime, that will very likely be the best clus index for many tables.

Please run the following code and post the results:


DECLARE @list_missing_indexes bit
DECLARE @table_name_pattern sysname

SET @list_missing_indexes = 1
SET @table_name_pattern = '%'

PRINT 'Started @ ' + CONVERT(varchar(30), GETDATE(), 120)

--SELECT create_date FROM sys.databases WITH (NOLOCK) WHERE name = N'tempdb'

IF @list_missing_indexes = 1
BEGIN
    SELECT
        GETDATE() AS capture_date,
        DB_NAME(mid.database_id) AS Db_Name,
        OBJECT_NAME(mid.object_id /*, mid.database_id*/) AS Table_Name,
        mid.equality_columns, mid.inequality_columns, mid.included_columns,
        migs.*,
        mid.statement, mid.object_id, mid.index_handle
    FROM sys.dm_db_missing_index_details mid WITH (NOLOCK)
    LEFT OUTER JOIN sys.dm_db_missing_index_groups mig WITH (NOLOCK) ON
        mig.index_handle = mid.index_handle
    LEFT OUTER JOIN sys.dm_db_missing_index_group_stats migs WITH (NOLOCK) ON
        migs.group_handle = mig.index_group_handle
    WHERE
        1 = 1
        AND mid.database_id = DB_ID()
        AND OBJECT_NAME(mid.object_id) LIKE @table_name_pattern
    ORDER BY
        --avg_total_user_cost * (user_seeks + user_scans) DESC,
        Db_Name, Table_Name, equality_columns, inequality_columns
END --IF

-- list index usage stats (seeks, scans, etc.)
SELECT
    ius2.row_num, DB_NAME() AS db_name, i.name AS index_name,
    OBJECT_NAME(i.object_id/*, DB_ID()*/) AS table_name,
    i.index_id, --ius.user_seeks + ius.user_scans AS total_reads,
    dps.row_count,
    SUBSTRING(key_cols, 3, 8000) AS key_cols, SUBSTRING(nonkey_cols, 3, 8000) AS nonkey_cols,
    ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates,
    ius.last_user_seek, ius.last_user_scan, ius.last_user_lookup, ius.last_user_update,
    fk.Reference_Count AS fk_ref_count,
    FILEGROUP_NAME(i.data_space_id) AS filegroup_name,
    ca1.sql_up_days AS days_in_use,
    ius.system_seeks, ius.system_scans, ius.system_lookups, ius.system_updates,
    ius.last_system_seek, ius.last_system_scan, ius.last_system_lookup, ius.last_system_update
FROM sys.indexes i WITH (NOLOCK)
INNER JOIN sys.objects o WITH (NOLOCK) ON
    o.object_id = i.object_id
CROSS JOIN (
    SELECT DATEDIFF(DAY, create_date, GETDATE()) AS sql_up_days FROM sys.databases WHERE name = 'tempdb'
) AS ca1
OUTER APPLY (
    SELECT
        ', ' + COL_NAME(ic.object_id, ic.column_id)
    FROM sys.index_columns ic
    WHERE
        ic.key_ordinal > 0 AND
        ic.object_id = i.object_id AND
        ic.index_id = i.index_id
    ORDER BY
        ic.key_ordinal
    FOR XML PATH('')
) AS key_cols (key_cols)
OUTER APPLY (
    SELECT
        ', ' + COL_NAME(ic.object_id, ic.column_id)
    FROM sys.index_columns ic
    WHERE
        ic.key_ordinal = 0 AND
        ic.object_id = i.object_id AND
        ic.index_id = i.index_id
    ORDER BY
        COL_NAME(ic.object_id, ic.column_id)
    FOR XML PATH('')
) AS nonkey_cols (nonkey_cols)
LEFT OUTER JOIN sys.dm_db_partition_stats dps WITH (NOLOCK) ON
    dps.object_id = i.object_id AND
    dps.index_id = i.index_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats ius WITH (NOLOCK) ON
    ius.database_id = DB_ID() AND
    ius.object_id = i.object_id AND
    ius.index_id = i.index_id
LEFT OUTER JOIN (
    SELECT
        database_id, object_id, MAX(user_scans) AS user_scans,
        ROW_NUMBER() OVER (ORDER BY MAX(user_scans) DESC) AS row_num --user_scans|user_seeks+user_scans
    FROM sys.dm_db_index_usage_stats WITH (NOLOCK)
    WHERE
        database_id = DB_ID()
        --AND index_id > 0
    GROUP BY
        database_id, object_id
) AS ius2 ON
    ius2.database_id = DB_ID() AND
    ius2.object_id = i.object_id
LEFT OUTER JOIN (
    SELECT
        referenced_object_id, COUNT(*) AS Reference_Count
    FROM sys.foreign_keys WITH (NOLOCK)
    WHERE
        is_disabled = 0
    GROUP BY
        referenced_object_id
) AS fk ON
    fk.referenced_object_id = i.object_id
WHERE
    --EXISTS(SELECT 1 FROM sys.indexes i2 WHERE i2.object_id = i.object_id AND i2.index_id = 0) AND --$T
    i.object_id > 100 AND
    i.is_hypothetical = 0 AND
    i.type IN (0, 1, 2) AND
    o.type NOT IN ( 'IF', 'IT', 'TF', 'TT' ) AND
    ( o.name LIKE @table_name_pattern AND
      o.name NOT LIKE 'dtprop%' AND
      o.name NOT LIKE 'filestream[_]%' AND
      o.name NOT LIKE 'MSpeer%' AND
      o.name NOT LIKE 'MSpub%' AND
      o.name NOT LIKE 'queue[_]%' AND
      o.name NOT LIKE 'sys%' )
    --AND OBJECT_NAME(i.object_id, DB_ID()) IN ('tbl1', 'tbl2', 'tbl3')
ORDER BY
    --row_count DESC,
    --ius.user_scans DESC,
    --ius2.row_num, --user_scans&|user_seeks
    -- list clustered index first, if any, then other index(es)
    db_name, table_name, CASE WHEN i.index_id IN (0, 1) THEN 1 ELSE 2 END, index_name

PRINT 'Ended @ ' + CONVERT(varchar(30), GETDATE(), 120)
Carl Tawn (Systems and Integration Developer) commented:
No, sp_updatestats won't do anything with the indexes, it simply records statistics about the spread of values across one or more columns in a table.

Do you currently have any indexes on your tables? They may simply need rebuilding. If not, you will need to look at your query plans to determine where indexes may be beneficial. You will of course need to take into account the extra disk capacity needed to store any indexes, and also potential extra tempdb space needed to rebuild them.
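A quick way to see what you currently have is to list the existing indexes per table (a minimal sketch; it just reads the catalog views):

```sql
-- List existing indexes for all user tables in the current database
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,   -- NULL for a heap
       i.type_desc,                              -- HEAP / CLUSTERED / NONCLUSTERED
       i.is_primary_key
FROM sys.indexes i
INNER JOIN sys.objects o ON o.object_id = i.object_id
WHERE o.type = 'U'                               -- user tables only
ORDER BY table_name, i.index_id;
```

Tables showing only HEAP, or only a clustered PK on an identity column, are the first candidates to look at.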

For starters you can run the following script against each user database to check the fragmentation levels of any existing indexes:
SELECT OBJECT_NAME(ind.object_id) AS TableName, ind.name AS IndexName,
    indexstats.index_type_desc AS IndexType,
    indexstats.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) indexstats
INNER JOIN sys.indexes ind
    ON ind.object_id = indexstats.object_id
    AND ind.index_id = indexstats.index_id
WHERE indexstats.avg_fragmentation_in_percent > 30
ORDER BY indexstats.avg_fragmentation_in_percent DESC


websss (Author) commented:

>Do you currently have any indexes on your tables?
No, just the primary key indexes that were automatically created.

You said
> If not, you will need to look at your query plans to determine where indexes may be beneficial.

Lots of my queries are based on dates, i.e. we want to get the latest update (date/time),
and also to show date-specific data.

Here are the results of your script
I would appreciate your feedback and what you would do next?
Carl Tawn (Systems and Integration Developer) commented:
You need to start by rebuilding those indexes. Anything where the fragmentation is above around 30% you need to rebuild. You can do the rebuild by running the following for each table:
USE <your_database>;

ALTER INDEX ALL ON <your_table_name> REBUILD;


Bear in mind that for BI edition this will be an OFFLINE operation, meaning the tables will be unavailable while the indexes rebuild - so you may need to look at scheduling some downtime, depending on your operational requirements.

You should also set up a SQL Agent job to periodically check fragmentation levels and run a REBUILD/REORGANIZE as needed.
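As a sketch of such a job step, following the common guideline of REORGANIZE between roughly 5% and 30% fragmentation and REBUILD above that (the thresholds and the dynamic-SQL approach here are assumptions, not fixed rules):

```sql
-- Build and run REORGANIZE/REBUILD statements based on fragmentation
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id))
    + N'.' + QUOTENAME(OBJECT_NAME(i.object_id))
    + CASE WHEN s.avg_fragmentation_in_percent > 30
           THEN N' REBUILD;' ELSE N' REORGANIZE;' END + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s
INNER JOIN sys.indexes i
    ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;   -- skip heaps

EXEC sys.sp_executesql @sql;
```

With only 1 GB of RAM, schedule this off-hours; REBUILD in particular is resource-hungry.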
websss (Author) commented:
Thanks guys

Scott, sorry for the arrival in family
Attached are the two results you asked for.
I'm not exactly sure what I'm looking at, sorry, but I'm grateful for the advice.
Scott Pletcher (Senior DBA) commented:
Let's focus on the largest tables first, since they have the biggest impact:

tblEvents_Common  =~  1,962,316 rows
tblCommonTrackingData  =~  1,205,914 rows
tblMSMQ_TrackingMaster  =~  926,256 rows
wlt_tblEvents_CommonEvents_Logged  =~  411,089 rows

The second query lists index usage.  Unfortunately SQL has only been up continuously for 4 days, but, even so, some needed index changes are clear.

The clustered index is by far the most important index for performance.  Some need to be changed.  You may get some pushback, since too many people have been fed the myth about "always" using an identity for the clustering key.

But for best performance, you need to change the clustered indexes as follows:

tblCommonTrackingData  ==  vpkDeviceID
tblTrip_Master  ==  vpkDeviceID
wlt_tblEvents_CommonEvents_Logged  ==  vpkDeviceID

If you want, you can add that table's own identity column, which is the current clustering key, as the second/last column of the new clustering index so you can explicitly define the clus index as "UNIQUE", but it's not really necessary; SQL will make each key unique itself when it needs to.
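For example (hypothetical names: the identity column is assumed here to be called TrackingID; only the table and vpkDeviceID come from this thread):

```sql
-- Cluster on the column you query by; appending the identity column
-- makes the key unique, so SQL doesn't have to add a hidden uniquifier
CREATE UNIQUE CLUSTERED INDEX CL_tblCommonTrackingData
    ON dbo.tblCommonTrackingData (vpkDeviceID, TrackingID);
```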

We'll need to look at the queries that use these tables to determine if they need a different clus index.

The basic steps to change the clus index are:
1) script out drops and creates for all existing index(es)
2) modify the create scripts to make the existing pk clus index a pk nonclus index (don't run them yet, just make the script changes)

3) drop all nonclus index(es)

4) drop the current clus index

5) create the new clus index

6) recreate the pk as nonclus

7) recreate all other non clus index(es)
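Sketched out for a single hypothetical table (every name below is a placeholder; script your real indexes first, per steps 1 and 2):

```sql
-- 3) drop all nonclustered indexes
DROP INDEX IX_MyTable_OtherCol ON dbo.MyTable;

-- 4) drop the current clustered PK
ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;

-- 5) create the new clustered index on the main query column
CREATE CLUSTERED INDEX CL_MyTable ON dbo.MyTable (EventDateTime);

-- 6) recreate the PK as nonclustered
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable
    PRIMARY KEY NONCLUSTERED (MyTableID);

-- 7) recreate the other nonclustered index(es)
CREATE NONCLUSTERED INDEX IX_MyTable_OtherCol ON dbo.MyTable (OtherCol);
```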

Someone will probably say "We don't want to change the clus index; what else can we do to improve performance?".  You would have to create lots of nonclus, covering indexes, which would eventually make individual queries run faster, but overall the system would have a much higher load than it does now.
websss (Author) commented:
Thanks Scott

I have been reading lots and watching lots of videos, and I can now see where your advice is coming from.
I've just done the one table for now: I removed the primary key and created 2 indexes.
Can you comment on whether this is correct (and whether there is indeed no primary key needed now)?
table structure
I did some performance tests before and after by running an SP.
Before the changes I got 17 secs for the first run, then between 8-11 secs for subsequent runs.

After the updates (and restarting the SQL service) I got 3 seconds for the first run and 1 sec for all other runs.

Thank you so much for your help, I feel really empowered now!
websss (Author) commented:
I just tested my web app
Before the upgrades it was taking 4 mins and 48 seconds to log in
It now takes 10 seconds to login!!!!

I'm hoping that primary key question I asked isn't needed, as I'm loving the new speeds.
websss (Author) commented:
You mentioned

We'll need to look at the queries that use these tables to determine if they need a different clus index.

Is there any easy way to see what queries are using these?
Scott Pletcher (Senior DBA) commented:
Great!  Glad it's working out so well.  [The biggest thing is that you were willing to try a different clus index key.  Too many people won't differ from the "expert recommendation" to "always" use identity as the clus key.]

To find uses of the other two tables, you can scan for those table names in a system view that holds all db source code.  Naturally this only applies to code stored in the db, such as stored procs, functions, triggers, etc.:

USE [db_name]

SELECT OBJECT_NAME(object_id) AS object_name
FROM sys.sql_modules
WHERE
    definition LIKE '%tblEvents[_]Common%' OR
    definition LIKE '%tblMSMQ_TrackingMaster%'

There's nothing per se wrong with having a PK on the table.  You can have one if you want it.  Just make sure the PK is not clustered.  Create the proper clustered index first, then add the PK.  [You should always create a clus index first anyway.]
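A minimal sketch of that ordering (the table name is from this thread; the column names are assumed):

```sql
-- Clustered index first...
CREATE CLUSTERED INDEX CL_tblTrip_Master ON dbo.tblTrip_Master (vpkDeviceID);

-- ...then the PK, explicitly NONCLUSTERED
ALTER TABLE dbo.tblTrip_Master ADD CONSTRAINT PK_tblTrip_Master
    PRIMARY KEY NONCLUSTERED (TripID);   -- TripID: assumed identity column
```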

>> Before the upgrades it was taking 4 mins and 48 seconds to log in
It now takes 10 seconds to login!!!! <<

That's a FANTASTIC improvement ... but, frankly, 10 secs (just) to login is still too long, unless an awful lot is going on in the login process.
websss (Author) commented:
Thanks scott
you've earned the 500 points, so I'll start a new question for the 10-second login time
But before I do, what data would you need to see?
i.e. should I run a SQL Profiler trace and then paste the results into the new question?

There is quite a bit more than just a username and password lookup, but I would like to get it under that time

In chrome half the time seems to be sending the request (the animation on the tab goes one way) and the other half is receiving data (the animation changes direction)
Not sure if this is relevant?
Scott Pletcher (Senior DBA) commented:
I'd address all the other large tables first.  Until you get the correct clustered indexes, you're pushing a boulder up a hill anyway.  Once all large tables have the best clus index, then you can work further on the logon process, if needed.