Bird757 (South Africa) asked on:

SQL Indexes - 2X the database size - is this a problem?

I have a SQL database with one primary data table of about 20 columns, most of which are used. Index optimisation over a few months has resulted in 5 indexes. My concern is that the database is 50 GB while the indexes take a further 95 GB. This simply feels wrong to me.

Is there a tool that can analyse the existing indexes, and the frequency of use of the included elements and suggest more efficient formats?

I have used Profiler to capture transactions, and used the Database Engine Tuning Advisor to recommend indexes. I have then checked these against my knowledge of the queries and combined them where logic says I should. This has been an ongoing process and the database is fast; no query takes longer than a second or two. But having the indexes take up more "space" than the data suggests I have missed a trick.
ASKER CERTIFIED SOLUTION from jogos (Belgium)
Bird757 (ASKER):
Thank you. That points me at which indexes are being used and how often. From there I should be able to get the index sizes down.
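For reference, the per-index usage counts come from SQL Server's standard DMVs; a minimal sketch of that kind of query (the DMV and column names are standard, and the counters reset when the service restarts):

```sql
-- Reads (seeks/scans/lookups) vs. writes per index since the last restart.
-- Indexes with many user_updates but few reads are candidates for removal.
SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        i.name                    AS index_name,
        s.user_seeks,
        s.user_scans,
        s.user_lookups,
        s.user_updates            AS writes
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON i.object_id = s.object_id
       AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
```

An index that never appears in this view, or shows only writes, is costing 95 GB of space for nothing.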
Important: don't look at every index in the same way.

The clustered index (basically your table, hence the size!) is the most important place to start. The choice of the clustered key determines whether inserts of new records will fragment your index or not. The size of the clustered key also defines the size of all other indexes, because every non-clustered index row carries it.
https://www.simple-talk.com/sql/learn-sql-server/effective-clustered-indexes/
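To illustrate that size point, here is a hedged sketch (the table and column names are made up for illustration): with five non-clustered indexes, every extra byte in the clustered key is paid for five times over.

```sql
-- A 16-byte GUID clustered key is copied into every non-clustered index row,
-- and random NEWID() values fragment the clustered index on insert.
CREATE TABLE dbo.OrdersWide
(
    OrderGuid  UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    OrderDate  DATETIME NOT NULL,
    CONSTRAINT PK_OrdersWide PRIMARY KEY CLUSTERED (OrderGuid)  -- 16 bytes per index row
);

-- A 4-byte ever-increasing INT keeps all other indexes smaller and
-- appends new rows at the end of the clustered index (no fragmentation).
CREATE TABLE dbo.OrdersNarrow
(
    OrderId    INT IDENTITY(1,1) NOT NULL,
    OrderDate  DATETIME NOT NULL,
    CONSTRAINT PK_OrdersNarrow PRIMARY KEY CLUSTERED (OrderId)  -- 4 bytes per index row
);
```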

Non-clustered indexes can be suggested by an optimiser, but it is not always good to blindly create the suggested indexes; it can be better to extend an existing index so that it also covers the new suggestion. And there is always a trade-off: do you add more columns to the index key or the included columns, so the index is all the query needs and the table is never touched during the query? Or do you limit the index in size for good filtering and then fetch the rest of the data from the table?
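That trade-off in DDL form (a sketch with hypothetical table and column names):

```sql
-- Narrow index: good filtering and small on disk, but any column not in the
-- index is fetched from the table via key lookups on the clustered key.
CREATE INDEX IX_Employee_Dept
    ON dbo.Employee (DepartmentId);

-- Covering index: INCLUDE stores the extra columns only at the leaf level,
-- so a query on these columns never touches the base table -- at the cost
-- of a larger index that must be maintained on every write.
CREATE INDEX IX_Employee_Dept_Covering
    ON dbo.Employee (DepartmentId)
    INCLUDE (Name, Birthday, MaritalStatus);
```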

Don't forget that an optimiser can only suggest an index based on the usage of your database; the gain can also lie in not selecting fields you don't really need (at that point), so your query can do very well with a smaller index. For an example, see https://www.experts-exchange.com/questions/28163616/Recursive-Query-CTE-Performing-very-slow.html
So a query for current employees could gain from adding the active column to the index and adding the birthday and marital status as included columns for that index. But maybe it's smarter not to add those columns to the index and instead remove them from the query, when the only things you need are the name and department (SELECT * is the killer).
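The two choices for that current-employees example, side by side (hypothetical names again):

```sql
-- Option 1: widen the index so the query as written is fully covered.
CREATE INDEX IX_Employee_Active
    ON dbo.Employee (Active)
    INCLUDE (Name, DepartmentId, Birthday, MaritalStatus);

-- Option 2 (often smarter): trim the query instead. If only name and
-- department are needed, the extra INCLUDE columns buy nothing and
-- just inflate the index on disk and in cache.
SELECT Name, DepartmentId        -- not SELECT *
FROM   dbo.Employee
WHERE  Active = 1;
```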

For limiting index size, also take a look at filtered indexes; they can be very good for optimising frequently fired queries while not taking up much space (disk space, disk I/O and cache): http://msdn.microsoft.com/en-us/library/cc280372(v=sql.100).aspx
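A filtered-index sketch for the same hypothetical employees table: only the rows matching the WHERE clause are stored, so a query for current employees gets a small, cache-friendly index even when the table holds mostly historical rows.

```sql
-- Only rows with Active = 1 are stored in this index; queries that filter
-- on Active = 1 and DepartmentId are covered by a fraction of the space
-- a full-table index would need.
CREATE INDEX IX_Employee_Current
    ON dbo.Employee (DepartmentId)
    INCLUDE (Name)
    WHERE Active = 1;
```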

Multiple indexes also mean more work for every insert, update and delete. If you include a column that changes daily in multiple indexes, that can be very costly.
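That write overhead can be measured rather than guessed; one way is the standard operational-stats DMV (counters reset at restart, and the four arguments are database, object, index and partition filters, NULL meaning all):

```sql
-- Physical insert/update/delete activity per index since the last restart:
-- indexes high on leaf_update_count are paying for that daily-changing column.
SELECT  OBJECT_NAME(o.object_id)  AS table_name,
        i.name                    AS index_name,
        o.leaf_insert_count,
        o.leaf_update_count,
        o.leaf_delete_count
FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS o
JOIN    sys.indexes AS i
        ON i.object_id = o.object_id
       AND i.index_id  = o.index_id
ORDER BY o.leaf_update_count DESC;
```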