SQL Server Query Optimization

Posted on 2013-05-13
Last Modified: 2014-03-22
Hi All,

I'm creating a set of stored procedures that will follow a search algorithm to return matches to a set of saved searches.

The system has various BIT fields, INT fields, date fields, and category fields (multiple INTs) that need to be matched in these SPs.

Cut-down example:

I have a list of products that have
Type (this can be cat1, cat2, cat3, or cat4)

I have a saved search
Colour --Red
Size -- Not Set
Weight -- Not Set
Type -- cat 1 or cat 3

and another
Colour --Red
Size -- Not Set
Weight -- Not Set
Type -- Not Set

There would be four sets of results that could be returned:
products for a search
searches for a product
counts of each of the above (based on the filters set for the search)

The functionality I would need to implement is that if a value is not set in a search, that value is not filtered on in the results.

The number of products would be under 50,000, and the number of searches would be about 20,000 per year.

My question is what would be the best way to achieve this?

Saved search as two tables?

Name SavedSearch
SavedSearchID INT
Colour varchar
Size  decimal
Weight decimal

another table
Name SavedSearchCategories
SavedSearchID INT
CategoryID INT

Would this be the best table structure?  With a clustered index on the first column of each?

What would be the best way of implementing "when a filter value is not set, do not apply that filter" for the SavedSearch table and then the SavedSearchCategories table?
Question by:prototye

Expert Comment

by:Scott Pletcher
ID: 39163232
To get good performance from this, you'll need to use dynamic SQL.  This lets you mention a column in the search only if you're actually comparing a value on it.

Clustering could be tricky here.  If you (almost) always specify a certain criterion, then it's a good candidate for clustering.  For example, if (almost) every query specified a specific colour(s), then Colour would be a good candidate for the leading clustering key.

I'd store the search criteria as one condition per column per row, with a "search_id" to identify each saved search.

Based on your original example:
I have a saved search
Colour --Red
Size -- Not Set
Weight -- Not Set
Type -- cat 1 or cat 3

The rows might be:
(SearchId, Column, Operator, Value)
(1, Colour, =, 'Red')
(1, Type, =, 1)
(1, Type, =, 3)

"IN" or "OR" could be used to allow multiple values for a single column, such as Type above; "AND" could be used to connect the different columns:

(Colour = 'Red') AND (Type = 1 OR Type = 3)

Personally, my preference is to have code that checks the conditions when the search is saved and fully generates the dynamic SQL at that time: this drastically reduces processing time later, as the query code is already fully constructed.
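
The save-time approach could look something like this sketch (the GeneratedSql column, the Products table, and the stored query text are assumptions for illustration, not from the thread):

```sql
-- Hypothetical sketch: the query text is built once, when the search is
-- saved, stored alongside the search, and simply executed later.
DECLARE @sql NVARCHAR(MAX);

-- For search 1 above, the stored text might be:
--   SELECT ProductID FROM Products
--   WHERE Colour = 'Red' AND Type IN (1, 3)
SELECT @sql = GeneratedSql
FROM SavedSearch
WHERE SavedSearchID = 1;

EXEC sp_executesql @sql;
```

Because the WHERE clause only mentions columns that actually have criteria, the optimizer can use whatever indexes match those columns instead of handling catch-all predicates.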

Author Comment

ID: 39164096
Hi Scott,

This solution looks good if I were only running SQL for the search, i.e. save a search and then run the SQL to retrieve the results for that search.
However, I don't see how it would work if I were returning all the searches for a product.

I have used dynamic SQL before and did think about using it for this, but I was thinking that it would get complex fairly quickly and be difficult to debug, and I would prefer to look at other solutions.

I was thinking last night that maybe in the SavedSearch table I could have a bit column for Type, "FilterOnCat", which would be set to 1 when any cats are selected.
I could then join the products to the SavedSearchCategories table on SavedSearch.FilterOnCat = 0 OR product.CategoryID = SavedSearchCategories.CategoryID.
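
A sketch of that join, with table and column names assumed from the thread:

```sql
-- Hypothetical sketch of the bit-flag join.  Note that with an INNER
-- JOIN, a search with FilterOnCat = 0 still needs at least one row in
-- SavedSearchCategories to survive the join; a LEFT JOIN (or a dummy
-- row per search) would be needed otherwise.
SELECT DISTINCT s.SavedSearchID, p.ProductID
FROM SavedSearch s
JOIN Products p
  ON p.Colour = s.Colour              -- other non-category filters elided
JOIN SavedSearchCategories sc
  ON sc.SavedSearchID = s.SavedSearchID
 AND (s.FilterOnCat = 0 OR p.CategoryID = sc.CategoryID);
```

The OR inside the join predicate is exactly the construct Scott warns about below: the optimizer cannot use a single index seek to satisfy it.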

Expert Comment

by:Scott Pletcher
ID: 39165058
You could do all that, but the optimizer will not really "understand" it, and as the query gets more complex, SQL Server is extremely likely to give up trying to optimize and just fully scan (almost) every table.

You will eventually have to produce runnable code anyway; to me, more complexity makes it even more important to produce code as early as possible in the translation from requirements to actual code.  That way, you know as soon as possible if/when you hit the point where your generating code can't produce the query needed, and you need to adjust it.


Author Comment

ID: 39177413
This wouldn't help when trying to match a product to searches.  For example, when the values for a search are

(SearchId, Column, Operator, Value)
(1, Colour, =, 'Red')
(1, Type, =, 1)
(1, Type, =, 3)

and a product is
ProductID 1
Colour = Red
Cat = 2 or Cat = 3

Generating SQL:
SELECT DISTINCT SearchID FROM ....
WHERE (Column = 'Colour' AND Value = 'Red') AND ((Column = 'Type' AND Value = 2) OR (Column = 'Type' AND Value = 3))

which would add the additional cost of searching the string column (Column).

Any suggestions for this?

Accepted Solution

prototye earned 0 total points
ID: 39934175
I eventually created a separate table for each search filter, using NULLs and ISNULL on the joins to show all products when no preference has been set for a certain filter.
E.g. when no colour preference has been set, the search table SearchColour would have a SearchID column linking back to the main search and a Value column containing NULL.  The join would be products.Colour = ISNULL(SearchColour.Value, products.Colour).  This optimizes pretty well.
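
A sketch of the accepted pattern (table and column names assumed from the comment above):

```sql
-- One table per filter.  A NULL Value means "no preference": the ISNULL
-- then collapses the predicate to p.Colour = p.Colour, which is true
-- for every product (unless the product's own Colour is NULL).
SELECT sc.SearchID, p.ProductID
FROM Products p
JOIN SearchColour sc
  ON p.Colour = ISNULL(sc.Value, p.Colour);
```

One filter, one join: each additional filter table joins in the same way, so an unset filter simply passes every product through.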

Expert Comment

by:Anthony Perkins
ID: 39935691
"using nulls and isnull on joins to show all products no preference has been set for a certain filter"
This does work for small tables; however, when you get to tables with millions of rows, you will find that there is no good alternative to dynamic SQL, as suggested by Scott.

Author Closing Comment

ID: 39947185
This solution did make the queries more complex, but it was maintainable and optimized well.
