prototye

asked on

SQL Server Optimization of Query

Hi All,

I'm creating a set of stored procedures that will follow a search algorithm to return matches to a set of saved searches.

The system has various BIT fields, INT fields, date fields, and category fields (multiple INTs) that need to be matched through these SPs.

Cut-down example:

I have a list of products that have:
Colour
Size
Weight
Type (this can be cat1, cat2, cat3, or cat4)

I have a saved search
Colour --Red
Size -- Not Set
Weight -- Not Set
Type -- cat 1 or cat 3

and another
Colour --Red
Size -- Not Set
Weight -- Not Set
Type -- Not Set

There would be four sets of results that could be returned:
products for a search
searches for a product
counts of each of the above (based on the filters set for each search)

The functionality I need to implement is that if a value is not set in a search, that value is not filtered on in the results.

The number of products would be under 50,000, and the number of searches would be about 20,000 per year.

My question is what would be the best way to achieve this?

Saved search as two tables?

Table: SavedSearch
Columns:
SavedSearchID INT
Colour VARCHAR
Size DECIMAL
Weight DECIMAL

Table: SavedSearchCategories
SavedSearchID INT
CategoryID INT


Would this be the best table structure, with a clustered index on the leading column(s) of each?
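For reference, a rough sketch of that structure in T-SQL (the data types and key choices here are assumptions, not settled decisions):

CREATE TABLE dbo.SavedSearch
(
    SavedSearchID INT NOT NULL PRIMARY KEY CLUSTERED,   -- likely an IDENTITY in practice
    Colour        VARCHAR(30)  NULL,                    -- NULL = not set / no filter
    Size          DECIMAL(9,2) NULL,
    Weight        DECIMAL(9,2) NULL
);

CREATE TABLE dbo.SavedSearchCategories
(
    SavedSearchID INT NOT NULL,
    CategoryID    INT NOT NULL,
    PRIMARY KEY CLUSTERED (SavedSearchID, CategoryID)
);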

What would be the best way of implementing "when a filter value is not set, do not apply that filter" for the SavedSearch table and then the SavedSearchCategories table?
Scott Pletcher

To get good performance from this, you'll need to use dynamic SQL.  This allows you to only mention a column in the search if you're actually comparing a value on it.

Clustering could be tricky here. If you (almost) always specify a certain criterion, then it's a good candidate for clustering. For example, if (almost) every query specified a specific color (or colors), then color would be a good candidate for the leading clustering key.

I'd store the search criteria as one condition per column per row, with a "search_id" to identify each saved search.

Based on your original example:
"
I have a saved search
Colour --Red
Size -- Not Set
Weight -- Not Set
Type -- cat 1 or cat 3
"

The rows might be:
(SearchId, Column, Operator, Value)
(1, Colour, =, 'Red')
(1, Type, =, 1)
(1, Type, =, 3)

"IN" or "OR" could be used to allow multiple values for a single column, such as Type above; "AND" could be used to connect the different columns:

(Colour = 'Red') AND (Type = 1 OR Type = 3)
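As a sketch (the table and column names here are placeholders, not fixed), those criteria rows could be stored like this:

CREATE TABLE dbo.SavedSearchCriteria
(
    SearchId   INT          NOT NULL,
    ColumnName VARCHAR(60)  NOT NULL,   -- 'Colour', 'Type', ...
    Operator   VARCHAR(10)  NOT NULL,   -- '=', 'IN', ...
    Value      VARCHAR(100) NOT NULL
);

INSERT INTO dbo.SavedSearchCriteria (SearchId, ColumnName, Operator, Value)
VALUES (1, 'Colour', '=', 'Red'),
       (1, 'Type',   '=', '1'),
       (1, 'Type',   '=', '3');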


Personally, my preference is to have code that checks the conditions when the search is saved and fully generates the dynamic SQL at that time: this drastically reduces processing time later, as the query code is already fully constructed.
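For example, a rough sketch of building that WHERE clause from the criteria rows at save time (STRING_AGG needs SQL Server 2017 or later, and the value quoting is deliberately simplified):

DECLARE @SearchId INT = 1, @Where NVARCHAR(MAX), @Sql NVARCHAR(MAX);

-- One "column IN (...)" clause per filtered column, joined with AND
SELECT @Where = STRING_AGG(ColumnClause, ' AND ')
FROM (
    SELECT QUOTENAME(ColumnName) + ' IN ('
         + STRING_AGG('''' + REPLACE(Value, '''', '''''') + '''', ', ')
         + ')' AS ColumnClause
    FROM dbo.SavedSearchCriteria
    WHERE SearchId = @SearchId
    GROUP BY ColumnName
) AS clauses;

SET @Sql = N'SELECT ProductID FROM dbo.Products WHERE ' + @Where;
-- Store @Sql against the saved search and run it later with sp_executesql.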
prototye (ASKER)

Hi Scott,

This solution looks good if I were only running SQL for the search, i.e. save a search and then run the SQL to retrieve the results for that search.
However, I don't see how it would work if I were returning all the searches for a product.

I have used dynamic SQL before and did think about using it for this, but I was thinking that it would get complex fairly quickly and also be difficult to debug, so I would prefer to look at other solutions first.

I was thinking last night that maybe in the SavedSearch table I could have a bit field, so that for Type there is a bit column "FilterOnCat" which would be set to 1 when any categories are selected.
I could then join the products to the SavedSearchCategories table on SavedSearch.FilterOnCat = 0 OR product.CategoryID = SavedSearchCategories.CategoryID.
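Something along these lines (just a sketch to show the shape of the join; FilterOnCat, the NULL handling on Colour, and the LEFT JOIN so a search with no category rows still appears are all assumptions):

SELECT DISTINCT p.ProductID, s.SavedSearchID
FROM dbo.Products p
JOIN dbo.SavedSearch s
    ON (s.Colour IS NULL OR s.Colour = p.Colour)   -- unset filter = no restriction
LEFT JOIN dbo.SavedSearchCategories ssc
    ON  ssc.SavedSearchID = s.SavedSearchID
    AND ssc.CategoryID = p.CategoryID
WHERE s.FilterOnCat = 0            -- no categories selected on the search
   OR ssc.CategoryID IS NOT NULL;  -- otherwise the product's category must be listed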
Scott Pletcher
You could do all that, but the optimizer will not really "understand" it, and as the query gets more complex, SQL Server is extremely likely to just give up trying to optimize it and fully scan (almost) every table.

You will eventually have to produce runnable code anyway; to me, more complexity makes it even more important to produce code as soon as possible in the translation process from requirements to actual code. That way, you know as soon as possible if/when you hit the point where your translation code can't produce the query needed and you need to adjust the generating code.
prototye (ASKER)
This wouldn't help when trying to match products to searches. For example, take matching a product against searches where the criteria rows for a search are:
(SearchId, Column, Operator, Value)
(1, Colour, =, 'Red')
(1, Type, =, 1)
(1, Type, =, 3)

and a product is
ProductID 1
Colour = Red
Cat = 2 or Cat = 3

The generated SQL would then be something like:
Select DISTINCT SearchID from ....
Where (Column = 'Colour' and value = 'Red') and ((Column = 'Type' and value = 2) or (Column = 'Type' and value = 3))

which would add the additional cost of searching the string column (the column-name column).
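Roughly, a sketch of what such a matching query would have to look like in practice (a relational-division style check per filtered column; the table and column names follow the sketch above and are assumptions), and it still has to compare the string column:

SELECT s.SearchId
FROM (SELECT DISTINCT SearchId FROM dbo.SavedSearchCriteria) AS s
WHERE NOT EXISTS (          -- no filtered column that this product fails to satisfy
    SELECT 1
    FROM dbo.SavedSearchCriteria c
    WHERE c.SearchId = s.SearchId
    GROUP BY c.ColumnName
    HAVING MAX(CASE WHEN (c.ColumnName = 'Colour' AND c.Value = 'Red')        -- product's colour
                      OR (c.ColumnName = 'Type'   AND c.Value IN ('2', '3'))  -- product's categories
                    THEN 1 ELSE 0 END) = 0
);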

Any suggestions for this?
ASKER CERTIFIED SOLUTION
prototye
Using NULLs in the SavedSearch filter columns and ISNULL() on the joins so that all products are returned when no preference has been set for a certain filter.
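A minimal sketch of that pattern (assuming the SavedSearch / SavedSearchCategories structure above, and that the product columns themselves are NOT NULL):

SELECT s.SavedSearchID, p.ProductID
FROM dbo.SavedSearch s
JOIN dbo.Products p
    ON  p.Colour = ISNULL(s.Colour, p.Colour)   -- a NULL filter value matches every product
    AND p.Size   = ISNULL(s.Size,   p.Size)
    AND p.Weight = ISNULL(s.Weight, p.Weight)
WHERE NOT EXISTS (SELECT 1 FROM dbo.SavedSearchCategories ssc    -- no category filter set
                  WHERE ssc.SavedSearchID = s.SavedSearchID)
   OR EXISTS (SELECT 1 FROM dbo.SavedSearchCategories ssc        -- or the product's category is listed
              WHERE ssc.SavedSearchID = s.SavedSearchID
                AND ssc.CategoryID = p.CategoryID);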
This does work for small tables; however, when you get to tables with millions of rows you will find that there is no good alternative to dynamic SQL, as suggested by Scott.
This solution did make the queries more complex, but it was maintainable and optimized well.