chaitu chaitu
asked on
How to make a date-wise report query faster
The query below displays a date-wise report from '2014-04-01' to '2014-04-30', but it takes about a minute to return the results.
I used SQL Profiler and also the Database Engine Tuning Advisor, but neither gives any recommendations.
I am able to write big queries, but not efficient ones.
I would like some tutorials or videos on how to make queries fast: how do I find out which condition is taking most of the time, and which indexes need to be created? I have looked at the execution plan but could not get much out of it.
The conditions below are mainly responsible for the slow response. How can I rewrite them to make the query fast? Please note that I created a nonclustered index on EMP_ID in the EMP_INSTANCES table and included the columns (CREATE_DATE, PURGE_DATE) in that index.
D.[Date] >= CONVERT(VARCHAR(10), EI.CREATE_DATE, 101) AND
(EI.PURGE_DATE IS NULL OR D.[Date] <= CONVERT(VARCHAR(10), EI.PURGE_DATE, 101))
DECLARE
    @STARTDATE DATETIME,
    @ENDDATE DATETIME,
    @CUSTOMERID INTEGER

SET @STARTDATE = '2014-04-01 00:00'
SET @ENDDATE = '2014-04-30 00:00'
SET @CUSTOMERID = 1234567

;WITH Dates AS
(
    SELECT DATEADD(DAY, number, @STARTDATE) [Date]
    FROM master.dbo.spt_values
    WHERE type = 'P'
      AND number >= 0
      AND DATEADD(DAY, number, @STARTDATE) <= @ENDDATE
)
SELECT D.[Date] AS [DATE], LOC.NAME, SUM(A.FILESIZE) FILESIZE
FROM Dates D,
     EMP_INSTANCES EI,
     EMP A,
     POLICIES P,
     CUSTOMERS C,
     LOCATIONS LOC
WHERE
    A.ID = EI.EMP_ID AND
    A.POLICY_ID = P.ID AND
    C.ID = P.CUSTOMER_ID AND
    LOC.ID = EI.LOCATION_ID AND
    C.ID = @CUSTOMERID AND
    D.[Date] >= CONVERT(VARCHAR(10), EI.CREATE_DATE, 101) AND
    (EI.PURGE_DATE IS NULL OR D.[Date] <= CONVERT(VARCHAR(10), EI.PURGE_DATE, 101))
GROUP BY D.[Date], LOC.NAME
ASKER
I created the nonclustered index on EMP_ID in the EMP_INSTANCES table and included the columns (CREATE_DATE, PURGE_DATE) in that index.
Do you want me to create any other indexes on the EMP_INSTANCES table? The remaining indexes will be created by default if there is a primary key.
To be honest, I would create NC indexes on every table associated with the query. I am willing to wager that it will cut the time down significantly.
ASKER
May I know NC indexes on which columns?
You mean POLICY_ID in the EMP table
and CUSTOMER_ID in the POLICIES table?
Am I missing anything?
My fault, my communication was poor :)
I would create NC indexes on every PK in the tables you are referencing in your query.
ASKER
Created the indexes; it is still taking the same time.
ASKER
If I comment out this condition, the output comes back in under 30 seconds, but with the condition in place the query takes more than 2 minutes.
D.[Date] >= CONVERT(VARCHAR(10), EI.CREATE_DATE, 101) AND
Why are you doing the CONVERT in the first place? It removes the ability to use an index on CREATE_DATE and PURGE_DATE.
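To keep those predicates sargable, the comparison can be moved off the indexed columns entirely. A minimal sketch, using the table and column names from the query above; the half-open-range rewrite is a standard workaround, not something confirmed by the asker:

```sql
-- Original (defeats any index on CREATE_DATE / PURGE_DATE):
--   D.[Date] >= CONVERT(VARCHAR(10), EI.CREATE_DATE, 101) AND
--   (EI.PURGE_DATE IS NULL OR D.[Date] <= CONVERT(VARCHAR(10), EI.PURGE_DATE, 101))
--
-- Equivalent comparison with the columns left bare, given that D.[Date]
-- always falls on midnight:
EI.CREATE_DATE < DATEADD(DAY, 1, D.[Date])               -- CREATE_DATE on or before D.[Date]
AND (EI.PURGE_DATE IS NULL OR EI.PURGE_DATE >= D.[Date]) -- PURGE_DATE on or after D.[Date]
```

Because the columns are untouched, the optimizer can use an index on CREATE_DATE and PURGE_DATE for a range seek instead of converting every row.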
Try to use joins on related fields.
Leave extra limiting conditions in the where clause.
Check this article on joins.
https://www.experts-exchange.com/Database/MS_Access/A_3597-INNER-JOIN-a-Number-Of-Tables.html
I see you are using 6 tables. Assume an average of 10 records in each table.
In the current form you are effectively writing a cartesian product, where each record from table 1 is paired with every record of table 2, giving 100 records.
The next table makes the set 1,000 records; including all 6 tables, you are dealing with a 1,000,000-record set loaded in memory, and only then are the filter logic and the conversion applied.
With explicit joins, the records are limited at each join, and only a small number of records remain in memory for the extra logic and conversion.
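For illustration, here is the asker's query restructured with explicit INNER JOINs, keeping the filter on the customer in the WHERE clause. This is a sketch built from the tables and columns in the query posted above; it assumes the same Dates CTE and variables are in scope:

```sql
SELECT D.[Date] AS [DATE], LOC.NAME, SUM(A.FILESIZE) AS FILESIZE
FROM Dates D
JOIN EMP_INSTANCES EI
    ON  D.[Date] >= CONVERT(VARCHAR(10), EI.CREATE_DATE, 101)
    AND (EI.PURGE_DATE IS NULL OR D.[Date] <= CONVERT(VARCHAR(10), EI.PURGE_DATE, 101))
JOIN EMP A         ON A.ID = EI.EMP_ID
JOIN POLICIES P    ON P.ID = A.POLICY_ID
JOIN CUSTOMERS C   ON C.ID = P.CUSTOMER_ID
JOIN LOCATIONS LOC ON LOC.ID = EI.LOCATION_ID
WHERE C.ID = @CUSTOMERID          -- filter condition stays in WHERE
GROUP BY D.[Date], LOC.NAME
```

Each ON clause now states exactly which relationship links the two tables, which makes the join logic easier to read and review even where the optimizer would produce the same plan.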
ASKER
Qlemo,
I need to take the date only, not datetime; that's why I used the CONVERT function.
And you can't manage to store dates only in those two fields? CREATE_DATE sounds like there is no time portion.
Otherwise you can create computed columns on both datetime fields and set an index on them.
ASKER
But we store the time portion as well. What do you mean by computed columns on both datetime fields?
You can create "virtual" columns as expressions in a table, and the DBMS will manage their content whenever the column(s) they are based on change. In this case you would create one column for CONVERT(VARCHAR(10), CREATE_DATE, 101) and one for CONVERT(VARCHAR(10), PURGE_DATE, 101).
After that, you can index those new columns. That should speed up the search, but it introduces higher update/insert costs for maintaining those columns and indexes.
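A minimal sketch of that approach; the new column names (CREATE_DATE_ONLY, PURGE_DATE_ONLY) and the index name are hypothetical, only EMP_INSTANCES and its date columns come from the thread:

```sql
-- Computed columns exposing only the date part; PERSISTED stores the
-- values physically so they can be indexed without extra requirements.
ALTER TABLE EMP_INSTANCES
    ADD CREATE_DATE_ONLY AS CONVERT(VARCHAR(10), CREATE_DATE, 101) PERSISTED,
        PURGE_DATE_ONLY  AS CONVERT(VARCHAR(10), PURGE_DATE, 101) PERSISTED;

-- Index on the computed columns so the report's date predicates can seek.
CREATE NONCLUSTERED INDEX IX_EMP_INSTANCES_DATE_ONLY
    ON EMP_INSTANCES (CREATE_DATE_ONLY, PURGE_DATE_ONLY);
```

The report query would then compare D.[Date] against these columns directly, instead of applying CONVERT per row.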
No feedback on my comment: http:#a40039371
hnasr,
If the Query Optimizer does its job well, it doesn't matter for inner joins whether conditions go into ON or WHERE. However, it is good style to clearly separate join conditions from filter conditions.
For outer joins it is necessary to do so, because there it makes a difference.
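A small illustration of that outer-join difference, using hypothetical tables (the P.ACTIVE column is invented for the example):

```sql
-- Filter in ON: customers without an active policy are still returned,
-- with the POLICIES columns as NULL.
SELECT C.ID, P.ID AS POLICY_ID
FROM CUSTOMERS C
LEFT JOIN POLICIES P
    ON P.CUSTOMER_ID = C.ID AND P.ACTIVE = 1;

-- Same filter in WHERE: for unmatched customers P.ACTIVE is NULL, so the
-- predicate fails and those rows disappear -- the LEFT JOIN silently
-- behaves like an INNER JOIN.
SELECT C.ID, P.ID AS POLICY_ID
FROM CUSTOMERS C
LEFT JOIN POLICIES P ON P.CUSTOMER_ID = C.ID
WHERE P.ACTIVE = 1;
```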
ASKER CERTIFIED SOLUTION
vasto is right: in your comparisons you are comparing a datetime (D.[Date]) with a varchar (CONVERT(VARCHAR(10), EI.PURGE_DATE, 101)). At the very least, convert EI.PURGE_DATE to a datetime, not a varchar. If it is already a date, then just leave it alone!
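On SQL Server 2008 and later, one way to keep the comparison within date/datetime types is to cast to DATE rather than to a string. A sketch using the thread's column names:

```sql
-- Truncates the time portion without a datetime -> string -> datetime
-- round-trip; both sides of each comparison stay date-typed.
D.[Date] >= CAST(EI.CREATE_DATE AS DATE) AND
(EI.PURGE_DATE IS NULL OR D.[Date] <= CAST(EI.PURGE_DATE AS DATE))
```

This fixes the type mismatch, though a rewrite that leaves the columns completely untouched, as discussed earlier in the thread, remains friendlier to indexes.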
My suggestion would be to index the tables associated with the query to see if that improves performance. Have you tried that yet?