# Calculate the difference between two timestamps - Redux

The solution works great until the table grows past 1M records. At 1.5M records the query goes from mere seconds to who knows how long; I usually get impatient and kill it after 30 minutes. I even created an index on the columns used in the script (that took all of three seconds), thinking that would improve performance. After a little research the slowdown appears to be related to the ROW_NUMBER() call, or maybe CTEs in general just do not scale well; it doesn't really matter which. What I would like to know is whether this code can be rewritten to perform better on really large data sets: no ROW_NUMBER(), no CTE, whatever makes it work. It really should not take long to find the difference between two numbers in consecutive rows, and the original solution works superbly on all of my smaller tables.

Thanks
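For reference, the original approach being described presumably looked something like the following CTE + ROW_NUMBER() self-join. This is a sketch only; the table name RandomData and the Weeks/Seconds columns are taken from the answers below, and 604800 is the number of seconds in a week.

```
-- Assumed reconstruction of the original CTE + ROW_NUMBER() solution.
;WITH Ordered AS (
    SELECT Weeks, Seconds,
           ROW_NUMBER() OVER (ORDER BY Weeks, Seconds) AS rn
    FROM RandomData
)
-- Join each row to the next one and compute the gap in seconds.
SELECT o1.Weeks, o1.Seconds,
       (o2.Weeks - o1.Weeks) * 604800 + (o2.Seconds - o1.Seconds) AS TimeDiff
FROM Ordered o1
INNER JOIN Ordered o2 ON o2.rn = o1.rn + 1;
```

Each reference to the CTE in the join can be expanded and re-evaluated separately by the optimizer, which is one reason this pattern can fall over on large tables without a supporting index.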


Commented:
But if you don't insist on "no temp table", this code runs in only 1 second on a 100k-row sample,
and 3 seconds on a 250k-row sample.

Remark: my system's physical disk is a kind of external RAM drive, so keep in mind that disk access here is pretty fast compared to a regular disk system.

```
-- Load the data into a table variable, letting the IDENTITY column
-- assign consecutive IDs in (Weeks, Seconds) order.
DECLARE @R TABLE (ID int PRIMARY KEY IDENTITY(1,1), Weeks int, Seconds int)

INSERT INTO @R (Weeks, Seconds)
SELECT Weeks, Seconds
FROM RandomData
ORDER BY Weeks, Seconds

-- Join each row to the next one; 604800 = seconds per week.
SELECT R1.*, (R2.Weeks - R1.Weeks) * 604800 + (R2.Seconds - R1.Seconds) AS TimeDiff
FROM @R R1
INNER JOIN @R R2 ON R1.ID = R2.ID - 1
```
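As a side note, on SQL Server 2012 or later the same consecutive-row difference can be computed in a single pass with the LEAD() window function, with no temp table and no self-join. This is a sketch assuming the same RandomData schema; the last row gets a NULL TimeDiff because it has no successor.

```
-- Single-pass alternative using LEAD() (SQL Server 2012+).
SELECT Weeks, Seconds,
       (LEAD(Weeks)   OVER (ORDER BY Weeks, Seconds) - Weeks) * 604800
     + (LEAD(Seconds) OVER (ORDER BY Weeks, Seconds) - Seconds) AS TimeDiff
FROM RandomData
ORDER BY Weeks, Seconds
```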

Data Engineer Commented:
Give this a try.
```
SELECT ID,
       Weeks,
       Seconds,
       NextSeconds - Seconds AS TimeDiff
FROM (SELECT *,
             (SELECT MIN(Seconds)
              FROM your_table t2
              WHERE t2.Seconds > t1.Seconds) AS NextSeconds
      FROM your_table t1) t1
ORDER BY Seconds
```

Commented:
For that table with over 1M rows: do you have any condition that could narrow down the data of concern? (I mean, put a WHERE clause inside the CTE part.)

For example, compare only certain Weeks, etc.

Author Commented:
No, it is all data from a sensor array, and this is only a few minutes' worth of data.

Commented:
this is the best I can perform (query using 90% CPU on Quad CPU Intel Xeon 2.66GHz 6 Cores)
sample data 50k rows , 30 seconds
sample data 100k rows, 2 minutes

assume these condition were true
1. the data in RandomData to perform, having more than 1 value for Weeks
2. have index over field (Weeks, Seconds)

```
SELECT ID, Weeks, Seconds,
       (SELECT MIN(Weeks * 604800 + Seconds)
        FROM RandomData t2 WITH (INDEX = IX_Weeks_Seconds)
        WHERE t2.Weeks IN (t1.Weeks, t1.Weeks + 1)
          AND t2.Seconds > CASE WHEN t2.Weeks = t1.Weeks THEN t1.Seconds ELSE 0 END
       ) - (t1.Weeks * 604800 + t1.Seconds) AS TimeDiff
FROM RandomData t1
ORDER BY Weeks, Seconds
```

Author Commented:
Sharath's solution looks promising so far. How would I add a test for all values over a threshold?

JoeNuvo, it appears that ruling out temp tables was a bad constraint for very large tables; I will check your solution now.
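Regarding the threshold question above: one approach (a sketch, with an assumed @Threshold parameter) is to wrap either query in a derived table and filter on the computed TimeDiff, since a column alias cannot be referenced directly in the same query's WHERE clause.

```
-- Sketch: keep only the gaps above a threshold.
-- Wraps the temp-table query from above; @Threshold is an assumed parameter.
DECLARE @Threshold int = 600  -- assumed value: flag gaps longer than 10 minutes

SELECT *
FROM (SELECT R1.ID, R1.Weeks, R1.Seconds,
             (R2.Weeks - R1.Weeks) * 604800 + (R2.Seconds - R1.Seconds) AS TimeDiff
      FROM @R R1
      INNER JOIN @R R2 ON R1.ID = R2.ID - 1) d
WHERE d.TimeDiff > @Threshold
```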

Data Engineer Commented:
>> Sharath's solution looks promising so far, how would I add a test for all values over a threshold?

Did you run my query against your whole table? What is the query execution time?

Author Commented:
This is the test: 1,408,926 rows of already-sorted data, no indexes.

JoeNuvo: 4 seconds
Sharath_123: killed the test after 30 minutes

Same table with an index on Weeks and Seconds:
JoeNuvo: 4 seconds
Sharath_123: 3 seconds to create the index, 4 seconds for the TimeDiff query

So it turns out that creating an index is extremely important when working on the master table, and the results are essentially a tie (which is not a bad thing).
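For anyone reproducing this test, the index in question would be created with something like the following (the index name is assumed; it matches the hint used in the earlier query):

```
-- Supporting index on the ordering columns.
CREATE INDEX IX_Weeks_Seconds ON RandomData (Weeks, Seconds)
```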


Author Commented:
Forgot to bump the points.
Question has a verified solution.
