Does @@DBTS ever overflow?

I'm using timestamp (rowversion) columns to track changes in my database, periodically processing rows whose value falls between my previous "high water mark" and the current value of @@DBTS. The start value (lower bound) is the previously used upper bound, and the new upper bound is the current value of @@DBTS.

This is working fine, but I was just wondering...isn't @@DBTS going to overflow at some point?

@@DBTS is BINARY(8), which is a finite size; granted, that's a really big number, but there's still a ceiling. Sooner or later, I expect the value of @@DBTS to reset to zero and start counting again. At that point, my existing logic would miss the changes in that batch.

I reckon I just need to be aware of this and write into my code to use two different conditions.

Normally (when start < end) use this logic:

SELECT * FROM [TableName] WHERE Timestamp > @Start AND Timestamp < @End

Overflow condition (when end < start) use this logic:

SELECT * FROM [TableName] WHERE Timestamp > @Start OR Timestamp < @End
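The two predicates can be sketched outside SQL as well. Here is a minimal Python simulation (hypothetical helper name, integer stand-ins for the binary rowversion values) showing how choosing between the AND and OR forms handles a counter that wraps around:

```python
# Sketch: choose between the normal and wraparound predicates
# based on the high-water-mark bounds (names are illustrative).

def change_filter(start: int, end: int):
    """Return a predicate over row 'timestamps' for the batch
    between start and end, handling the wraparound case."""
    if start < end:
        # Normal case: Timestamp > @Start AND Timestamp < @End
        return lambda ts: start < ts < end
    else:
        # Wraparound case: Timestamp > @Start OR Timestamp < @End
        return lambda ts: ts > start or ts < end

rows = list(range(10))           # pretend rowversions 0..9
normal = change_filter(5, 7)     # matches only row 6
wrapped = change_filter(9, 2)    # matches rows 0 and 1 after the reset
```

The same branch could of course be expressed in T-SQL with an IF, or folded into a single WHERE clause; the point is only that the comparison direction flips when the counter has reset.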

Can anyone confirm that I should bother checking for end < start and using these two different conditions, or would most people not bother? I think it's a bit academic, as I may be long dead and buried before our db ever reaches that value - but I like to think of best practices. :)
Mark Wills (Topic Advisor) commented:
Well, when you think about it, it really is a timestamp, and that does mean 8 bytes, agreed. Now, an 8-byte number is basically a 64-bit number, so if we work on, say, SELECT POWER(2, 63) (leaving the high-order bit for expansion), then we get:

the same as the 8-byte BIGINT maximum: 9,223,372,036,854,775,807

SELECT (9223372036854775807 / (1000000000 * 365.24))  -- that is a billion transactions per day, every day of the year

then you will have to worry in about 25,252,900 years - and the high-order bit we left in reserve buys roughly as long again to fix it... probably take that long to run the long-overdue archiving utility.
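Mark's back-of-envelope arithmetic checks out. A quick sketch in Python (the one-billion-per-day rate is his assumption, not a measured figure):

```python
# Rough check: at one billion transactions per day, how many years
# before a 63-bit rowversion counter is exhausted?

BIGINT_MAX = 2**63 - 1                 # 9,223,372,036,854,775,807
per_year = 1_000_000_000 * 365.24      # transactions per year
years = BIGINT_MAX / per_year
print(f"{years:,.0f} years")           # on the order of 25 million years
```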

Don't think you have to worry in a big hurry, and do you really want to add more predicates to a query if you do not have to ?
Mark Wills (Topic Advisor) commented:
@@DBTS is varbinary; it returns the current rowversion - just the timestamp data type, really. And in theory, I suppose it could overflow, considering the value is not released by a delete or a rolled-back transaction...

If you have already set up a kind of dependency on a "high water mark" for determining "transactions since", then yes, it would be worthwhile using the start and end range - a kind of self-fulfilling prophecy in that regard.

However, there is a bit of a price to pay - especially if you have the transaction volumes that may warrant "special considerations" for potentially overflowing the rowversion.

Personally, I wouldn't worry about it - or, if I were, I would incorporate it into my weekly "health check" and report on progress / consumption rates and make a judgement call later. I would be inclined to comment the code areas that may be affected; handling overflows / wraparound numbers is an absolute pain in the proverbial, so I prefer not to, except by exception...
Ken_Lyon (Author) commented:
It's a tricky one, because there wouldn't be any exception generated. If you imagine the range of values were 0 to 10, then you might have the following values:

First run: Start=5, End=7
Second run: Start=7, End=9
Third run: Start=9, End=2

When the third run is processed, there would be no exception. All that would happen is that the query would return zero rows. That is, if you were using filter logic like "give me all rows with a timestamp greater than Start and less than End."

I think for peace of mind, I'm going to write my SQL logic to use the two different conditions depending on whether End>Start or vice versa.

Realistically, though, I think it would take YEARS for this value to overflow.

It's interesting you say @@DBTS is varbinary. I see that too in MSDN, yet MIN_ACTIVE_ROWVERSION is binary(8). I think there is an upper limit of 8 bytes for a timestamp, meaning it must overflow eventually: if there's an upper threshold on the size of the value and all that's ever done is increment it, then it stands to reason it must wrap around at some point.
Ken_Lyon (Author) commented:
It's a fair point. I suppose there's just that side of me that knows it's not entirely robust logic. The key fact is that it will "practically" never overflow - that is, not in our lifetimes - and my database isn't so good that it'll still be around after that time either.

Question has a verified solution.
