

Does @@DBTS ever overflow?

Posted on 2008-10-24
Medium Priority
Last Modified: 2012-08-14
I'm using timestamp columns to keep track of changes in my database and periodically process rows with a value between my previous "high water mark" and the current value of @@DBTS. The start value (lower bound) is the previously used upper bound, and the new upper bound is the current value of @@DBTS.

This is working fine, but I was just wondering...isn't @@DBTS going to overflow at some point?

@@DBTS is BINARY(8), which is a finite size, and granted it's a really big number, there's still a ceiling there. Sooner or later, what I expect to happen is that the value of @@DBTS resets to zero and starts again. At that point, my existing logic would miss the changes in that batch.

I reckon I just need to be aware of this and write my code to use two different conditions.

Normally (when start < end) use this logic:

SELECT * FROM [TableName] WHERE Timestamp > @Start AND Timestamp < @End

Overflow condition (when end < start) use this logic:

SELECT * FROM [TableName] WHERE Timestamp > @Start OR Timestamp < @End

Can anyone confirm that I should bother checking for end < start and use these two different conditions or would most people not bother? I think it's a bit academic as I may be long dead and buried before our db ever reaches that value - but I like to think of best practices. :)
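The two WHERE clauses above can be sketched as a single wrap-aware predicate. This is a minimal illustration in Python, not T-SQL — the function name, the small 0-to-10 counter values, and the choice of a half-open window `(start, end]` are all assumptions for the example (many change-tracking loops use `> @Start AND <= @End` rather than two strict inequalities, so the row at exactly the upper bound isn't skipped on the next pass):

```python
# Sketch: wrap-aware "changed since last run" filter, mirroring the two
# WHERE clauses above. Plain integers stand in for 8-byte rowversion values.

def changed_since(ts, start, end):
    """True if row timestamp ts falls in the half-open window (start, end]."""
    if start < end:                      # normal case: counter has not wrapped
        return start < ts <= end
    else:                                # overflow case: counter wrapped to zero
        return ts > start or ts <= end

# Normal window (5, 9]
assert changed_since(7, 5, 9)
assert not changed_since(5, 5, 9)       # lower bound was processed last run

# Wrapped window: start=9, end=2 after the counter reset to zero
assert changed_since(10, 9, 2)          # written just before the wrap
assert changed_since(1, 9, 2)           # written just after the wrap
assert not changed_since(5, 9, 2)       # outside the window entirely
```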
Question by:Ken_Lyon
LVL 51

Expert Comment

by:Mark Wills
ID: 22798267
@@DBTS is varbinary and returns the database's current rowversion - essentially the timestamp data type. And in theory, yes, it could overflow - considering the value is never released by a delete or a rolled-back transaction...

If you have already set up a kind of dependency on a "high water mark" for determining "transactions since", then yes, it would be worthwhile using the start and end range - kind of a self-fulfilling prophecy in that regard.

However, there is a bit of a price to pay - especially if you have the transaction volumes that might warrant "special considerations" for potentially overflowing the rowversion.

Personally, I wouldn't worry about it, or, if I were, I would incorporate it into my weekly "health check", report on progress / consumption rates, and make a judgement call later. I would be inclined to comment the code areas that may be affected; handling overflows / wrap-around numbers is an absolute pain in the proverbial, so I prefer not to, except by exception...

Author Comment

ID: 22810859
It's a tricky one, because there wouldn't be any exception generated. If you imagine the range of values were 0 to 10, then you might have the following values:

First run: Start=5, End=7
Second run: Start=7, End=9
Third run: Start=9, End=2

When the third run is processed, there would be no exception. All that would happen is that the query would return zero rows. That is, if you were using filter logic like "give me all rows with a timestamp greater than Start and less than End."

I think for peace of mind, I'm going to write my SQL logic to use the two different conditions depending on whether End>Start or vice versa.

Realistically, though, I think it would take YEARS for this value to overflow.

It's interesting you say @@DBTS is varbinary. I see that too in MSDN, yet MIN_ACTIVE_ROWVERSION is binary(8). Either way there's an upper limit of 8 bytes for a timestamp, and if all that's ever done is increment a fixed-size value, it stands to reason it must overflow at some point.
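To make that ceiling concrete: an 8-byte value interpreted as an unsigned integer tops out at 2^64 - 1. A quick Python sketch (the hex value here is an arbitrary example, not a real rowversion):

```python
# Sketch: an 8-byte rowversion viewed as a big-endian unsigned integer.
# The ceiling is 2**64 - 1; incrementing past it would wrap back to zero.

ceiling = 2**64 - 1
raw = bytes.fromhex("00000000000007d1")   # example 8-byte value, 0x7D1 = 2001
value = int.from_bytes(raw, "big")

assert value == 2001
assert ceiling == 18_446_744_073_709_551_615
assert (ceiling + 1) % 2**64 == 0         # one more increment wraps to zero
```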

Accepted Solution

Mark Wills earned 500 total points
ID: 22811759
Well, when you think about it, it really is a timestamp, and that does mean 8 bytes, agreed... Now, an 8-byte number is basically a 64-bit number, so if we work on say select power(2,63) (leaving the high-order bit for expansion) then we get:

the same as an 8-byte "bigint" maximum: 9,223,372,036,854,775,807

select ( 9223372036854775807 / (1000000000 * 365.24) )   -- that is a billion transactions per day, every day of the year

then you will have to worry in about 25,252,900 years - and that allows seven and a bit years to fix... probably take that long to run the long overdue archiving utility.

Don't think you have to worry in a big hurry, and do you really want to add more predicates to a query if you do not have to?
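The arithmetic above checks out; a quick Python verification of the same division (the constant names are illustrative):

```python
# Back-of-envelope check of the figure above: at a billion rowversion
# increments per DAY, how many years until a 63-bit counter is exhausted?

BIGINT_MAX = 2**63 - 1                 # 9,223,372,036,854,775,807
per_year = 1_000_000_000 * 365.24      # a billion increments a day, for a year

years = BIGINT_MAX / per_year
assert 25_000_000 < years < 26_000_000   # roughly 25.25 million years
```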

Author Comment

ID: 22811796
It's a fair point. I suppose there's just that side of me that knows it's not entirely robust logic. The key fact is that it will "practically" never overflow - i.e. not in our lifetimes - and regardless of anything else, my database isn't so good that it'll still be around after that time either.

