Isolation level Repeatable Read and Select/Update cycle

Test case:
Setup
create table test (a integer, r rowversion);
insert into test values (1, null);
commit;


Then run this simultaneously in two sessions:
set transaction isolation level repeatable read
set implicit_transactions off
set nocount on
declare @r rowversion
declare @a int
begin tran
select @r=r, @a=a from test
print convert(varchar(20), getdate(), 114)+' Select ok - '+cast(@a as varchar(5))
waitfor delay '0:0:5'
print convert(varchar(20), getdate(), 114)+' wait ok'
update test set a=@a+1 where r = @r
print convert(varchar(20), getdate(), 114)+' update ok'
commit
select * from test


When using isolation level "repeatable read" or higher, it should not be possible to read a row that another transaction (running at the same isolation level) has already selected - this is the behaviour we need.
But I can. Why? The BOL states for RR IL:
This prevents other transactions from modifying any rows that have been read by the current transaction.
which I understand to mean that the SELECTs should pass, but the first conflicting UPDATE should error out.

What I get is a deadlock in the second session. I would have expected the SELECT of the second session having to wait until the first transaction is ended, but that isn't the case.
Worse, if the second transaction waits longer (say 30 seconds), the UPDATE (!) of the first session also waits until the second session attempts its UPDATE, and that then triggers a deadlock for the second session.

The issues now are:
a) the BOL and what I see do not fit together
b) the only way to make a SELECT wait when another transaction has already selected the same row is to use SELECT ... WITH (XLOCK) (see the sketch after this list), but that would require me to modify ALL SQLs going through my DB access layer (an unmanaged C++ DLL) when running in RR IL, or to manually locate the places where we want this behaviour and add the lock hint.
c) SQLs should not have to be modified (much), as the same SQLs are running against different DBMS (like Oracle).
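
For illustration, a minimal sketch of what such a hinted read could look like against the test table above (XLOCK is just one option; UPDLOCK would also serve):

-- Sketch only: the initial read takes an exclusive row lock, so a second session
-- running the same statement blocks here until the first transaction ends.
set transaction isolation level repeatable read
declare @r rowversion
declare @a int
begin tran
select @r=r, @a=a from test with (xlock, rowlock)
update test set a=@a+1 where r = @r
commit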

I've tested this on 2008 R2 (with the DB in 2000 compatibility mode), but I think it is the same for any MSSQL release.

Asked by Qlemo (Batchelor, Developer and EE Topic Advisor)

hspoulsen commented:
You may want to add "with (tablock)" to your select statement, in order to get a lock on that table.

What I've done when I had a busy table, where different processes should not block each other is:

1)
Enabled Read Committed Snapshot Isolation (RCSI)
(this way readers do not block writers and vice versa)
SQL Server 2008 R2 only I think. This feature alone is worth the upgrade.

2)
change my select statement to
SELECT @r=r, @a=a FROM test T WITH (READPAST, ROWLOCK, XLOCK)

this way process B can read past the rows that process A locked, and get its own row to work on.
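
Put together, a hedged sketch of how that could look with the test script above (same table and variables as the repro; READPAST may return no row, hence the null check):

declare @r rowversion
declare @a int
begin tran
-- each session locks the row it reads and skips rows already locked
-- by another session, so the two sessions do not block each other
select @r=r, @a=a from test with (readpast, rowlock, xlock)
if @a is not null
    update test set a=@a+1 where r = @r
commit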

I do not know if the locking hints exist on a SQL Server in 2000 mode, but that is easy to find out. Just try it.

Best regards,
Henrik

Qlemo (author) commented:
No, tablock is too much. I only want to make sure the same row cannot be read twice. Neither of the suggestions will help with this. And it doesn't explain whether the BOL is wrong or my interpretation of it.

hspoulsen commented:
Well, locks are only held as long as they are needed.
So as soon as the SELECT is done, the read locks are released.

That is why I suggest using WITH (READPAST, ROWLOCK, XLOCK) on the tables where you have race-conditions.

I've never used "set transaction isolation level repeatable read" on a production system, because it just blocks too much.

May I recommend that you watch some of the videos from SQLSkills;
http://www.sqlskills.com/sql-server-resources/sql-server-mcm-training-videos/

for instance this one I find is good:
http://technet.microsoft.com/en-us/sqlserver/gg508892.aspx

Qlemo (author) commented:
This is an abstract database layer. I cannot go and modify all SQLs depending on whether they are in a transaction or not, because those SQLs are not under my control. I really do not like the idea of having to parse SQL strings for the FROM clause just to automatically insert lock hints.
On another note, within a transaction, read (shared) locks are released early only for isolation levels below repeatable read; at repeatable read and above they are held until the transaction ends.
And lastly, READPAST skips rows that are currently locked - that is absolutely the wrong thing to do for some tasks, e.g. if the app gets notified about a change in a row and wants to process exactly that change. So your suggestion can only be used with extreme care, and only if you know very well what you are doing.

tliotta commented:
Since the scenario you describe is practically guaranteed to result in a deadlock, it's not clear what the problem is. You might review Microsoft's Lesson 1: Understanding the Available Transaction Isolation Levels article. In particular, read the subtopic 'Repeatable Read Isolation Level' in the 'Choosing the Correct Isolation Level' section.

Spend time to understand the area that includes:
One important thing to note is that, because the repeatable read isolation level is achieved using blocking of the other transaction, the use of this isolation level greatly increases the number of locks held for the duration of the transaction. This, in turn, can cause serious performance problems due to blocked transactions. It also greatly increases the risk for deadlocks.
In your scenario, one process sets some locks and does some work. While that work is happening, the other process starts doing the same. Then the first process tries to set a new block of locks, but it only succeeds with some of them before running into one of the locks from the second process, so it waits.

The second process is still establishing its series of locks, so it keeps going. Sooner or later, though, it runs into a lock established by process #1 (while process #1 is waiting for process #2 to release its locks. (But the process #2 locks that have been established so far aren't going anywhere because process #2 needs to wait for process #1 to finish. (But the process #1 locks aren't going anywhere because process #1 needs to wait for process #2 to finish. (But ... )))...
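
If one wants to watch that lock chain build up, a hedged sketch using the sys.dm_tran_locks DMV (run from a third session while the two repro sessions are blocked):

-- Sketch: shows which session holds or waits for which lock in the current database.
select request_session_id, resource_type, request_mode, request_status
from sys.dm_tran_locks
where resource_database_id = db_id()
order by request_session_id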

Tom

Qlemo (author) commented:
tliotta,

It doesn't seem as if you got the main points of my question. It isn't about lock contention or deadlocks - I'm well versed in database theory and know the cons. And as already said, I cannot change (much) about how the queries are built and run, or the transactional context, as both are outside of my scope.

The "Lession 1" describes for IL RR exactly what I expected, but it isn't that way - and I want to know why. I also want to know if there is a different way than having to insert XLOCK hints.

ste5an (Senior Developer) commented:
From BOL:
REPEATABLE READ

Specifies that statements cannot read data that has been modified but not yet committed by other transactions and that no other transactions can modify data that has been read by the current transaction until the current transaction completes.

It allows exactly what happens in your case: both SELECT statements are allowed to read the data. Due to the WAITFOR in the first transaction, the second SELECT can read without waiting, because the first transaction is sitting in the WAITFOR and holds no blocking locks (only shared locks).

What I get is a deadlock in the second session. I would have expected the SELECT of the second session having to wait until the first transaction is ended, but that isn't the case.

The locks that would block the read are simply not taken at that point. You'll get what you expect with a WITH (UPDLOCK) hint on the SELECT statement.

What happens is that the first UPDATE (LCK_M_U) gets blocked by the second SELECT. The second UPDATE (LCK_M_X) then gets blocked by the first UPDATE. Due to the different lock types, the second transaction is chosen as the deadlock victim, and the first transaction can then continue.
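
As a hedged illustration, the read in the repro script would then become something like this (only the hint is new):

declare @r rowversion
declare @a int
begin tran
-- the update lock taken by this read is held until commit, so the second
-- session's identical select waits here instead of deadlocking later
select @r=r, @a=a from test with (updlock)
waitfor delay '0:0:5'
update test set a=@a+1 where r = @r
commit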

The only open question is: Why do you need REPEATABLE READ in the first place?

I think that you want to take a look at SERIALIZABLE instead.

Vadim Rapp commented:
> I would have expected the SELECT of the second session having to wait until the first transaction is ended, but that isn't the case.

After your first session executes select @r=r, @a=a from test, the 2nd session would have to wait to UPDATE it, but not to SELECT. The SELECT in the 2nd transaction would wait only if the 1st one had already executed its UPDATE.

If, by the time either of them reaches its UPDATE, both have already read the data, then neither of them can update it.

1: select - ok
2: select - ok because 1 did not update yet
1: update - wait because 2 already read the data
2: update - wait because 1 already read the data

Marten Rune (SQL Expert/Infrastructure Architect) commented:
Have you considered snapshot isolation mode instead?

It looks like it might solve your issue.

regards Marten

Qlemo (author) commented:
OK, I think I found my error in reasoning. Of course RR does not prevent reading rows at all, only changing them once they have been read by another transaction, and the descriptions in http:#a40011252 and http:#a40011341 are correct about that.

IL Serializable is too much - we do not want table integrity, only row integrity (skipping a row that is being worked on is an option here, but not in general). A snapshot wouldn't help either.

So it really looks like I have no other way than to modify the SQL strings I get from the application code to contain lock hints (either UPDLOCK or XLOCK).

Vadim Rapp commented:
The usual difficulty is in the scenario "user acquires the lock and then crashes/leaves for lunch/forgets/gets on the phone/...". Basically, that's why client-based cursors were introduced - no locks, but when writing back, verify that the row is still the same.
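
A hedged sketch of that optimistic pattern, using the rowversion column from the test table above (no locks are held while the user thinks; the write simply verifies the version):

declare @r rowversion
declare @a int
select @r=r, @a=a from test            -- no transaction open, no locks held afterwards
-- ... user edits the value, possibly for a long time ...
update test set a=@a+1 where r = @r    -- succeeds only if the row is unchanged
if @@rowcount = 0
    print 'Row was changed by another session - reload and retry'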

Marten Rune (SQL Expert/Infrastructure Architect) commented:
I'm confused
Doesn't this solve your gordian knot?
Quote. "Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A unique transaction sequence number identifies each transaction, and these unique numbers are recorded for each row version. The transaction works with the most recent row versions having a sequence number before the sequence number of the transaction. Newer row versions created after the transaction has begun are ignored by the transaction".
i.e. snapshot isolation; link: http://msdn.microsoft.com/en-us/library/tcbchxcb(v=vs.110).aspx
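
For what it's worth, a hedged sketch of opting in per session (this assumes ALLOW_SNAPSHOT_ISOLATION has already been enabled on the database, as shown further down in this thread):

set transaction isolation level snapshot
begin tran
-- reads see the committed data as of the transaction start; writers do not block this read
select a, r from test
-- an update here would fail with an update-conflict error if another session
-- changed the row in the meantime
commit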

Have you confused me with someone "off the charts suggesting snapshot in VMWare/Hyper-V"?

Regards Marten

Qlemo (author) commented:
I have automated processes running multi-threaded, and also interactive GUI stuff, but the latter isn't of any interest here - I would kill anybody who left a transaction open while waiting for user input ;-).

I don't trust Snapshot Isolation, because I see a lot of issues arising from it on Oracle and SQLBase (which tries to imitate it at high cost). A locked row is a locked row for me, with a clear "hands off" policy. But maybe I'm locked up in my very conservative relational knowledge from its real-life beginnings in the early 90s.

Vadim Rapp commented:
>  I would kill anybody leaving a transaction open while waiting for user input ;-).

Here's a way to solve it in a more peaceful way:

Resolving blocking conflicts in SQL Server in user-friendly manner

Marten Rune (SQL Expert/Infrastructure Architect) commented:
I'd suggest you are too conservative.

SQL 2014 is aiming at only this type of row versioning as its default isolation mode.
I'm actually unsure whether other isolation modes will even be choosable out of the box, as it seems right now.

This is a polished and enhanced development of the snapshot isolation level. You ought to try it in a test environment, put some pressure on the transactions, and see if your hardware is strong enough. If so, by all means implement it.

It's possible to undo it and revert back, so the decision is not carved in stone.

Regards Marten

Qlemo (author) commented:
Still, I have to support MSSQL 2000 :p. Yes, I know, that is completely off, and with some political pressure it won't be a restriction for much longer, but that is a lot of work. The Snapshot Isolation tests are on my wish list and work queue, but both are very, ehm, extensive.

Vadim Rapp commented:
>  MSSQL 2000 is completely off

I wonder if there's much you can't do on it that you really need to.

hspoulsen commented:
<<Still, I have to support MSSQL 2000

I know the feeling.
It is really difficult to get rid of.
But to get Read Committed Snapshot Isolation (so readers do not block writers and vice versa) you need SQL Server 2008 R2, I think.

This feature alone is worth the upgrade.
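
For completeness, a hedged sketch of switching it on (database name is a placeholder; the statement needs the database free of other active connections while the option is changed):

-- Sketch: makes the default READ COMMITTED level use row versioning instead of shared locks.
alter database [MyDb] set read_committed_snapshot on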

Marten Rune (SQL Expert/Infrastructure Architect) commented:
Well well
SQL 2008 R2 supports compatibility level 80 (that is, SQL 2000 syntax), and it can be combined with the snapshot isolation level. I just tried this:
USE [master]
GO
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 80
GO
ALTER DATABASE [test] SET ALLOW_SNAPSHOT_ISOLATION ON
GO
On a SQL 2008R2, and it works like a charm.

Unless the application itself does a:
select @@version
You ought to be good to go.

I recommend you try it on a Developer Edition SQL 2008 R2; it's like $50 and intended for development. This license is personal (you can run X number of Developer Edition instances) and not for production SQL, i.e. production, reports and so forth cannot use it.

Alternatively, you can install a 180-day trial and see if this solves your problem. There is no sense in staying on an unsupported version.

Regards Marten

Qlemo (author) commented:
It is the other way round: there are some MSSQL 2000 installations out there in the productive world on different sites. Our applications don't have issues with the MSSQL 2012 compatibility level, and getting rid of having to stay compatible with MSSQL 2000 is an idea I will certainly follow up on.

Scott Pletcher (Senior DBA) commented:
>>
I just tried this:

USE [master]
GO
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 80
>>

Woah ... you didn't really just change the master db to a different compat level, did you?  Not a good idea.  Create a dummy user db for that kind of testing.

Marten Rune (SQL Expert/Infrastructure Architect) commented:
Hi Scott!
the command
ALTER DATABASE [test] SET COMPATIBILITY_LEVEL = 80
changed the compatibility mode for the database [test] to 80.
The snapshot setting is a server-level option; that's why I put myself in the master database.

I stress, it affects the [test] database, NOT the master.

Regards Marten

Scott Pletcher (Senior DBA) commented:
Oops, sry, a little sleepy today.

Marten Rune (SQL Expert/Infrastructure Architect) commented:
Happens to the best of us; I just did, I reckon.

Regards Marten

Qlemo (author) commented:
Sorry for being late in closing this.