Isolation levels at a glance - Part 1/2

Mike Eghtebas, Database and Application Developer
A database typically has multiple concurrent users making data requests or edits. Without isolation, these activities often interfere with one another. This is where isolation levels, used together with transactions, help reduce user lockouts and unexpected data anomalies.

Where does isolation fit in T-SQL?


Figure 1: Isolation is a property of a transaction.

  • Atomicity: “An atomic transaction is either fully completed, or is not begun at all.” Imagine a user has coded a transaction to sell a car: the code deducts the car inventory by one (to make sure one is in stock) before checking the customer’s credit to either commit or roll back the transaction. All of these steps together form one atomic transaction; a Try/Catch block, generally included with the transaction, rolls it back in the event of an unexpected error.
  • Consistency: If several sessions work simultaneously on the same data, the results are always correct for each transaction. Query results are consistent, and simultaneous reads or edits do not interfere with one another.
  • Isolation: When multiple users work with the same data concurrently, deadlocks and data anomalies are likely. Isolation is a set of techniques, used with transactions, to avoid deadlocks and manage how much bad data is acceptable.
  • Durability: "A transaction is durable in that once it has been successfully completed, all of the changes it made to the system are permanent. There are safeguards that will prevent the loss of information, even in the case of system failure. By logging the steps that the transaction performs, the state of the system can be recreated even if the hardware itself has failed. The concept of durability allows the developer to know that a completed transaction is a permanent part of the system, regardless of what happens to the system later on."


First, it’s important to understand what a transaction is. By definition, a transaction is a block of code, like those shown below, that performs a set of tasks in a logical order against a database, either manually [figure 2] or programmatically [figure 4].

A transaction must exhibit the ACID properties (atomicity, consistency, isolation, and durability). Although it is technically possible to perform a transaction manually, doing so is incredibly cumbersome.

The following demo of a manual transaction gives us a clear sense of how a data anomaly can occur in a transaction. Because this is a manual transaction, the user must mentally maintain the ACID properties. The focus of this article is on isolation; transactions themselves are covered in more depth elsewhere.


Manual Transaction: In the image below we have two users who work simultaneously with the same data in the Cars table. User 1 is recording a transaction (the sale of a car to a buyer) manually. Another user (User 2) is just checking the car inventory level. Here is the sequence of the users' actions:

Figure 2: Manual transactions causing data anomalies.


Figure 3: Table Cars to demo examples in this article.
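Figure 3 is shown as an image. For readers who want to follow along, here is a sketch of a setup script that matches the data used in this article's examples; the column names (Model, Qty) are assumptions inferred from the figures, not a copy of them.

```sql
-- Sketch of the Cars demo table; column names are assumed from the figures.
CREATE TABLE Cars (
    CarId INT PRIMARY KEY,
    Model NVARCHAR(50) NOT NULL,
    Qty   INT NOT NULL          -- inventory on hand
);

INSERT INTO Cars (CarId, Model, Qty)
VALUES (1, 'BMW', 15),
       (3, 'VW',  30);
```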


While User 1 was working on the Cars table, User 2 was simultaneously checking the car inventory level. As you can see from the outputs in figure 2 above, the two users get different results for the inventory level in the same table. User 2 gets 14 instead of 15. But because we have a bird's-eye view of the proceedings, we know very well that 14 is bad data (this is known as a dirty read).

To check this for yourself, type the code for sessions A and B shown above and execute them, A first and then B, within 4 seconds of each other. Besides the dirty read we see here, there are three other data anomalies (Lost Update, Nonrepeatable Read, and Phantom Read). Later, Table 1 shows how to control these data anomalies with a proper choice among the five isolation types also listed in that table.
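Since figure 2 is an image, the two sessions can be sketched roughly as follows. This is a reconstruction, not a copy of the figure: the exact statements and timings are assumptions. Note that session B must read at Read Uncommitted for the dirty read to be observable; at the default level its SELECT would simply wait for session A.

```sql
-- Session A (User 1): deducts inventory, then the sale falls through.
BEGIN TRANSACTION;
    UPDATE Cars SET Qty = Qty - 1 WHERE CarId = 1;  -- Qty now 14, uncommitted
    WAITFOR DELAY '00:00:05';                       -- credit check in progress
ROLLBACK TRANSACTION;                               -- credit denied; Qty back to 15
```

```sql
-- Session B (User 2): runs within the 5-second window.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Qty FROM Cars WHERE CarId = 1;  -- can return 14: a dirty read
```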

Programmatic Transaction: A programmatic transaction starts with the "BEGIN TRANSACTION" command and ends with either "COMMIT TRANSACTION" (if the credit was approved, for example) or "ROLLBACK TRANSACTION" (if the credit was denied). Programmatic transactions usually use Try/Catch error-handling blocks to roll back the transaction in the event of some other error. For brevity, the Try/Catch block will not be shown in the later examples.


Figure 4: Programmatic Transaction with Try/Catch error handling blocks.
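Figure 4 is an image; the pattern it illustrates can be sketched as below. The credit-check step is represented by a comment, since the article does not show its implementation.

```sql
BEGIN TRY
    BEGIN TRANSACTION;
        UPDATE Cars SET Qty = Qty - 1 WHERE CarId = 1;
        -- A credit check would run here; on failure it raises an error,
        -- which transfers control to the CATCH block.
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- undo the partial work
    THROW;                     -- re-raise the original error to the caller
END CATCH;
```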


Data Anomalies and Isolation Types: Table 1 below shows four data anomalies (Dirty Read, Lost Update, Nonrepeatable Read, and Phantom Read) in the top row, along with five isolation types (Read Uncommitted, Read Committed, Repeatable Read, Snapshot, and Serializable). Depending on our needs, if the occurrence of Dirty Read is acceptable, we can apply the Read Uncommitted isolation type. But if we need to avoid Dirty Read, we need to use Read Committed. The other three (Repeatable Read, Snapshot, and Serializable) could do the same, but with higher resource use, as indicated in the "Resource Use" column of Table 1.

So far, we have discussed the Dirty Read anomaly. Here we will first cover all the data anomalies to illustrate the nature of each, and then discuss means of keeping them under control via an appropriate isolation type.

Table 1: Isolation Types and Data-Related Problems.

Data Anomaly 1 of 4: Dirty Read 
Below, a dirty-read case using a manual transaction is illustrated in figure 5, followed by a solution in figure 6 using an automated transaction with an isolation technique that controls this data anomaly.


Figure 5: Manual Transaction, User 2 at session B gets a dirty read, 14 instead of 15.


There are three types of Dirty Read:
1. A session can read data that was never committed, as illustrated above.
2. A session can miss some rows while it works simultaneously with another session that is updating the same data.
3. A session can read some rows twice while it works simultaneously with another session that is updating the same data.

The "another session" referred to in items 2 and 3 above is editing the same data; as a result, the database engine may move data rows around as it sees fit. Consequently, the reading session may never see some rows (item 2) or may re-read a row it has already read (item 3).


Note: Figure 5 here is identical to figure 2. It has been repeated here to maintain continuity. For the sequence of the users' actions for figure 5, see "Manual Transaction" heading above.

How to Control Dirty Read: Session A may carry out its activities however it likes, with or without isolation, in a manual or automated transaction. We only want to control the dirty read in session B.

If we can tolerate Dirty Read in session B, we can apply the Read Uncommitted isolation level; otherwise, we need to apply Read Committed, as shown at line 6 of session B.
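Since figure 6 is an image, session B's fix can be sketched as follows (a reconstruction, with statements assumed from the surrounding text):

```sql
-- Session B: at READ COMMITTED (the default level), the SELECT waits
-- until session A commits or rolls back, so it never sees the
-- uncommitted value 14.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT Qty FROM Cars WHERE CarId = 1;  -- returns 15 after A rolls back
```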


Figure 6: Dirty Read is controlled in session B using the Read Committed isolation type.

Data Anomaly 2 of 4: Lost Update

In the image below, User 1 and User 2, in sessions A and B, are simultaneously selling cars. User 1 sells one car and, within a second of that, User 2 sells 5 cars.

User 1 Activity:  

At line 5, Time 00:00:00      User 1 stores the car inventory in the @Qty variable for CarId = 1.
At line 9, Time 00:00:00      A credit check causes a 5-second execution delay.
At line 11, Time 00:00:05    Credit goes through and the Cars table gets updated to @Qty - 1.
At line 15, Time 00:00:05    User 1 double-checks the current car inventory (14 cars left).

User 2 Activity:

At line 6, Time 00:00:01      User 2 stores the car inventory in the @Qty variable for CarId = 1.
At line 10, Time 00:00:01    Credit is checked right away, and the Cars table gets updated to @Qty - 5.
At line 15, Time 00:00:01    User 2 double-checks the current car inventory (10 cars left).


Figure 7: Lost Update with no isolation type applied.

Although the total sold appears to be 6 (1 sold by User 1 + 5 sold by User 2), session B's update never makes it into the Cars table: the update is lost. This means that after these two transactions, you will see 14 cars left. As soon as session A starts, it locks out session B; session B spins its wheels and falsely reports Qty = 10 without actually being able to update the Cars table.
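The two sessions in figure 7 can be sketched like this (a reconstruction; the statements are assumed from the timelines above). The anomaly comes from each session computing its new quantity from a stale read held in @Qty:

```sql
-- Session A (User 1): reads 15, then writes 15 - 1 = 14 after the delay.
DECLARE @Qty INT;
BEGIN TRANSACTION;
    SELECT @Qty = Qty FROM Cars WHERE CarId = 1;     -- reads 15
    WAITFOR DELAY '00:00:05';                        -- credit check
    UPDATE Cars SET Qty = @Qty - 1 WHERE CarId = 1;  -- writes 14
COMMIT TRANSACTION;
```

```sql
-- Session B (User 2), started one second later: also reads 15,
-- writes 15 - 5 = 10. Whichever write lands last overwrites the
-- other; the earlier update is lost.
DECLARE @Qty INT;
BEGIN TRANSACTION;
    SELECT @Qty = Qty FROM Cars WHERE CarId = 1;
    UPDATE Cars SET Qty = @Qty - 5 WHERE CarId = 1;
COMMIT TRANSACTION;
```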

How to Control Lost Update: Here, unlike the previous example, we need to include "Set Transaction Isolation Level Repeatable Read" in both sessions. Session A, which in our example runs first, locks the record(s) in question in advance to ensure repeatable reads later on. When session B is executed right after session A, session B produces an error stating that the record in question is locked. This is how we can guarantee the repeatability of the data for session A.
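A sketch of the shared prologue (an assumption based on the description above; figure 8 shows the actual code as an image):

```sql
-- Both sessions begin the same way. The shared lock taken by the
-- SELECT is now held until COMMIT, so a competing session's UPDATE
-- must wait (or, if both sessions reach their UPDATEs, one session
-- is terminated with a deadlock error) instead of silently losing
-- an update.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
DECLARE @Qty INT;
BEGIN TRANSACTION;
    SELECT @Qty = Qty FROM Cars WHERE CarId = 1;
    WAITFOR DELAY '00:00:05';                        -- credit check
    UPDATE Cars SET Qty = @Qty - 1 WHERE CarId = 1;
COMMIT TRANSACTION;
```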


Figure 8: Lost Update is controlled using Repeatable Read isolation type in both sessions.

Note: Repeatable Read is an isolation type whereas Nonrepeatable Read is a data anomaly.


Data Anomaly 3 of 4: Nonrepeatable Read
Here, User 1 is puzzled at getting different inventory numbers for CarId = 1 although there was no apparent change to the data. But it is easy to guess that some other user (such as User 2) has made changes to the data.


Figure 9: Nonrepeatable Read without isolation type


User 1 Activity:  

At line 5, Time 00:00:00      User 1 checks the car inventory for CarId = 1 (shows 15 cars).
At line 8, Time 00:00:00      User 1 performs some other tasks for 5 seconds.
At line 11, Time 00:00:05    User 1 checks the inventory again for CarId = 1 (shows 14 cars on the second read).

User 2 Activity:

At line 5, Time 00:00:01      User 2 sells a car for CarId = 1.
At line 7, Time 00:00:01      User 2 checks the car inventory for CarId = 1 (14 cars left).

How to Control Nonrepeatable Read: If it is desired to keep the data readouts consistent until User 1 is done with his/her activities, then lines 3, 4, and 11 must be added in session A. With these lines in place, User 2 is locked out until the 5th second; then it displays the expected result (Qty = 14), and there is no Lost Update in session B.
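Session A's fix can be sketched as follows (a reconstruction; figure 10 shows the actual code as an image):

```sql
-- Session A (User 1): the row stays share-locked between the two
-- reads, so both SELECTs return the same value.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    SELECT Qty FROM Cars WHERE CarId = 1;  -- 15
    WAITFOR DELAY '00:00:05';              -- other work
    SELECT Qty FROM Cars WHERE CarId = 1;  -- still 15: repeatable
COMMIT TRANSACTION;
-- Session B's UPDATE waits for this COMMIT, then applies (Qty = 14).
```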


Figure 10: Nonrepeatable Read with Repeatable Read isolation type.


Here again, "Set Transaction Isolation Level Snapshot" or "Set Transaction Isolation Level Serializable" will do the same, at the cost of some extra resources. But if the lack of repeatable reads is acceptable, you may apply Read Uncommitted or Read Committed instead.

Data Anomaly 4 of 4: Phantom Read
User 1 checks the car inventory twice (lines 6 and 10) without changing data in the Cars table, but on the second try a new car row (Cadillac/7) shows up. This is known as a Phantom Read.

Figure 11: Phantom Read without isolation type.

User 1 Activity:  

At line 5, Time 00:00:00      User 1 checks the car inventory for CarID <= 3 (shows 15-BMW and 30-VW).
At line 8, Time 00:00:00      User 1 performs some other tasks for 5 seconds without changing the Cars table.
At line 11, Time 00:00:05    User 1 checks the car inventory again for CarID <= 3 (shows 15-BMW, 30-VW, and 7-Cadillac).

User 2 Activity:

At line 5, Time 00:00:01      User 2 inserts a new car, Cadillac, with CarId = 2.
At line 7, Time 00:00:01      User 2 checks the car inventory for CarId <= 3 (shows 15-BMW, 30-VW, and 7-Cadillac).

How to Control Phantom Read: We know why this is taking place: User 2 has in fact added the new row. But if it is desired that a Phantom Read like this not show up in session A, then we need to apply the Snapshot or Serializable isolation type to session A.
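With Serializable, for example, session A can be sketched as follows (a reconstruction; figure 12 shows the Snapshot variant as an image):

```sql
-- Session A (User 1): SERIALIZABLE takes a key-range lock covering
-- CarID <= 3, so no new row can be inserted into that range until
-- the transaction ends.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    SELECT CarId, Model, Qty FROM Cars WHERE CarId <= 3;  -- 2 rows
    WAITFOR DELAY '00:00:05';                             -- other work
    SELECT CarId, Model, Qty FROM Cars WHERE CarId <= 3;  -- same 2 rows: no phantom
COMMIT TRANSACTION;
-- Session B's INSERT of CarId = 2 waits for this COMMIT;
-- an INSERT of CarId = 4 would not, being outside the locked range.
```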
00Example_P_Snapshot.pngFigure 12: Phantom Read with Snapshot isolation type.

A range lock is applied to CarID <= 3. If User 2 had used the following code to insert, User 1 would not have experienced any Phantom Read problem.

INSERT INTO Cars VALUES (4, 'Cadillac', 7);


Because CarID = 4 is outside of CarID <= 3, User 2 in session B would not have experienced blocking either.

Note: To apply the Snapshot isolation type, you first need to enable it by running:
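The code window below this note did not survive extraction; the standard statements are as follows (the database name CarsDb is a placeholder; substitute your own):

```sql
-- Run once, at the database level (CarsDb is a placeholder name).
ALTER DATABASE CarsDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, in each session that should use it:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```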




Table 2: .....

Note: Bear in mind that if we don't apply any isolation type to our code, it is the same as applying the Read Committed isolation type, since that is the default.


Table 3: Isolation Level Matrix ...

The in-memory aspect of databases is a technology that improves year after year; isolation techniques need to be adjusted to keep pace. So, consider the techniques described in this article a work in progress, as they may be modified in the future. What you have above is first a description of the data anomalies that occur with concurrency, and then demos of the isolation techniques that help maintain an acceptable level of control over them. In part two of this article, we will discuss how these isolation techniques use row versioning, row locks, and so on, to do their work.

