
Solved
{Urgent} What could be the problems with non-repeatable reads, the second lost update problem, and phantom reads?

Posted on 2008-11-02
Last Modified: 2012-05-05
Non-repeatable read --- I understood it to mean that a transaction reads a row twice and sees a different state each time. For example, another transaction may have written to the row, and committed, between the two reads. But why is this a problem? Surely the most recently updated data is always the best to work with.

Second lost update problem --- two concurrent transactions both read a row; one writes to it and commits, and then the second writes to it and commits. The changes made by the first writer are lost. But why do the first writer's changes matter? They would be overwritten at some point anyway.

Phantom read --- a transaction executes a query twice, and the second result set includes rows that weren't visible in the first result set. Same question as for the non-repeatable read.

Please brief me with the use cases, if any.
Question by:sukh54
6 Comments
 
LVL 27

Expert Comment

by:sujith80
ID: 22865472
>> lost updates
This needs careful sequencing of your programs.

>> why does it matter the changes made by first writer , anyway those will be lost after some point of time.
Changes are not always lost at some point in time.

Let's look at this scenario: you have a commission column, and two programs (transactions) updating the commission value. One program updates the commission based on the existing commission value, and the other updates all commission values.

Program 1 rule: if the existing commission is greater than 1000, add 500 to the commission; otherwise add 10.
Program 2 rule: add 100 to all commission values.

I have one record with a commission value of 950.

Depending on the order in which the transactions run, either of the following cases can happen.

Case 1
Transaction 1 checks the commission value; 950 is not greater than 1000, so it adds 10 and the commission becomes 960.
Transaction 2 adds 100 to the commission; the final commission is 1060.

Case 2
Transaction 2 adds 100 and the commission becomes 1050.
Transaction 1 sees that the commission is greater than 1000, so it adds 500; the final commission becomes 1550.

You can see that the changes are not lost, but the two orderings produce totally different results.

Now let's see the above scenario with the two programs running concurrently and no locking.
Program 1 and program 2 run concurrently; both read the commission value as 950.
Program 1 updates it to 960 (within its transaction, not yet committed; the other transaction does not see it).
Program 2 updates it to 1050 (within its transaction, not yet committed; the other transaction does not see it).
Whichever program commits second determines the final value of the commission in the table. Again, inconsistent results!

The problem lies in the sequencing of the application logic. The database does not know what the application does; it just executes the commands you instruct it to run.

If you allow these read anomalies, the result could be anything; you cannot predict it.
If, on the contrary, you have concurrency control (read consistent data), you can sequence your programs accordingly, so that the end results are always consistent.
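The three outcomes described above can be reproduced in a hypothetical sketch (the function names are mine, not from the thread): the two serial orderings give different but predictable results, while the lost-update interleaving silently discards one write.

```python
# Simulation of the commission scenario: two "transactions" update a
# shared commission value, and the outcome depends on the interleaving.

def program1(commission):
    # Rule: if existing commission > 1000, add 500; otherwise add 10.
    return commission + 500 if commission > 1000 else commission + 10

def program2(commission):
    # Rule: add 100 to the commission.
    return commission + 100

start = 950

# Case 1: program 1 runs first, then program 2: 950 -> 960 -> 1060.
case1 = program2(program1(start))

# Case 2: program 2 runs first, then program 1: 950 -> 1050 -> 1550.
case2 = program1(program2(start))

# Lost update: both programs read 950; whichever commits last wins,
# so program 1's write (960) is overwritten by program 2's 1050.
last_writer_wins = program2(start)

print(case1, case2, last_writer_wins)  # 1060 1550 1050
```

Note that 1050 matches neither serial outcome, which is exactly the inconsistency the comment describes.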
 

Author Comment

by:sukh54
ID: 22874224
I think I got confused... but I believe these are the problems that arise from schedules that are not serializable. So is it that the given schedules are not serializable? (I think serializable means that the result of the transactions, when executed interleaved, should equal the result of "some" schedule that runs the transactions in order.)
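That definition can be checked directly against the commission example from the previous comment. As a hypothetical sketch (function names are mine): a schedule is serializable only if its result matches some serial execution order, and the lost-update interleaving fails that test.

```python
# Serializability check for the commission example (start value 950).

def t1(c):
    # If commission > 1000 add 500, else add 10.
    return c + 500 if c > 1000 else c + 10

def t2(c):
    # Add 100 to the commission.
    return c + 100

# Results of the two possible serial schedules: t1 then t2, or t2 then t1.
serial_results = {t2(t1(950)), t1(t2(950))}  # {1060, 1550}

# Interleaved (lost-update) schedule: both read 950, t2 commits last.
interleaved = t2(950)  # 1050

# 1050 matches no serial schedule, so this schedule is not serializable.
print(interleaved in serial_results)  # False
```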
 
LVL 48

Expert Comment

by:schwertner
ID: 22877391
Non-repeatable read > it is a problem because the application will get inconsistent data, and the decisions made on it will be wrong.
In Oracle it is easy to avoid by changing the reader's status to writer:
SELECT ... FOR UPDATE;
If another application has also selected the same rows FOR UPDATE in order to change them, the later one will find them locked.


Second lost update problem > also a very bad thing! The data will be inconsistent. Say 140 tickets are available. Process A sells 20 tickets, so 120 remain. Process B, which also read 140, sells 50 tickets and records 90 remaining. 90 is recorded, but this is wrong: in fact the count should be 70. The same cure as above ---> lock the rows!
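This ticket example can be sketched step by step (hypothetical variable names, mine): without locking, B's stale read of 140 causes A's sale to vanish; with row locking, the two sales apply in sequence.

```python
# Lost update on a ticket counter: 140 tickets, A sells 20, B sells 50.

available = 140

# Without locking: both processes read the same starting value,
# then each writes back its own computed remainder.
read_a = available            # A reads 140
read_b = available            # B reads 140, before A commits
available = read_a - 20       # A commits: 120
available = read_b - 50       # B commits: 90 -- A's sale is lost

lost_update_result = available  # 90, but the correct count is 70

# With row locking: B cannot read until A commits, so the
# updates are applied one after the other.
available = 140
available -= 20               # A holds the lock, sells 20 -> 120
available -= 50               # B acquires the lock, sells 50 -> 70

print(lost_update_result, available)  # 90 70
```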

 

Author Comment

by:sukh54
ID: 22883398
schwertner ,
In the second lost update problem, is it that the lock used to avoid the problem is held by A until A commits, so that B cannot access the row and, once it can, B reads 120 rather than 140?
So the lock should apply to the READ as well, right?
 
LVL 48

Assisted Solution

by:schwertner
schwertner earned 200 total points
ID: 22884072
B should explicitly state that it reads the data for update; in that case it will get a message that the rows are locked.
In Oracle, writers do not disturb readers, but they do prevent other writers from accessing the rows that will be or are being updated, until the rows are committed.
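As a hypothetical sketch of this locking discipline (the names are mine; a `threading.Lock` stands in for the row lock taken by `SELECT ... FOR UPDATE`): each seller must acquire the lock before reading, so the read-modify-write becomes atomic and no update is lost.

```python
# A threading.Lock plays the role of the row lock: acquire it before
# reading, hold it through the write, release it on "commit".
import threading

tickets = {"available": 140}
row_lock = threading.Lock()

def sell(n):
    # Acquire the "row lock" first, like SELECT ... FOR UPDATE.
    with row_lock:
        current = tickets["available"]      # read while holding the lock
        tickets["available"] = current - n  # write, then release ("commit")

a = threading.Thread(target=sell, args=(20,))
b = threading.Thread(target=sell, args=(50,))
a.start(); b.start()
a.join(); b.join()

print(tickets["available"])  # 70 -- neither sale is lost
```

Whichever thread acquires the lock second is forced to wait, so it always sees the other's committed value, which is exactly the behavior the asker describes above.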
 
LVL 3

Accepted Solution

by:
nag9s2007 earned 300 total points
ID: 22884280

Question has a verified solution.

