  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 433

Best isolation level for concurrent instances of a process

Hi,

I have a process that runs from a client application against an SQL Server database.
This application reads data from TableEntries, performs some checks, runs some stored procedures, and after all that decides whether the entry is valid or not. Then it deletes the record from TableEntries and moves it to either TableValids or TableInvalids.
TableEntries is being filled by an external process: Oracle, through heterogeneous services. They run in READ COMMITTED mode, the default.
They fill this table all day, at any given moment; they can send 100, 500, 10000 or 20000 records at a time.
My process sleeps and wakes up every minute to process this data. It doesn't run again until it finishes its current task, so it is never running more than once at any given moment. All this has worked fine for months.

But now I've been told that this process must be able to run several instances at the same time, and I worry that one of my instances could re-read data that is being processed by another instance, causing invalid results.

Which isolation level should I use to:
Read 1 record from a table.
Run stored procedures and client processing.
Delete the record from that table and save it to one of two other tables.

All this without any interference between the application instances?

Thanks in advance,




fischermxAsked:
1 Solution
 
LowfatspreadCommented:
you could always allow each instance to process only a distinct range of the data (if there is a reasonable key included
with the data)

e.g. SELECT TOP 1 *
       FROM TableEntries
        WHERE [key] % @NumberOfInstances = @InstanceNumber   -- instance numbers 0, 1, ...

hth
 
sigmaconCommented:
You need to run each instance wrapped in ONE transaction, at LEAST at the isolation level "Repeatable Read". You may want to mark a row that is being processed by one instance at the beginning of a transaction, so that another instance can select a different row - or use Lowfatspread's suggestion (which I just saw) to prevent concurrency conflicts. If you run at isolation level "Serializable" you may block the instances against each other and not really gain anything from having more than one instance of the same process. It is also important to watch what kind of locks your process acquires, and to see whether you can rewrite it to acquire only page-level or even only row-level locks.
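A minimal sketch of the row-marking idea, assuming a hypothetical ProcessedBy column added to TableEntries and an @InstanceId parameter (both illustrative, not part of the original schema):

```sql
-- Claim one unclaimed row for this instance, then work only on claimed rows.
BEGIN TRANSACTION;

UPDATE TOP (1) TableEntries
SET ProcessedBy = @InstanceId
WHERE ProcessedBy IS NULL;

-- ... run the checks and stored procedures against the row
--     WHERE ProcessedBy = @InstanceId ...

COMMIT TRANSACTION;
```

Committing the claim quickly keeps the mark visible to the other instances without holding locks for the whole processing run.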
 
Scott PletcherSenior DBACommented:

You can still run the process in READ COMMITTED mode; just use a transaction and hold an exclusive lock.  For example:

BEGIN TRANSACTION

SELECT TOP 1 ...
FROM yourTable WITH (XLOCK)

-- run stored procedures and client processing

-- delete the record from TableEntries and save it to one of the other two tables

COMMIT TRANSACTION
 
Scott PletcherSenior DBACommented:
Note: The XLOCK should apply only to the single row, not a page or table.  If you want to be absolutely sure/clear about that, you can do this:

WITH (XLOCK, ROWLOCK)
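Put together with the earlier outline, the whole claim-and-process transaction might look like this (a sketch; the EntryID key column and the insert targets are assumptions about the schema):

```sql
BEGIN TRANSACTION;

DECLARE @EntryID int;

-- Take a row-level exclusive lock on just the selected row
SELECT TOP 1 @EntryID = EntryID
FROM TableEntries WITH (XLOCK, ROWLOCK);

-- ... run stored procedures and client-side checks for @EntryID ...

DELETE FROM TableEntries
WHERE EntryID = @EntryID;
-- INSERT INTO TableValids ... or TableInvalids ... as decided above

COMMIT TRANSACTION;
```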
 
Brendt HessSenior DBACommented:
I would actually take this a different way, if possible.

Looking at the incoming data, is there uniquely identifying information on each row?  If so, then I would:

(1) Create a "Processing" table
    (a) This would have the same uniquely identifying data fields as the incoming data table, plus one field for the spid
    (b) Create a unique index on the Processing table (using the same fields as the original table), and a spid index

(2) When processing, start by claiming a batch of rows:

    INSERT INTO Processing ({Key_Field(s)}, spid)
    SELECT TOP 100 {Key_Field(s)}, @@SPID
    FROM Incoming
    LEFT JOIN Processing ON {Key Fields}
    WHERE Processing.spid IS NULL   -- ensures no attempt to insert duplicates

Now, process based on the information in the Processing table, deleting the Processing and Incoming data as needed.  When done, loop until the Incoming table is empty.  This can be much faster than transactions on individually processed records.
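A sketch of steps (1) and (2), assuming for illustration that the incoming table's unique key is a single column EntryID:

```sql
-- (1) The claim table: the Incoming key fields plus the claiming spid
CREATE TABLE Processing (
    EntryID int NOT NULL,
    spid    int NOT NULL,
    CONSTRAINT UQ_Processing_EntryID UNIQUE (EntryID)
);
CREATE INDEX IX_Processing_spid ON Processing (spid);

-- (2) Claim up to 100 rows that no instance has claimed yet
INSERT INTO Processing (EntryID, spid)
SELECT TOP 100 i.EntryID, @@SPID
FROM Incoming i
LEFT JOIN Processing p ON p.EntryID = i.EntryID
WHERE p.spid IS NULL;

-- Process the rows joined to Processing WHERE spid = @@SPID, delete them
-- from both tables, and loop until Incoming is empty.
```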
 
fischermxAuthor Commented:
The main challenge is that the Oracle process was blocking TableEntries a lot.
So, I had to do this:
     SELECT TOP 1 * FROM TableEntries WITH (READPAST)

This is because I was getting timeouts in the client. I had the timeout set to about 30 seconds, and even when I raised it to 3 minutes I was still getting timeouts, because the Oracle insertion batch was sometimes taking up to 15 minutes.

Sigmacon: I also thought about serializable mode, but I doubt it would bring any real gain. The server is actually low on usage, so it could easily process more rows; that's why I was told to change this.

Bhess1: that was my original design for the process, but we never implemented it because it added some complications we didn't need at the time.

ScottPletcher:
I'm going to test whether XLOCK, ROWLOCK are compatible with (READPAST) and let you know.
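For reference, the combination being tested would look something like this (a sketch; READPAST makes the SELECT skip rows another instance has locked instead of waiting on them):

```sql
BEGIN TRANSACTION;

-- Lock one available row exclusively; skip rows locked elsewhere
SELECT TOP 1 *
FROM TableEntries WITH (XLOCK, ROWLOCK, READPAST);

-- ... process, then delete and move the row as before ...

COMMIT TRANSACTION;
```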

Thanks to all.
Regards,
