Handling concurrency in a web service

I have a web service which updates customer details. I am using Sybase 8 as the backend and .NET as the platform for developing the web service. While testing in a multiuser environment I got some locked-table errors.
I tried setting the isolation level database option.

For this, I have tried the following:
 when update/insert/delete actions are necessary,
 immediately after starting the transaction, I lock the tables involved (in a fixed sequence, to avoid deadlock);
 if a lock fails, I retry it after x milliseconds.
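The ordered-locking idea in the steps above can be sketched as follows (Python, with in-process locks standing in for table locks; the table names and lock objects are illustrative assumptions, not a Sybase API):

```python
import threading

# Hypothetical per-table locks; in the real system these would be
# database table locks, not in-process threading locks.
table_locks = {
    "customer": threading.Lock(),
    "address": threading.Lock(),
    "orders": threading.Lock(),
}

def update_tables(tables, do_work):
    """Acquire locks for all involved tables in one fixed, global
    order (alphabetical here), so two transactions can never hold
    locks in opposite orders -- the classic deadlock scenario."""
    ordered = sorted(tables)          # the fixed sequence
    acquired = []
    try:
        for name in ordered:
            table_locks[name].acquire()
            acquired.append(name)
        return do_work()
    finally:
        # Release in reverse acquisition order.
        for name in reversed(acquired):
            table_locks[name].release()

# Both of these acquire "customer" before "orders", regardless of
# the order the caller lists the tables in.
result = update_tables(["orders", "customer"], lambda: "updated")
```

The key point is that the sequence is global and fixed; deadlocks arise only when two transactions can acquire the same locks in different orders.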

Even then, I am getting a "serialization failure: deadlock detected" error.

It would be a great help if somebody could suggest a solution to handle this concurrency.

>but can you tell me how the explicit locking can be done,
I don't know Sybase, but I believe it's fairly close to SQL Server.
Locking is too big a topic to outline in a post like this. There are lots of issues around locking. When you insert or update a record, you may need to lock records in the data table and possibly several pages for each index that is potentially affected. It also depends on the isolation level required and the number of records affected.

>so that I could lock the table,
You really do not need to do this. The database takes care of this, and it does it very well.

Here are a few extracts from the overview page on locking in the SQL Server docs.
SQL Server locks are applied at various levels of granularity in the database. Locks can be acquired on rows, pages, keys, ranges of keys, indexes, tables, or databases. SQL Server dynamically determines the appropriate level at which to place locks for each Transact-SQL statement. The level at which locks are acquired can vary for different objects referenced by the same query; for example one table may be very small and have a table lock applied, while another, larger table may have row locks applied. The level at which locks are applied does not have to be specified by users and needs no configuration by administrators.
There are several lock modes: shared, update, exclusive, intent, and schema. The lock mode indicates the level of dependency the connection has on the locked object. SQL Server controls how the lock modes interact. For example, an exclusive lock cannot be obtained if other connections hold shared locks on the resource.
SQL Server has an algorithm to detect deadlocks, a condition where two connections have blocked each other. If an instance of SQL Server detects a deadlock, it will terminate one transaction, allowing the other to continue.
If you choose to lock the tables yourself, you will probably seriously reduce performance under heavy load, and the likelihood of contention will be much higher.
If you are currently experiencing locking problems and you are explicitly locking resources, then the solution may be to remove any explicit locking. In T-SQL at least, you can specify lock types (about a dozen of them) in queries. On the page detailing those, it states:
The SQL Server query optimizer automatically makes the correct determination. It is recommended that table-level locking hints be used to change the default locking behavior only when necessary. Disallowing a locking level can affect concurrency adversely.
I see this as more of a Sybase question. Handle the concurrency at the Sybase level or at the driver level.
Your web service shouldn't have to deal with that. It makes a call to the DB through ADO.NET (or whatever), and that layer, or some layer below it, should retry.
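A sketch of that retry-at-the-data-layer idea (Python; `DeadlockError`, the retry counts, and the delays are illustrative assumptions, not a real driver API):

```python
import time

class DeadlockError(Exception):
    """Stands in for the driver's 'deadlock detected' error."""

def run_with_retry(operation, retries=3, delay_ms=50):
    """Run a DB operation; on deadlock, back off and retry.
    The server rolls back the victim transaction when it detects a
    deadlock, so re-running is safe as long as the operation is a
    self-contained transaction."""
    for attempt in range(retries + 1):
        try:
            return operation()
        except DeadlockError:
            if attempt == retries:
                raise
            # Exponential backoff reduces the chance that the same
            # two transactions collide again immediately.
            time.sleep(delay_ms * (2 ** attempt) / 1000.0)

# Example: an operation that deadlocks twice, then succeeds.
calls = {"n": 0}
def flaky_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "committed"
```

With this in the data-access layer, the web service code itself never sees the deadlock error unless all retries are exhausted.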

You may want to look into why you are seeing locking problems at all. They should be pretty rare.
If you are locking the entire table to do an insert, then that may be the problem. Lock as little as you can, ideally only the record you are editing (I have no idea whether Sybase supports record-level or page-level locking).
You only need to lock if there is a possibility of a real clash, for example if it is realistically possible for two overlapping calls to update the same customer's details. Overlapping calls that update different customers should not require locking.
The DB may lock some records or pages whenever you modify data, e.g. in the pages that store the indexes, but that is seldom a significant problem.
Another factor in locking is to minimise the amount of time the locks are held for.
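To illustrate that "hold locks briefly" point (Python; the connection object and helper names are stand-ins for a real ADO.NET/Sybase transaction, purely to show the shape of the fix):

```python
import time

class FakeConnection:
    """Stand-in connection that records how long a transaction
    (and therefore its locks) stays open."""
    def __init__(self):
        self.lock_held_ms = 0.0
    def transaction(self, work):
        start = time.perf_counter()
        work()                      # locks are held only in here
        self.lock_held_ms = (time.perf_counter() - start) * 1000

def expensive_validation(payload):
    time.sleep(0.05)                # simulate slow non-DB work
    return payload

def write_row(payload):
    pass                            # simulate the actual UPDATE

def update_customer_slow(conn, payload):
    # Anti-pattern: expensive work inside the transaction keeps
    # the locks held for the whole duration.
    def work():
        validated = expensive_validation(payload)
        write_row(validated)
    conn.transaction(work)

def update_customer_fast(conn, payload):
    # Better: do the expensive work first, then open the
    # transaction only for the actual write.
    validated = expensive_validation(payload)
    conn.transaction(lambda: write_row(validated))
```

The fast version does the same work; it just keeps the lock-holding window as short as possible, which directly reduces contention.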
Also consider how critical it is that you avoid all clashes. If you get a dirty read (reading a partially edited record) once every million reads, is that a significant problem? In a system processing financial data it would be. Is it in your case? If not, then you might well decide not to do anything about it.
ranganathanpAuthor Commented:
Hi monosodiumg,

Thanks for your reply.
Sybase does not support row-level locking (for versions < 11.9), and yes, I am using Sybase 8.0,
so in this version only page-level and table-level locking is possible. The actual scenario is that BizTalk Server sends the requests, possibly less than 5 ms apart, so when two or more update requests are sent to the web service for the same customer, there is a possibility of the requests overlapping. I am aware that the DB may lock some records while updating, but can you tell me how explicit locking can be done, so that I could lock the table with some timeout while updating a record, without ending up in deadlocks?

ranganathanpAuthor Commented:
Any luck, .NET gurus?
I think I've provided enough to warrant the points on this one in the absence of any statement to the contrary from the asker.