Transaction Problem


I have a big problem!

I have a SQL Server 7 database with a table of 40 columns and 150,000 rows.

I receive very fast streaming data, 2,000 messages a second, and I need to update this table with the messages.

I managed 180 updates a second without transactions, and 280 updates a second with BEGIN and COMMIT TRANSACTION every 3 seconds.

I tried updating from 10 clients (each client updating 200 rows a second) and there was no big performance change.
In Form_Load I executed BEGIN TRANSACTION, and in a 3-second timer I executed COMMIT TRANSACTION followed on the next line by another BEGIN TRANSACTION.

When I tried to update from the 10 clients with BEGIN and COMMIT TRANSACTION, all of the clients (except one) got this error:
"2147467259 - The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION." (But there is one!)

And after 15 seconds I get this error from the clients:
"-2147467259 - Transaction (Process ID 57) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."

Please help me.
Guy Hengel [angelIII / a3] commented:
How do you submit the transaction? I guess you are using ADO from VB or C. My question is whether you use the BeginTrans and CommitTrans methods of the ADO Connection object, or execute the COMMIT statement explicitly?

JOKER (Author) commented:
Hello angelIII,

I am using VB 6 and ADO, and I execute the BEGIN and COMMIT TRANSACTION like this:
    Connection.Execute ("COMMIT TRAN")
    Connection.Execute ("BEGIN TRAN")

Guy Hengel [angelIII / a3] commented:
In fact, in VB + ADO, you need either to use the Connection.BeginTrans and Connection.CommitTrans methods instead, or execute the whole transaction in one Execute call:

Dim strSQL As String
strSQL = "BEGIN TRAN" & vbCrLf
strSQL = strSQL & " UPDATE your table ... " & vbCrLf
strSQL = strSQL & " UPDATE your table ... " & vbCrLf
strSQL = strSQL & "COMMIT TRAN" & vbCrLf

I guess that the second way will also improve your overall speed...

Guy Hengel [angelIII / a3] commented:
I obviously forgot the Connection.Execute strSQL line at the end of the second example...
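
For reference, a minimal sketch of the first approach (the ADO transaction methods) with a 3-second commit timer; the connection string, table, and routine names here are hypothetical placeholders, not from this thread:

' Assumes a project reference to Microsoft ActiveX Data Objects 2.x.
Private Connection As ADODB.Connection

Private Sub Form_Load()
    Set Connection = New ADODB.Connection
    Connection.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
                    "Initial Catalog=MyDb;Integrated Security=SSPI"
    Connection.BeginTrans               ' open the first batch
End Sub

Private Sub ProcessMessage(ByVal ID As Long, ByVal Val As String)
    ' One incoming message = one UPDATE inside the open transaction
    Connection.Execute "UPDATE MyTable SET Col1 = '" & Replace(Val, "'", "''") & _
                       "' WHERE ID = " & ID, , adExecuteNoRecords
End Sub

Private Sub Timer1_Timer()              ' Interval = 3000 ms
    Connection.CommitTrans              ' flush the current batch
    Connection.BeginTrans               ' start the next one
End Sub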

simonsabin commented:
The fastest way I have found is to commit every 2,000 rows. The quickest method to insert the data would be to use a stored procedure. The other option would be to create a flat file and BULK INSERT it in.
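
For the stored-procedure route, a hedged sketch using an ADODB.Command, so the update plan is compiled once and reused per message; the procedure name and parameters are hypothetical:

' Hypothetical procedure: usp_UpdateMessage(@ID int, @Val varchar(50))
Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
With cmd
    Set .ActiveConnection = Connection   ' reuse the already-open connection
    .CommandType = adCmdStoredProc
    .CommandText = "usp_UpdateMessage"
    .Parameters.Append .CreateParameter("@ID", adInteger, adParamInput, , 42)
    .Parameters.Append .CreateParameter("@Val", adVarChar, adParamInput, 50, "abc")
    .Execute , , adExecuteNoRecords
End With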
JOKER (Author) commented:
Hi simonsabin,

What do you mean by BULK INSERT?
(All the answers gave me no more than 400 messages a second; I need 3,000...)


wqw commented:
Here is my experience:


BULK INSERT datingv1r0.dbo.main_dev
FROM '\\wqw\d$\temp\main_dev.bcp'


gives me:

(50000 row(s) affected)
1900-01-01 00:00:01.893

(1 row(s) affected)

i.e. over 25k rows per second (over LAN)

Table main_dev is a 15-column table (mixed varchar, int, bit, and GUID columns) and DOES NOT have ANY indexes (for the test). d:\temp\main_dev.bcp was prepared this way:

d:\>bcp "datingv1r0.dbo.main_Dev" out d:\temp\main_dev.bcp -S pgbr -T -n -C 1251

which gives me:

50000 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.): total     3675

i.e. 50k rows in less than 3.7 secs (over LAN)

File main_dev.bcp is 6'260'964 bytes in size, in 'native' bulk copy format. Also, mind that I'm using code page 1251 (Cyrillic) for the varchar fields, which might not be your case.


P.S. Your results could be improved if you drop (or at least reduce) the indexes on your table. You might also try dropping the clustered index and leaving only nonclustered indexes built on the heap the table data will then be.
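
A sketch of that suggestion, assuming a single nonclustered index; the index and column names are placeholders:

' Drop the index, bulk load, then rebuild (IX_main_dev / SomeKeyCol are placeholders).
Connection.Execute "DROP INDEX main_dev.IX_main_dev"
Connection.Execute "BULK INSERT datingv1r0.dbo.main_dev " & _
                   "FROM '\\wqw\d$\temp\main_dev.bcp'"
Connection.Execute "CREATE INDEX IX_main_dev ON main_dev (SomeKeyCol)"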
JOKER (Author) commented:
Hi wqw,

Thanks for this answer, BUT I do 95% updates and 5% inserts, so I need a fast UPDATE.

Thank you
simonsabin commented:
That's what I meant by BULK INSERT.

Optimizing updates: make sure you are not updating an indexed column, especially a clustered index key. If you are, make sure your FILLFACTOR is correct or you will end up with a lot of page splitting. Also try to ensure your data stays the same size; putting '111' in a varchar and then updating it to '12312312312312738123912937213' might cause a page split.

It is often quicker to DELETE and INSERT rather than UPDATE. Although as you are doing continuous updates, this is probably not the case for you.
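
To illustrate the FILLFACTOR point, a sketch using SQL Server 7's DBCC DBREINDEX; a fill factor of 70 leaves 30% free space on each page so rows can grow in place without splitting (database, table, and index names are placeholders):

' Rebuild the clustered index leaving 30% free space on each page.
Connection.Execute "DBCC DBREINDEX ('MyDb.dbo.MyTable', 'PK_MyTable', 70)"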
wqw commented:
simonsabin: "It is often quicker to DELETE and INSERT rather than UPDATE."

Can you elaborate, please? When does this happen and why?

simonsabin commented:
When performing an update, page splits often occur that cause the data originally on one page to be split across two pages; this has the knock-on impact of updating the indexes. Additionally, the updates have to scan the pages held by the table to identify the rows to update, which can be a slow process. Deleting all the data does just that: it goes through and releases the pages assigned to the table; inserting the data then reassigns the pages and fills them with the required values.
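
A minimal sketch of that DELETE-then-INSERT refresh, sent as a single Execute per angelIII's earlier advice; the staging table name is hypothetical:

Dim strSQL As String
strSQL = "BEGIN TRAN" & vbCrLf & _
         "DELETE FROM dbo.MyTable" & vbCrLf & _
         "INSERT INTO dbo.MyTable SELECT * FROM dbo.MyTable_Staging" & vbCrLf & _
         "COMMIT TRAN"
Connection.Execute strSQL, , adExecuteNoRecords   ' releases the pages, then refills them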