salan_alani asked:
Very long process (DataAdapter.Update), how to avoid ContextSwitchDeadlock (without simply disabling the exception)? (Urgent!)
I have 16 commands like this:
OleDbDataAdapter.Update(DataTable);
and each one is trying to insert about 10,000 rows to a Microsoft Access 2000 table.
So I will certainly get the ContextSwitchDeadlock exception while debugging.
My question is: how can I make my application as optimized as possible so it avoids throwing this exception, without simply disabling it under Debug --> Exceptions --> Managed Debugging Assistants?
And by the way, will the Release build of the application throw the same exception while running?
Thanks in advance
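For illustration, here is a minimal sketch of the threading approach (the names `adapters`, `tables`, and the surrounding form are placeholders, not from the original code). The ContextSwitchDeadlock MDA fires when the debugger sees an STA thread blocked for about 60 seconds without pumping messages, so moving the long-running Update calls off the UI thread avoids that condition entirely:

```vb
' Sketch only: assumes a WinForms app and arrays holding the
' 16 adapters and their matching DataTables.
Private Sub StartUpdates()
    ' Run the long inserts on a worker thread so the UI thread
    ' stays free to pump messages.
    Dim worker As New System.Threading.Thread(AddressOf DoUpdates)
    worker.IsBackground = True
    worker.Start()
End Sub

Private Sub DoUpdates()
    ' All 16 Update calls happen off the UI thread.
    For i As Integer = 0 To 15
        adapters(i).Update(tables(i))
    Next
End Sub
```

The UI thread never blocks on the inserts, so the MDA has nothing to complain about, in both Debug and Release.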
ASKER
I know how to turn this feature off, and I already mentioned that in my post. What I want is to make my application as optimized as possible so it avoids throwing this exception. In other words, how do I meet all the rules that avoid triggering the deadlock detection? For example, how do I keep the window pumping messages, or how do I use threading in my application, where I have 16 commands like this:
OleDbDataAdapter.Update(DataTable);
and each one is trying to insert about 10,000 rows to a Microsoft Access 2000 table.
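One commonly suggested optimization for Jet/Access bulk inserts, sketched below (the connection string, table name, and `myTable` variable are invented for illustration), is to run the Update inside an explicit OleDbTransaction. Jet then commits once at the end instead of once per row, which can shorten the blocking period considerably:

```vb
Imports System.Data.OleDb

' Sketch only: placeholder connection string and table.
Dim conn As New OleDbConnection( _
    "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=mydb.mdb")
conn.Open()

Dim da As New OleDbDataAdapter("SELECT * FROM MyTable", conn)
Dim cb As New OleDbCommandBuilder(da)
' Materialize the insert command before starting the transaction,
' since the builder needs to query the schema.
da.InsertCommand = cb.GetInsertCommand()

Dim tx As OleDbTransaction = conn.BeginTransaction()
da.InsertCommand.Transaction = tx
Try
    da.Update(myTable)   ' myTable holds the ~10,000 new rows
    tx.Commit()
Catch
    tx.Rollback()
    Throw
Finally
    conn.Close()
End Try
```

The same pattern would be repeated for each of the 16 adapters.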
So, you're comfortable with the fact that you're not really dealing with an error... and are really just trying to optimize your code.
Aside from the fact that the process takes a lot of time, it's also probably consuming a lot of memory. So breaking up the Updates into more manageable chunks will solve the "so-called" error and also reduce the memory footprint.
One of the features that's new to the .NET Framework 2.0 is the ability to perform a Fill() incrementally. This allows you to process the updates in sizable chunks (rather than all rows at once). Here is a short (partial) code snippet to demonstrate the technique that I use:
' Since some tables can be quite large, we break them up
' into smaller chunks.
For i = 0 To row_count Step max_rows
    dts(0).Clear()
    da.Fill(i, max_rows, dts)
    ' Mark the rows as "Added" so Update() treats them as inserts.
    For Each dr In dts(0).Rows
        dr.SetAdded()
    Next
    da.Update(dts(0))
Next
If you need more speed than that, then I'd consider using the "classic" ADO drivers for Microsoft Access. They are connection-based, rather than the disconnected model of ADO.NET. In extreme cases with over half a million rows in a single table, I've used both techniques... the incremental fill and "classic" ADO.
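For reference, a rough sketch of what the "classic" ADO route could look like from .NET via COM interop (this assumes a project reference to the ADODB interop assembly; the table, connection string, and `dt` DataTable are placeholders):

```vb
' Sketch only: copy rows from an in-memory DataTable into an
' Access table through a classic ADO server-side recordset.
Dim conn As New ADODB.Connection()
conn.Open("Provider=Microsoft.Jet.OLEDB.4.0;Data Source=mydb.mdb")

Dim rs As New ADODB.Recordset()
rs.Open("MyTable", conn, _
        ADODB.CursorTypeEnum.adOpenKeyset, _
        ADODB.LockTypeEnum.adLockOptimistic, _
        ADODB.CommandTypeEnum.adCmdTableDirect)

For Each dr As DataRow In dt.Rows
    rs.AddNew()
    ' Assumes dt's column order matches the Access table's.
    For i As Integer = 0 To dt.Columns.Count - 1
        rs.Fields(i).Value = dr(i)
    Next
    rs.Update()
Next

rs.Close()
conn.Close()
```

Because the recordset stays connected, each row goes straight to the database without ADO.NET's change-tracking overhead.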
ASKER
Thanks graue, it's a great idea. But could you tell me approximately what max_rows value I should use for a normal-speed PC (Intel P4 1.7 GHz, 256 KB cache, 512 MB RAM, Windows 2000 Server, ...)? My tables each have approximately 30 columns. Or, if there is a special way to calculate this value, or if you could tell me the value you used in your own projects, that would give me an idea of how big or small it should be.
Regarding the classic ADO drivers for MS Access, I really don't have any idea how to use them. Do you have any article links on this subject, so I can learn it and see if it will help me increase the speed of my application?
The thing is that my application is taking almost all of the CPU and memory, and I am trying to optimize it as much as possible. So I am waiting for your feedback, please.
Thanks for your help.
ASKER CERTIFIED SOLUTION
ASKER
It was really great help, thank you very much
To turn off this rather annoying error, simply change a setting from the Debug menu. Click the Exceptions menu item to launch a dialog, drill into "Managed Debugging Assistants", and uncheck "ContextSwitchDeadlock".