SQL Server Locking Issues

KMosher
We have several customers running our MS Access 2002 front end on SQL Server 2005 (the free edition) on a Windows 2003 server. From time to time certain tables/records get locks that prevent users from using them. We cannot find a pattern to this, although it is always certain tables. When I look at the activity log there are no active locks and no blocks. If we down SQL Server and bring it back up, everything works fine.

It appears to be worse when there are a lot of users, but not necessarily more users in the tables that have a problem.

Our customers running exactly the same code on an Access .mdb backend do not have this issue. When looking at Activity Monitor there is never any blocking shown, and almost never any active locks. There are usually 50 or 60 sleeping processes awaiting a command. (This is with probably about 10-20 users logged in.)

One thing that is very strange that I do not understand. There will be activity for Host computers that are not even logged in to the network and haven't been for several hours. I know for a fact that these computers were shut down cleanly. Why would SQL maintain processes for computers that are not even on the network, much less logged in to SQL server?

I know this message may need further explanation. I am by no means a SQL Server expert, but I am trying to learn. Let me know if you need more data.

Thanks for your time
see this link for a good info resource
http://www.experts-exchange.com/Microsoft/Development/MS_Access/Q_24353007.html

Author

Commented:
I do not see how any of these apply. In the first place, I am doing all of these things. We do NO pessimistic locking, and ALL transactions get all user input before beginning; if there are errors, they are displayed after rollback.

As I mentioned, even though users are unable to write to a table, the SQL activity log shows NO locks and NO blocks. If left alone, this inability to write to certain tables will generally clear itself in a half hour or so, or we can down the server and bring it back up to solve the problem.

Commented:
What makes you think this is a lock/block issue?  What error are you getting?  How large is this database?

Author

Commented:
What else would it be? Suddenly users are simply unable to create new records or edit old ones, but only in certain tables. Everyone else is working away just fine.

I don't think it is throwing errors. The users just tell me it hangs. I am not there.

About 1 gig in 250 different tables. Not that big. Maybe 30 users max. Only 4 or 5 users probably accessing the table(s) that seem to have the issues.

I was hoping maybe to find out there were bugs in SQL that would cause this, or perhaps certain SQL settings to be adjusted. Or perhaps it is an issue with the free version of SQL. As I said, when you look at the activity log there are no locks evident, but the users are simply unable to edit certain records or write new ones. Down the server, bring it back up, and everything is fine again, for a while. Also, if you just leave it be for an hour or so the problem will clear itself.

I am not on site so I am unable to sit there and watch the problem unfold.

Commented:
I've never used Access as a front end, but I would suspect a connection problem before a blocking problem. I mean, it's either a blocking problem or not - like you said, you're not seeing blocked transactions in the Activity Monitor. So if one user hangs but another user still works, can you have a third user try to do the same thing the hanging user is doing? Have you tried killing the connection related to the user? How about running Profiler so you can try to see the last statement executed on the bad connection?

Author

Commented:
They don't have Profiler. It doesn't come with the free version, I think, or if it does it is not installed. And the issue is when a table is blocked, it is blocked for all users at any computer. The issue IS with particular table(s), and those tables are accessed PRIMARILY, but not exclusively, by the users who are having the most problems.
Top Expert 2012

Commented:
>>What else would it be? Suddenly users are simply unable to create new records or edit old ones, but only in certain tables. Everyone else is working away just fine.<<
It could be a whole host of problems.  For example, it could be a deadlock situation, or perhaps the database has auto-grow enabled and chose to grow at that time, or perhaps ...

>>I don't think It is throwing errors. The users just tell me it hangs. I am not there.<<
Which indicates that it is probably a poorly written app, for if it was a lock situation (or anything else for that matter), then it should be displaying and/or logging the error.  It may have evil constructs like On Error Resume Next, which basically states: if there is an error, ignore it. Or perhaps it has (God forbid) the ADO CommandTimeout set to 0, which is the equivalent of an infinite wait.  I could go on, but you get the point.

What you have to discover is the following:  Was the software designed and written for MS SQL Server, or was it just "upgraded" from MS Access?  If it is the latter, then you will be continually plagued with these types of problems until it is re-written to fully take advantage of MS SQL Server.

>>I was hoping maybe to find out there were bugs in SQL<<
Unfortunately, with SQL Server Express you get no support from MS.  Which is always one of the risks of using "free" software.

>>Or perhaps it is an issue with the free versin of SQL<<
Unlike previous versions, there are no query governors; the only restrictions are that the total size of the database cannot exceed 4GB and it cannot use more than 1GB of memory.

>>Also, if you just leave it be for an hour or so the problem will clear itself.<<
That would fit the database-growth scenario.  Incidentally, I am not advocating that this is a good situation:  The database should always be sized so that it does not have to grow dynamically.

>>I am not on site so I am unable to sit there and watch the problem unfold.<<
You should not have to do that.  The software should contain good logging procedures.

Sorry for the bad news.
Top Expert 2012

Commented:
>>Our customers running exactly the same code on an Accesss .mdb backend do not have this issue. <<
I think this answers my question.  You may be best off migrating all MS SQL Server apps back to MS Access until you are prepared to re-design and re-write your app for MS SQL Server.

>>Why would SQL maintain processes for computers that are not even on the network, much less logged in to SQL server?<<
Perhaps you failed to close the ADO connection.

Author

Commented:
I am thinking not. This app has been running fine for over a year in SQL. Other clients running the same code are not having these issues. I just checked, and the DB is 1.95 Gigs with only 4.5 MB available so your dynamic resizing issue is looking like a real possibility. None of our other SQL customers have a DB this large and they are not seeing issues so it may also have to do with the 1gig mem limitation as well.

I also have another customer running a different Access app that was never written for SQL. He has a very large DB but has always run the paid version of SQL with lots of RAM.

I am thinking perhaps it is time to tell the client he needs to upgrade the SQL. But we will try sizing the DB first and see if that solves the problem. What do you think?
Top Expert 2012

Commented:
>>I just checked, and the DB is 1.95 Gigs with only 4.5 MB available so your dynamic resizing issue is looking like a real possibility. <<
Look at the SQL Server Error Log, it may show up there.  I usually look at the reports, but I am not sure if that is available in the Express Editions (Right click on the database, Reports, Standard Reports, Disk Usage).

Also check to see if it is running in the Simple or Full Recovery Model.  If Full, make sure you are running Transaction Log backups, or the size of the Transaction Log could far exceed the size of the actual database until you run out of space.  That could also account for the "freezing" every time the Transaction Log grows.

If the database is set to 1.95GB increase it to 3GB, if the Transaction Log is 500MB increase it to 1GB.
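As a sketch of that pre-sizing step (the database and logical file names here are placeholders; check the real ones with sp_helpfile first):

```sql
-- Hypothetical names: replace MyAppDb / MyAppDb_Data / MyAppDb_Log with
-- the values reported by: EXEC sp_helpfile
ALTER DATABASE MyAppDb
MODIFY FILE (NAME = MyAppDb_Data, SIZE = 3GB);

ALTER DATABASE MyAppDb
MODIFY FILE (NAME = MyAppDb_Log, SIZE = 1GB);
```

Growing the files once, off-hours, avoids the mid-day pauses that auto-grow can cause.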

The size per se is not important - we have databases that exceed 100GB. What is important is how well they are designed.

In short, MS SQL Server is not MS Access.  It is also not as bad as Oracle, but like Oracle it requires a considerable amount of maintenance and fine tuning, which usually involves a full or part-time DBA.  Anything else and you are going to be frustrated and disappointed in performance.

Commented:
Here are a couple of options for a free profiler:
http://www.mssqltips.com/tip.asp?tip=1459

SQL Express only supports the Simple recovery model, so I don't think there are any transaction log issues here unless there are some huge bulk operations going on.

What kind of access integration are we talking here?  Custom code/ado?  External table references?  How is the connection established?  ODBC?  If so what driver are you using?  Try the "Microsoft SQL Server 2008 Native Client" driver (download: http://www.microsoft.com/DownLoads/details.aspx?familyid=C6C3E9EF-BA29-4A43-8D69-A2BED18FE73C&displaylang=en)
Top Expert 2012

Commented:
>>SQL express only supports the simple recovery model so I don't think there's any transaction log issues here unless theres some huge bulk operations going on.<<
I wish it did; we would not have so many questions here such as "Why is my Transaction Log 26GB?".  Unfortunately, it is even worse than that:  By default it is set at installation to the Full Recovery Model.
Top Expert 2012

Commented:
<offtopic>
Well it looks like I am not the first one to suggest to Microsoft that SQL Server Express needs to default to Simple and not to Full Recovery Model.  :(

See here for the discussion:
https://connect.microsoft.com/SQLServer/feedback/details/472969/change-default-recovery-model

I think that defaulting all editions to Simple is a tad extreme, but I can be convinced. :)
</offtopic>

Commented:
I stand corrected, ac. I must just be in such a habit of switching to Simple for smaller databases that at some point I figured it was an Express thing.

<offtopic>
I'll join your offtopic ;).  I agree with defaulting or removing it from Express editions, as it makes sense to help out those managing a smaller database like KMosher here.  But I'm starting to think that some things need to be difficult in order to remind people this is an enterprise app that needs dedicated and expert attention.  Oracle is a total pain to administer, at least for somebody that doesn't do it for a living.  Consequently, I'm starting to believe there is a higher percentage of Oracle vs SQL Server DBs that are running at their full potential.  Not because Oracle is better, but it seems that it is much more likely to get a dedicated DBA (or 2 or 3 or 4 of them ;) ) where SQL Server more often ends up being one more burden on your poor network admin.

Don't worry KMosher, we're still helping you, but I enjoyed ac's topic so I took a moment to rant ;)
</offtopic>


Top Expert 2012

Commented:
<offtopic>
Much as I dislike Larry Ellison and his corp, I have to admit you are right.  Just take a look at the type of questions in the Oracle Zone; they are radically different from the MS SQL Server Zones.  Microsoft made the change to become more "user friendly" around 12 years ago when they introduced SQL Server 7.  Prior to that, with version 6.x, all you had was a Query Analyzer-like interface.  Nearly everything was command driven and required a vast knowledge of the different options available.

Unfortunately, many MS Access developers see MS SQL Server as the big brother to MS Access, and MS compounds the problem by producing a tool called the Upsizing Wizard or some such idiocy.  Which makes everyone think that MS SQL Server is really MS Access "Enterprise" Edition.

I have no problem in helping people get up to speed using MS SQL Server, I do that every day here.  It is just that too often I see questions like:  "I just upgraded to MS SQL Server from MS Access and it is so slow" or "Why can't I edit a VIEW that has a JOIN, like I did in MS Access" or "I need to get the LAST/FIRST record in a table."  That is when you know that there has been a very poor effort on the part of MS to educate users on using MS SQL Server.

On the specific subject of the Transaction Log, not a day goes by here on this site that someone has to ask why their Transaction Log has grown so big and a few have actually deleted it in the very misguided view that it will just be rebuilt.  This is unfortunate, because you are then viewed as critical when you point out the error of their ways.  
</offtopic>

Author

Commented:
Guys, I want you to know I very much appreciate your discussion on the topic. This is an ongoing issue with my partner and I. The reason we chose Access is the ease of development and the ease with which we can transition our customers to SQL as they grow, and I want you to know we are glad we did it that way. We have over 350 tables, 700 forms, 500 reports, 2000+ queries, all written in Access. To do them in any other language would have probably tripled our development time. We do use the Upsizing Wizard to move them to SQL when they are ready, and it works beautifully once we have "cleaned" the data, getting typically 99% of the schema upsized intact. Normally only a few relationships fail, and I now know which ones are going to fail, so I have scripts to insert them. (The Upsizing Wizard doesn't like multi-column relationships.)

I have modified our code so it runs seamlessly with either an Access or SQL back end. And yes, there are some things that ran VERY slowly when converted. A number of queries have had to be modified for SQL and in a (very) few cases there is code that does things differently depending on the back end.

But by and large I will say that Microsoft has done an excellent job of making this all work both ways. You are right of course about user-friendliness and performance, but I have to say that the time we have saved and the money we have saved our customers is enormous doing it the way we have. And our customers are generally small enough that performance is not an issue. We upsize when reliability starts to break down, not performance.

On a rock solid network with a good server, Access is good to about 15-20 users. It is fast, simple for users to manage and understand. When they start having issues we can move them to SQL in as little as 4 or 5 hours, depending on the quality of their data, and to the end user there is no difference. 95% of what they do runs the same or faster after the upsize.

The difficulty is changing our customers' mindset. They are used to simply backing up the Access database files and going home. They are not of a mindset that the DB needs management, that someone has to keep after things like old locks when a network connection is dropped.

Which, by the way, I think is where my issue lies. I am seeing ASYNC_NETWORK_IO as the wait type for the users with issues, and when I tried to get the user to update his code from the server he evidently had lost his network connection, so perhaps it is not an Access or SQL issue at all, just a bum wire.

Author

Commented:
While we are at it, perhaps someone can help us resolve another mystery which needs to be resolved to solve this issue.
When looking at the SQL activity log, a number of the pending processes show a HOST that is a computer name that is not on the network and hasn't been for several days. It IS the name of the computer that the Access front end (.mde) was compiled on. I can find no reference in any of the .mde's properties to that computer name. Some clients log in to SQL using the computer's name, others don't.

Commented:
I don't think either of us were picking on you, KM; at some point there was a tangent and we decided to discuss it for a while.  You admitted from the beginning that you are learning, and we can appreciate that.  I think awareness needs to be raised as far as managing SQL Server, but I also don't want the SQL experts here relying too heavily on the "RTFM" response that seems to be commonplace among Oracle experts.  That said, "changing your customers' mindset" falls to you, the implementor.  Even if you're still learning, make sure you remember to explain to your customers that by switching to SQL Server they're entering the realm of the enterprise, and with that comes responsibility.

Ok, so back to your problem :).  I still think this is a connection issue, and your latest report is now also pointing in that direction.  Have you tried upgrading the SQL client (I pasted the link in an earlier post)?  If you're using an older, less robust client driver, that could certainly explain this issue.
Top Expert 2012

Commented:
>>I still think this is a connection issue and your latest report is now also pointing in that direction.  <<
To me it is an application problem.  No application should freeze that way.  This indicates to me an unnecessary use of On Error Resume Next and/or setting the CommandTimeout to 0, or something of that nature.  If there was a connection problem, then it is the responsibility of the front-end app to respond to that.  If (as I believe) the database or Transaction Log is auto-growing, then again the app should gracefully fail.  It should never ever freeze. Period.

You can blame the database, the MS or third party DLL's, the computer itself, you name it, but in the end you are responsible to act upon all those possibilities.  You are the face to the user.  If a web app that I have written is slow in China and I do not recognize that possibility, guess who the user is going to blame even if it turns out to be a bad local router?

Let's assume for a minute that it is a bad connection.  The app should recognize that and should report it.  At the very least it should log it, so that the developer can review it at a later date and see that user XYZ had a problem.  Out of all the times this happened, how many times do you think it was reported? 20%? And I suspect I am being generous.  How long do you think this has been happening before it was brought to your attention?  Now you see the extent of the problem?

MS Access is a good tool; unfortunately and paradoxically it has received a bad rap because it is easy to use and many who do not have a strict professional background have jumped on the bandwagon.  This has caused all manner of problems for it in many non-trivial apps.  To make matters worse, MS is pulling the plug on it (its lack of a 64-bit JET driver has not helped) and many of the better developers have flocked to the newer and more robust .NET languages.  For one that has been in this industry for some years the outcome is quite predictable and should be similar to what happened with VB6, MS FoxPro, dBase and a whole host of other development tools before that.

But I digress.
Top Expert 2012

Commented:
So back to the problem at hand.  Since you cannot change the app, go for the easy problems first.  Have you checked:
1. The SQL Server Error Log.
2. The Windows Event log.
3. The size of the database and Transaction Log.
4. What storage for the database are you using?  Are they fast disks? What RAID are you using?
5. Are the database file(s) on the same drive as the Transaction Log file(s)?
6. Is TempDB on a separate drive from the database and Transaction Log files?
7. Is the database running in the Simple or Full Recovery Model?
8. Are you running frequent backups? If in Full Recovery Model are you running frequent Transaction Log backups?  When do you run them?
9. How much memory does the server have?

These are some of the questions that a DBA would ask.
Commented:
Could be an app issue, but connectivity could also be established with external table links, so he may not even have any ADO/ADO.NET code in the app.  Otherwise I think we'd be seeing more info here in terms of exceptions/errors, unless we've got some bad usage of catch(System.Exception){//do nothing}.

But I agree with your list - there are a lot of questions to be answered. KMosher, please also include some insight into the connectivity methods to the DB and the type of code you're writing.
Top Expert 2012
Commented:
>>Could be an app issue, but connectivity could also be established with external table links <<
I don't think I was very clear on this.  It is always an "app issue", even if it turns out to be something like network connectivity, database corruption, or a bird's nest shorting out the power lines.  It is the app's responsibility to account for all of those variables by reporting them and gracefully failing.

>>ac, I thought you'd get a kick out of this:<<
I did.  I will be curious to see the response it gets.  Perhaps none as there are too many "gamers" on this site.

Commented:
Yes, by app, I assumed "custom app", meaning relating to any custom code KM has written.  There could be very little such code and he's relying on the designer tools in Access.  If that's the case, then yes, the SQL Server is not the issue.  But if this is an ODBC driver issue, then I still think KM is in the right place.
Top Expert 2012

Commented:
Now whatever happened to the author? Did we scare them off?

Author

Commented:
Nooo. Actually I have been writing a response all day; I just keep getting interrupted with real work! You guys have been great! I will have a post up soon.

Author

Commented:
The log is fine.
Backups are done regularly. Actually, when I checked the last time we had a problem, the log size was about 4.5 megs and the available space was 500 megs.
There is LOTS of code - roughly 250,000 lines. This is not an app thrown together using wizards. It has been in use and under ongoing development since 1998.
I do need to look at the logs.
I will cop to using On Error Resume Next, but very rarely and usually for a reason, like deleting an object that may or may not be there, so I don't want to throw an error if it is already gone.

You are right about my app not handling a dropped connection well. There is so much going on between bound forms and code referencing bound fields - a lot of this was written a long time ago for 10-meg nets and slow servers, and written for speed; I hate getting the same data more than once. I will admit we do not handle things well when the connection is lost. With Access this isn't such an issue, since if you drop the connection you are probably going to corrupt the DB, so you had BETTER have a reliable net. We have had our share of those issues as well, with customers blaming our app for corruption of Access DBs. It ALWAYS turned out to be a bad net/net card/computer/server. With SQL, of course, it is different. The backend stays reliable; only the user sees an issue. When you are corrupting data and bringing everyone to a halt (Access), there is a real motivation to bring someone in and find the problem. When it is one user (SQL), it doesn't get the visibility and attention.

What we are going to try is have the user that seems to cause all the problems RD in to the server and work from there and see if the problem goes away.

Of course I am talking to the client's network consultant and he is blaming us. I HATE pointing fingers. You know, I really enjoyed this more back in the DOS days. When it was simpler, my partner and I did it all. We built the servers (Novell running Btrieve), INCLUDING the hardware. We hired the cable installers; we wrote the software. If a customer had a problem, I walked in the door and stayed until it was solved. We had complete responsibility, and it was simple enough to do it that way.

Of course the nice thing back then was it was all new enough and custom enough they also paid us by the hour until it was solved.

IF anybody can tell me how to determine from within Access that we have lost the network connection, I would love to know.

Is there a windows XP log that will tell me this stuff?

Where is the SQL error log that will tell me this stuff?
Top Expert 2012

Commented:
>>Backups done regularly, actually when I checked the last time we had a problem the log size was about 4.5 megs and the available was 500 megs.<<
But are you using Full or Simple Recovery Model?  The following T-SQL command will reveal that:
SELECT DATABASEPROPERTYEX('YourdatabaseName', 'Recovery')

The default is Full and allows point-in-time recovery.  In other words you can restore to a minute before you dropped your table or inadvertently did a DELETE without a WHERE clause.

If it is using the Full Recovery Model, then you need to be backing up your Transaction Log as well as your database.
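A sketch of both options (the database name and backup path are placeholders):

```sql
-- Check the current recovery model:
SELECT DATABASEPROPERTYEX('MyAppDb', 'Recovery');

-- If point-in-time recovery is not needed, switch to Simple so the
-- log truncates on checkpoint and stops growing:
ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;

-- Otherwise stay in Full and back up the log frequently:
BACKUP LOG MyAppDb TO DISK = 'D:\Backups\MyAppDb_log.bak';
```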

But more than anything else you need to find out if there is adequate space allocated for the database.  This is NOT disk space.  You pre-allocate space in the database so that your database does not have to auto-grow.

>>There is LOTS of code. Roughly 250,000 lines. <<
There always is.  That does not mean it is efficiently taking advantage of SQL Server.

>>I will cop to using On Error Resume Next but very rarely and usually for a reason like deleting an object <<
And that is fine; since you cannot use Try/Catch error handling, that is your only recourse.  What you need to make sure of is that you are resetting it (On Error GoTo 0, if I recall correctly) immediately after executing the delete and checking for any error condition (I trust you are doing this).
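A minimal VBA sketch of that pattern (the table name and the specific "object not found" error number 7874 are illustrative assumptions):

```vb
' Suppress errors only around the one statement that may legitimately
' fail, then inspect Err and restore normal error handling.
On Error Resume Next
DoCmd.DeleteObject acTable, "tmpImport"   ' may already be gone; that's fine
If Err.Number <> 0 And Err.Number <> 7874 Then
    ' Anything other than "object not found" is a real problem
    MsgBox "Unexpected error deleting tmpImport: " & Err.Description
End If
Err.Clear
On Error GoTo 0                           ' restore normal error handling
```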

>>You are right about my app not handling a dropped connection well.<<
And I honestly do not believe that is the problem, or MS Access would have reported it instead of freezing.  But if you were writing code instead of using bound forms, you would check in much the same way you are doing when you delete an object - in other words, you wrap it in On Error Resume Next ... On Error GoTo 0.

>>Of course I am talking to the clients network consultant and he is blaming us.<<
Of course he is.  When you build a front-end app you are guilty until proven innocent.  Unfortunately, you have not made it easy to do that.

>>You know I really enjoyed this more back in the DOS days. When it was simpler my partner and I did it all. We built the servers (Novell running BTrieve)<<
We have all done that.  I can still remember the Get Equal command (5 if I recall correctly) and the Btrieve Error code that I feared the most (2 if I am not mistaken)

>>IF anybody can tell me how to determine from within access that we have lost the network connection I would love to know.<<
This is trivial when using code; unfortunately you do not appear to be doing that, so I do not believe it is possible.

>>Is there a windows XP log that will tell me this stuff?<<
Check out the Windows Event log.

>>Where is the SQL error log that will tell me this stuff?<<
You can see this using SSMS (SQL Server Management Studio) or by using a SQL Command.

Author

Commented:
>>IF anybody can tell me how to determine from within access that we have lost the network connection I would love to know
This is trivial when using code unfortunately you do not appear to be doing that so I do not believe it is possible.<<
TELL ME HOW! In code. I am not above adding a debug switch to our app that will check before attempting to write.

What difference does the recovery method make to this issue? We are not running out of disk space.  Backups are getting done. The DB grows VERY slowly - maybe 5 or 6 megs a month? I understand allocated space; that is not an issue.

I understand how to use On Error, and I am going through the offending code now looking for issues. But I doubt seriously that is where the problem is.

And by the way, the ODBC update retry interval is 1 second and the number of retries is 10. This is an Access setting. ALL record locking is no locking (optimistic), and all transactions are crafted to start/complete or fail without any user input. Error messages are displayed after rollback if they fail.

I have attached a copy of a trace file created with SQL Server Profiler. This is EXACTLY what happens when a user updates a record correctly. The hang we are seeing happens when this is executing, or after. As I mentioned (I think), generally what happens is the user's code will hang during an update, but when they restart, the data has been saved correctly. There will be a waiting process on the server with an ASYNC_NETWORK_IO reason for the wait. The process details are always empty.

NOTE I HAVE RENAMED THE EXTENSION. It would not upload with a .trc extension.

Author

Commented:
Can anyone tell me how to view the SQL Server logs in SQL Server Management Studio EXPRESS?
The logs are there but evidently the viewer is not.
Right-click - View server log is not available in Express.
Top Expert 2012

Commented:
>>TELL ME HOW! in code. <<
Try to establish a connection to SQL Server using ADO.  If it fails, you do not have a connection.  It is as simple as that.
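A minimal VBA sketch of that probe (the connection string values are placeholders; adjust server and database names to your environment):

```vb
' Returns True if SQL Server answers, False if not.
Function ServerReachable() As Boolean
    Dim cn As Object
    Set cn = CreateObject("ADODB.Connection")
    cn.ConnectionTimeout = 5          ' fail fast instead of hanging
    On Error Resume Next
    cn.Open "Provider=SQLOLEDB;Data Source=MYSERVER\SQLEXPRESS;" & _
            "Initial Catalog=MyAppDb;Integrated Security=SSPI;"
    ServerReachable = (Err.Number = 0)
    On Error GoTo 0
    If cn.State = 1 Then cn.Close     ' 1 = adStateOpen
    Set cn = Nothing
End Function
```

Called from a form's BeforeUpdate event, this would let the app report a dropped connection instead of freezing.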

>>What difference does the recovery method make to this issue?<<
It makes a difference in whether you have to make Transaction Log backups or not.  See what I stated previously:
"The default is Full and allows point-in-time recovery.  In other words you can restore to a minute before you dropped your table or inadvertently did a DELETE without a WHERE clause."

>>We are not running out of disk space.  <<
Again, read what I stated previously:
"But more than anything else you need to find out if there is adequate space allocated for the database.  This is NOT disk space. You pre-allocate space in the database so that your database does not have to auto-grow."  I don't know how else to describe it to you.
 

Commented:
In the 2008 version of SSMS, logs are under Object Explorer --> Management --> SQL Server Logs.

"I will cop to using On Error Resume Next but very rarely and usually for a reason like deleting an object " 
As AC has mentioned, this should be immediately disabled after just a line or two of code with On Error GoTo 0.  Placing On Error Resume Next at the top of a file and never turning it off was a death wish for any programmer working under me.

Please paste a sample of this code, specifically where you establish a connection to the SQL server, and maybe a bit of your DML after that.  I'm still completely in the dark here about the connection method.
I didn't see your trc file (or any file, for that matter)?
Top Expert 2012

Commented:
>>Can anyone tell me how to view the SQL Server logs in SQL Server Management Studio EXPRESS?<<
All the SQL Server logs are kept under the Management collection, under SQL Server Logs in SSMS.

In order to retrieve it programmatically, use this undocumented stored procedure:
exec xp_readerrorlog
Top Expert 2012

Commented:
>>Please paste a sample of this code, specifically where you establish a connection to the sql server and maybe a bit of your DML after that. <<
That is just the point: they are not using any code (let alone ADO), but rather relying on Access to do it.

Author

Commented:
acperkins:
Thanks for this exec xp_readerrorlog. I will use it.

RE: How to test the connection. That is correct; we use linked tables for almost everything. But on our bound forms there is a BeforeUpdate event. I could write code there to test the connection before allowing the update of the bound form data, and add a debug mode switch to the program so the test only runs when debug is turned on.
POLineitemEditTrace.txt

Author

Commented:
I have uploaded the file. See comment above.

Author

Commented:
RE: exec xp_readerrorlog
I was going to ask how to read old logs, but found the answer.
exec xp_enumerrorlogs
to get archives by number and date
then
exec xp_readerrorlog 4
4 being the log number for the date in question.
acperkins and b_levitt, I want to thank you both again for all your help, sticking with me on this.
Commented:
Well, read the customer's logs. There are a LOT of

'SQL Server has encountered 1 occurrence(s) of cachestore flush for the 'SQL Plans' cachestore (part of plan cache) due to some database maintenance or reconfigure operations.'

messages in conjunction with loading tempdb. This particular customer paid someone to write a lot of custom reports that access our data and batch them. I think they might be causing this.

I looked at another client's server; the only time they flush is prior to backup.

From what I read, 'cachestore flushes' are bad things. Any opinions?

When I say a lot, I am talking 500 in a day.
Top Expert 2012

Commented:
>>we use linked tables for almost everything<<
That is what I suspected and feared, and it also explains why you have connections always open long after you close the application.  Closing the application is not the same as explicitly closing the connections, so they are never closed.  A true client-server app makes a connection, executes a request and closes the connection.  This keeps the number of connections to a minimum even when using connection pooling, and in consequence less memory is consumed.
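The connect / execute / disconnect pattern described above can be sketched in VBA with ADO (requires a reference to Microsoft ActiveX Data Objects; the server, database, table and column names are placeholders, not from this thread):

```vba
' Sketch only: open a connection, run one statement, close immediately.
' In production code, use a parameterized ADODB.Command instead of
' string concatenation to avoid SQL injection.
Public Sub CloseOrder(ByVal lngOrderID As Long)
    Dim cn As ADODB.Connection
    Set cn = New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=MyServer;" & _
            "Initial Catalog=MyDatabase;Integrated Security=SSPI;"
    cn.Execute "UPDATE dbo.Orders SET Status = 'Closed' " & _
               "WHERE OrderID = " & lngOrderID, , adExecuteNoRecords
    cn.Close                 ' release the connection promptly
    Set cn = Nothing
End Sub
```

With connection pooling the repeated Open/Close is cheap, while linked tables can hold their connections for the life of the Access session.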

>>From what I read 'cachestore flushs' are bad things. Any opinions?<<
I don't know, I have never had/seen them.

>>due to some database maintenance or reconfigure operations.<<
But if I were to guess, then again this sounds like the auto-grow problem I have been harping on about all along.

>>acperkins and b_levitt, I want to thank you both again for all your help, sticking with me on this.<<
Unfortunately, unless I get answers to some of the questions I asked, I cannot help much more.  In addition to the previously asked questions, here are some more:
10.  (Not sure if I have asked this already) Is SQL Server installed as recommended on a standalone server or is it shared with some other app?
11.  Do you have the Auto Close enabled on the database?

Commented:
If these are linked tables and this is only happening from select machines I will again strongly suggest that the client drivers are updated.  If this is only happening to 4 or 5 users out of 30, I don't know why we're still concentrating so hard on the sql server instead of the connectivity or access itself.

Author

Commented:
acperkins and b_levitt, I want to thank you both for all your help. While we haven't solved my problem, I have learned a lot, and it has given me some directions to look in and taught me some new tools to work with. You both deserve points if they are important to you, so why don't the two of you tell me how I should divide them up?
Top Expert 2012

Commented:
A straight split is fine with me.
Top Expert 2012

Commented:
However if you ever do find the cause of the problem, please post it here, if nothing else to prove that b_levitt was wrong and I was right. :)

Author

Commented:
Will do, ac. We are getting closer. It may actually be bad code. The customer kept reporting it as random, but they have now pinned it down (supposedly) to a very specific set of circumstances. I have yet to check them out, but I will let you know. May be a few days; I tore some ligaments snowboarding and am about to enter a round of doctor appointments and MRIs.

Sigh...

b_levitt: is a straight split OK with you?

Commented:
Lol, yes of course that's fine.  Just update those clients, it'll take 5 minutes :P.  Then we can talk about who's "wrong" :).

Author

Commented:
Well, another piece of the puzzle, or at least something to be changed, thanks to you guys. We inherited a number of queries from an accounting package we bought back in 1998, most extensively modified. BUT, guess what? All of those original queries have their ODBC timeouts set to 0! I am in the process of changing them all.

Wouldn't have thought to look if you guys hadn't mentioned it.
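For anyone facing the same cleanup, a one-off VBA routine can walk every saved query and replace an ODBC timeout of 0 (wait forever) with a finite value. This is a sketch under assumptions: it uses DAO's QueryDef.ODBCTimeout property, and 60 seconds is an arbitrary choice, not a value from this thread.

```vba
' One-off maintenance sketch: give every saved query with an
' "infinite" ODBC timeout (0) a finite one instead.
Public Sub FixODBCTimeouts()
    Dim qdf As DAO.QueryDef
    For Each qdf In CurrentDb.QueryDefs
        If qdf.ODBCTimeout = 0 Then
            qdf.ODBCTimeout = 60   ' seconds; pick a value that suits your app
        End If
    Next qdf
End Sub
```

Run it once from the Immediate window; the changes are saved with each QueryDef.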

Author

Commented:
Moderator:

I don't know how we do this. I don't really want to close this as the issue isn't resolved, but these guys have been a great help and I wanted to award the points. They asked me to keep them posted, so I would like to keep them abreast as we work through the problem.

Author

Commented:
Well, guys, a major piece of the puzzle. I would call it a bug in Access 2002/2003. This is huge; I am amazed it hasn't come up before.

IF you are using SQL Server, not Access,
AND IF you have a QUERY (NOT a table, a QUERY) with either an ntext or nvarchar(MAX) field,
AND IF that query is bound to a COMBO BOX as the ROWSOURCE,
AND IF the table behind the query contains more records than specified in the option "Don't display lists where more than this number of records read" (default is 1000),
THEN as soon as you use the COMBO BOX IT LOCKS THE ENTIRE TABLE BEHIND THE QUERY!!!

The query on the combo box is held open on the server until the last record is requested. It shows up as a process BUT WITH NO LOCKS. Once the last record is read into the combo box, the process is closed and the table unlocked.

Note that Auto Expand is turned on. I did not test this with Auto Expand turned off.

This means if you have a table with 10,000 records spread alphanumerically, as soon as you start typing the entire table is locked. If the item I want starts with an a or a b, I will never get to the end of the recordset and the table will be locked indefinitely. If I type an x or a z, it jumps to the end of the table, all the records have been read, and the table lock is released.

THE OPEN QUERY IN THE PROCESSES LIST DOES NOT DISPLAY A LOCK. It just shows the aforementioned ASYNC_NETWORK_IO as the status. If you try to edit a record in the offending table, then it DOES show up as blocked and blocking.

The solution to the problem was to take the ntext (nvarchar(MAX)) field out of the combo box. Problem solved.

It only happens with a query, and the query itself was set to snapshot, no locks. It made no difference.
Top Expert 2012

Commented:
I am glad you found the problem and if it is any consolation you are not the only one who has come across that problem on this site:
http://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SQL-Server-2005/Q_23811810.html

I know I have ranted about this before, but here it goes again:
This type of problem is very typical when moving from MS Access to MS SQL Server.  

1. MS Access developers believe that their code should work exactly the same on MS SQL Server.  Unfortunately, this could not be further from the truth.  For example, if you follow good practices when developing client-server apps, you would never ever build an interface that includes a drop-down box with more than a hundred entries.  It simply does not make sense.  No user in their right mind is going to browse through hundreds, let alone thousands, of rows.

2. The "upgrade" wizard does a lousy job (and why it is one of the reasons I would never consider using it) at converting MS Access data types to MS SQL Server data types .  Partly because there is no direct conversion for many of them and partly because it has never been updated and still relies on deprecated data types such as (n)text.  It also has to deal with the lowest common denominator, in the sense that it converts character data types to unicode data types.  So if you have no use for double byte languages such as Japanese, Chinese, Hindi, etc, then guess what? You have just doubled the size of your data sets, at least for the character columns.  So rather than take the time to "hand" design the tables, people end up with bloated resultsets.

3. But probably my biggest beef: MS Access developers do not realize that binding to a real DBMS such as Oracle or MS SQL Server comes at a tremendous cost.  When retrieving data you should always use a forward-only, read-only cursor.  When modifying data, the SQL statements for INSERT, UPDATE and DELETE should be used.  And in all cases the connection should be opened immediately before and closed immediately after execution.

So yes, you may blame MS Access or MS SQL Server for the problem, but the truth of the matter is (and I trust you will not take it the wrong way) that you are responsible for the results of your app.

I should also caution you against thinking that you are out of the woods.  Be proactive and assume that it may only be a matter of time before something else shows up that you overlooked.

Good luck.

Author

Commented:
acperkins, I'm with ya, no offense taken.

I do not "blame" Access. I have found enough anomolies with Access over the years to know stuff like this is going to happen. The real reality is that the higer the level of the language that you use the more prone you are to things like this. That was why programming in Assembly was always so rewarding - no one to blame but the hardware guys. But that is why we chose Access. My vote was for VB back in 1997 but it was my partners decision and he like Access and I'm glad we did it. We never could have done what we have done in any other language in the time we have done it.

Programming is ALWAYS a tradeoff: features + ergonomics + reliability = time. So which of these three things would you rather put your time into? For us it is features and ergonomics. That is what our customers want. We can save them so much time with a new feature they need that the minor bumps in reliability, like this one, they will (all right, I won't say gladly, but begrudgingly) put up with. Time is money, and more than once I have looked a customer in the eye and said, "Are you willing to double your annual maintenance so we can hire another programmer to fix this stuff?" The answer is NO. So they live with it and understand the tradeoffs.

We are fortunate; we do business with small business owners, manufacturers who build stuff to order. They for the most part understand EXACTLY what the issues are. It is interesting, I hadn't actually thought about this before, but the real whiners are actually some of the EMPLOYEES of our larger customers. These are people who only think in terms of their own little world and want everything they do to run smoothly. As I mentioned, the owners make exactly the same kind of quality/cost decisions as we do every day. We can look them in the eye and say, "What do you want? Do you want this bug fixed, or do you want that new feature that is going to save you 35 hours a week?"

As to the data types: I am very aware of what happens, and if and when we ever get enough SQL customers I will clean up the data types, but the reality is our largest customer has fewer than forty users, less than four gigs of data, and is running just fine on SQL Express. Truth be told, I would LOVE to see how our stuff runs with more than one processor and 2 gigs of RAM, but no one has found a reason to spend the money. So, again, spend the time for a nominal speed and efficiency boost, or more features?

Plus, if I use the wizard I can upsize the customer (if his data is clean) in about four hours. They like that, 'cause it doesn't cost them much money. Yes, what I really want to do is clean up the schema, put it empty on the customer's server and then migrate his data. But learning to do that is probably a week that can be more profitably spent on other things.

Like what is probably my next !@#$ question for the experts.

How to properly install and update our application on Win 7 with %$#@& !#%$ &&!#@$ UAC. The #$*& redirection stuff is killing us. Our front end is an Access app, which for us is essentially code, but Microsoft considers it program data.

But that's another story....
Thanks again
Kim

Author

Commented:
acperkins, I just looked at the link you referenced, and it doesn't cover my issue at all, nor does anyone at that link seem to have a real understanding of what is going on.

In the first place, there is no mention of the locking issue, which I am sure is because he was not having that problem, as neither of the referenced fields was likely to be a memo/ntext/nvarchar(MAX). It is the locking issue I care about.

As I mentioned in my previous post, the resources used by the number of users we have are evidently so minuscule I really don't care if each user is holding two or three processes open indefinitely.

The reality is if any of our customers ever gets big enough for this to be an issue then they can afford to pay us to solve it.

Secondly, I believe it is simply the way Access deals with things (in this case badly). The default number of records that a combo box will bring in is 1000, and evidently when it hits 1000 Access just stops accepting data? I don't really understand how Access interfaces with SQL on that, but it is easy to verify what is going on. Try this simple test: bind an Access combo box rowsource to a table with 10 or 20 thousand records. Now drop down the list AND PULL THE SLIDER TO THE BOTTOM. When you let go, you will notice it immediately jumps back up; the slider does not stay at the bottom. What happens is this: Access brings in the first 1000 records; when you pull the slider to the bottom it jumps to the last record, but immediately gets another 1000, so the slider is no longer at the bottom and it pops up a little ways. This continues until all records are pulled into memory.

Until the end of the recordset is reached, the process is held open with ASYNC_NETWORK_IO as the reason.
