• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 319

Why is form fast for last user to open it, but slow for everyone else?

This is the continuation of another question that grew in scope:

[T]he following sequence occurred:

- User A opened a particular form and held the Page Down key to traverse 100+ records with fast performance.

- User B opened the same form, and did the same thing, with similarly fast performance; but when User A tried again (without closing and reopening the form), performance was painfully slow for them, while performance remained fast for User B.

- User A closed the form, reopened it, and got fast performance again; but now performance became painfully slow for User B.

This went on and on with additional users, as if the most-recent user could "steal" performance from the rest, by simply closing/reopening the form.

(Original question provides additional specifics.)

An expert suggested that record locking was responsible, and that many factors (disk, OS, drivers) could play into it.  I need to get *some* handle on this.  Can any experts offer insights for narrowing it down?
Asked by: billpars
1 Solution
 
Nick67Commented:
I guess the first questions to ask are:
Is this application split into a front-end and back-end?
The answer to that for any application run by more than one user simultaneously should be YES.
Do users share the front-end of the application?
While it is possible to do this, it is also possible to drive a car with one's feet -- possible but not advisable.
The answer should be NO, each user has a local hard-drive copy of the front-end for their exclusive use.

I suspect you have a single file on a server used by multiple people simultaneously.
Is that correct?
 
BitsqueezerCommented:
Hi,

Access uses lazy loading: it opens the first page very quickly, then loads the rest slowly in the background. That can lead to performance problems for other users working with the same table(s).

If you want to avoid that, simply use Me.Recordset.MoveLast in Form_Load, followed by Me.Recordset.MoveFirst. You can switch off screen painting in the meantime with Application.Echo False, restoring it with Application.Echo True when finished. This successfully breaks the lazy-loading behaviour, but the form also appears to open more slowly, because Access shows it only after the recordset is fully loaded. Unless you have millions of records, this is normally a delay of only a few seconds, and you avoid the lock problem.
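For example, the Form_Load trick might be sketched like this (a minimal sketch, assuming a bound form; Me.Recordset and Application.Echo are standard Access objects):

```vba
' In the bound form's code module: force the entire recordset to load
' up front, defeating Jet's lazy loading, with painting switched off.
Private Sub Form_Load()
    On Error GoTo CleanUp
    Application.Echo False            ' suppress screen repainting
    If Not Me.Recordset.EOF Then
        Me.Recordset.MoveLast         ' forces Access to fetch all records
        Me.Recordset.MoveFirst        ' return the user to the first record
    End If
CleanUp:
    Application.Echo True             ' always restore painting
End Sub
```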

The same is true for any control that loads a list itself. Here you simply read the ListCount property into a dummy variable to load the list immediately instead of waiting for lazy loading. (This has the added advantage that you can scroll all the way through the list, whereas with the standard behaviour the list will not scroll past what has already loaded in the background.)
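The ListCount trick is one line; for example (cboCustomer is a hypothetical combo box name):

```vba
' Reading ListCount forces Access to populate the control's list
' immediately instead of filling it lazily in the background.
Private Sub Form_Load()
    Dim lngDummy As Long
    lngDummy = Me.cboCustomer.ListCount
End Sub
```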

Other performance problems come, of course, from heavy use of Conditional Formatting, which can slow down scrolling through large lists, especially if you also use Access filters (I recommend a WHERE clause in the underlying RowSource instead of the slow and buggy Access filters).

Cheers,

Christian
 
billparsAuthor Commented:
Nick,

Thanks for your feedback.

Per the original question:
The environment is 64-bit Windows 7 hosting 32-bit Access 2010 and an unsplit, legacy 2002/2003-format MDB with ULS.

Rest assured that I am *painfully* aware of its inadvisable architecture, and have yelled from the rooftops since inheriting it.  IT is planning a split, with each user to have a local copy of the front-end by EOY.  Nonetheless, I am stuck supporting facts on the ground until then.  (Lucky me.)  The question remains: Is this a locking problem, and can anyone offer any insights for narrowing it down?

Bill
 
billparsAuthor Commented:
Christian,

Danke schön.

Access uses lazy loading: it opens the first page very quickly, then loads the rest slowly in the background. That can lead to performance problems for other users working with the same table(s).

Would this explain the most-recent user to open the form consistently "stealing" performance from the rest?  What mechanics of "lazy loading" would explain this?

Unless you have millions of records, this is normally a delay of only a few seconds, and you avoid the lock problem.

Roughly, the mainform has 1,000 records, and half a dozen subforms.  Some subforms display 3,000 records for each mainform record.  (The underlying table for one subform has 1.6 million records.)

The same is true for any control that loads a list itself.

Roughly, a dozen comboboxes, each with half a dozen (static) items.

Other performance problems come, of course, from heavy use of Conditional Formatting, which can slow down scrolling through large lists

No conditional formatting.

I recommend a WHERE clause in the underlying RowSource instead of the slow and buggy Access filters

No Access filters.

Do these circumstances suggest to you locking issues, caching issues, local resource issues, network issues, or something else?

Bill
 
Nick67Commented:
Bill
IT is planning a split, with each user to have a local copy of the front-end by EOY.
The split is a 10-minute job that you can do yourself, and distribution is as simple as
'please copy c:\temp\thenewfile.mdb and put it in a folder called c:\theapp'
'run it from a shortcut from now on.'

Access clearly has some soft limits
DoCmd.FindRecord SomeValue, , , , , acCurrent, True
will work great -- until it doesn't.
Somewhere between 1000 and 10000 records.
You may have hit one of these undocumented soft limits.
 
billparsAuthor Commented:
Nick,

You are correct that splitting a database takes only 10 minutes.

Splitting a database for an organization with a large, entrenched IT bureaucracy, however, takes significantly longer (read: "proper channels", "change orders", "open tickets", "technology steering committees", "approval processes", etc).  Please understand that I enjoy *far* less autonomy in my environment than you do in yours.

Splitting a database also takes longer if the original code base (and query objects) include *pervasive* domain-aggregate functions that must be replaced and tested with alternatives that do not kill performance in a linked-table environment.  There are 245 query definitions; 85 forms with 40,000 lines of code; 20 reports with 1,500 lines of code; and 40 standard/class modules with 22,000 lines of code.

Nonetheless, I appreciate your insight.  We are on the same page, but laboring under very different circumstances.

Bill

PS: I gather your prognosis differs from Jim's, and you do *not* view this as a locking issue?
 
BitsqueezerCommented:
Hi,

OK... maybe you should explain the environment a little more. If I understand you correctly, you have a single Access database file on a network for x users -- and with 1.6 million records...?

I had of course assumed the standard scenario, with the front-end on a local computer and the back-end on a remote one; my fault.

If that's the case, then in addition to lazy loading you have other performance problems: the back-end is not a database server but a passive file, so it does nothing to help performance. Everything must be done by the front-end, and because everyone is also sharing one remote file (front-end AND back-end together), there can be additional issues depending on your network, etc.
You also run a high risk of corrupting the database when more than one user works in the same file as a front-end, even though Access allows it. That is acceptable for a small group, say 5 people, but given the really high record count you should strongly consider moving to a database server and also splitting the front-end from the back-end. OK, 10 minutes is a little optimistic for that... :-)
But if you plan it carefully, you can move the data from the current database file to a server in parallel with normal operations and change the front-end from local tables to linked tables (which indeed does not need much time or many changes in code or forms). You can build this alongside the working file, test it without impact on the users, and pull over the latest data when you cut over to the new solution.

Cheers,

Christian
 
PatHartmanCommented:
I would recommend splitting the database immediately.  If performance becomes worse, then wait for resources to do some optimization but you may find that performance isn't much worse than it currently is.  It won't be better though.

Binding forms to tables, or to queries without selection criteria, is poor practice.  You might want to experiment with some of the heavier forms by adding filtering options to the form header and using them in the RecordSource query's WHERE clause.  You will be moving far less data around, which is good for everyone, and you will be on your way to an app that can be converted to SQL Server.
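For example (a sketch only; cboYear and the table and field names are invented), a filter control in the form header could rebuild the RecordSource like this:

```vba
' Requery the form from a header combo so only matching rows are fetched.
Private Sub cboYear_AfterUpdate()
    If Not IsNull(Me.cboYear) Then
        Me.RecordSource = "SELECT * FROM tblHistory " & _
                          "WHERE FiscalYear = " & Me.cboYear & " " & _
                          "ORDER BY EntryDate"
    End If
End Sub
```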
 
Nick67Commented:
Nonetheless, I appreciate your insight.  We are on the same page, but laboring under very different circumstances.
PS: I gather your prognosis differs from Jim's, and you do *not* view this as a locking issue?
*pervasive* domain-aggregate functions


DLookup is of the devil and must die.

We -- you and everyone posting -- know the end fix.  A proper split, back-end preferably on SQL Server, and a distributed front-end.  I encountered the same issues, and no fix short of that really took care of matters.  On the troubleshooting spectrum that runs from (easy, cheap, effective) out toward (hard, expensive, unlikely), that fix sits closer to the origin than anything else.  I didn't find anything else effective when I encountered the kinds of issues you now face.  I don't know if you will either, but you have no choice now but to try.

Jim has offered you a straw to grasp at.  I don't know if it will be anything other than that, but it is worth a shot.

There are 245 query definitions; 85 forms with 40,000 lines of code; 20 reports with 1,500 lines of code; and 40 standard/class modules with 22,000 lines of code.


112K lines of code
160 tables
498 queries
170 forms
158 reports
50 modules

I feel your pain.
But even in its current state, the domain aggregates should be hunted down and killed.  And they aren't that bad to replace.  Why MS hasn't made
DLookup("SomeField", "SomeTable", "SomeCriteria")
perform on par with
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("SELECT TOP 1 SomeField FROM SomeTable WHERE SomeValidWhereCondition", dbOpenDynaset, dbSeeChanges)
is a mystery.

Note the dbSeeChanges.
It changes nothing for you at present, but will be required for every DAO recordset statement when you split the file.  You can pull the pin on that now.

If you haven't already turfed DoCmd.FindRecord in favour of code like
Dim rst As DAO.Recordset
Dim strCriteria As String
strCriteria = "ThePK =" & Me.MyPKField
Set rst = Forms(stDocName).RecordsetClone
rst.MoveLast
rst.FindLast strCriteria
If rst.NoMatch Then
    MsgBox "No such record found.", vbInformation
Else
    Forms(stDocName).Bookmark = rst.Bookmark
End If
Set rst = Nothing


start now, as it performs an order of magnitude better than FindRecord once you pass a mysterious threshold of records involved, somewhere between 1K and 10K.

If you haven't hunted down all the instances of SELECT * and replaced them with the actual field names (wherever you need less than the whole table), start doing that too.

If you have any forms based on tables and not queries, get that done.

You can begin to look at combo boxes and list boxes.  These are heavyweight controls.  You can rig them so they have no RowSource until AFTER their first GotFocus event.

You can look at putting subforms that may or may not get used on tab controls, and not setting the subform control's SourceObject until the tab gets the focus.
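The two deferred-loading tips above might be sketched like this (control and object names are invented; the subform container property is SourceObject):

```vba
' Leave the combo's RowSource blank at design time; fill it on first focus.
Private Sub cboStatus_GotFocus()
    If Me.cboStatus.RowSource = "" Then
        Me.cboStatus.RowSource = "SELECT StatusID, StatusName FROM tblStatus"
    End If
End Sub

' Leave the subform control empty at design time; load it when its tab shows.
Private Sub tabDetails_Change()
    If Me.tabDetails.Value = 1 And Me.subHistory.SourceObject = "" Then
        Me.subHistory.SourceObject = "frmHistorySub"
    End If
End Sub
```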

All good things, and they may buy you some breathing room.
 
billparsAuthor Commented:
Christian,

A 64-bit Windows 7 "server" shares an unsplit, legacy 2002/2003-format MDB (with ULS) over a 1 GB LAN.  The filesize is ~400 MB.  Four users log on/off throughout the workday, from 64-bit Windows 7 systems running 32-bit Access 2010.  The database stores historical data.  Once per year, a user uploads current-year records (~20 MB).  80% of workflow is R/O.  Users review data in forms on screen, export to Excel, and print reports.  The other 20% of workflow is updating records, but never the same ones, and never at the same time, so write locks are not an issue.

What is your take on Jim's suggestion that read locks explain the behavior described at the top?  Is there a more-likely cause?

Bill
 
billparsAuthor Commented:
Pat,

Thanks for your insight.

I am a terrible communicator.  I tried to make a critical point upfront: Due to circumstances beyond my control, immediately splitting the database is *not* an option.  Obviously, I failed to convey that.  By EOY, I *will* be allowed to split the database.  In the meantime, I am trying to answer the question: Why is a given form fast for the last user to open it, but slow for everyone else?  (See description at top.)

This behavior seems (in my experience) quite unique.  One expert suggested read locks.  Can you concur, or do you have a different take?  What *specifically* might cause the behavior I described?  What might I modify to test that specific cause?  How would splitting the database mitigate read locks (or any other specific cause)?

Bill
 
billparsAuthor Commented:
Nick,

Thanks for the very practical tips to prepare the database for splitting.  I always designed Access databases as split from the start, so maintaining an unsplit one is terra incognita, at least for me.

Still, performance in this database generally is excellent.  Only under the singular circumstance I describe at the top does it wane.  Why would users "steal" performance from each other, and only on a given form, by simply closing/reopening it?

This *must* be telling us something, if only we could recognize it.  Record locking?  (If so, why would splitting the database change record locking?)  If not record locking, then what?

Bill
 
PatHartmanCommented:
Bill,
I know you are "constrained" but splitting the database involves pushing a button and waiting for it to happen.  You'll know immediately if it solves the problem.  It's not like you have a lot of users.  Then, if it solves the problem, push the issue.  If not, wait until year end.  Notice that I didn't say to make any other changes.  Just splitting could resolve the conflict you seem to be experiencing.
 
Nick67Commented:
This *must* be telling us something, if only we could recognize it.
<grin> It's telling us to split the database, but we know that already </grin>

Clearly, Access has to have some priorities in a multi-user situation, and we are seeing one where the last connection made is given order-of-magnitude priority over previous connections.  In an unsplit situation, I have no idea what mechanics Access uses under the hood to keep track of which user is requesting what operation and how to return the results to the various clients.
This is worth a careful read
https://support.office.com/en-za/article/Ways-to-share-an-Access-database-2c24eb08-bee1-453e-be8e-455f847c5c74
It may not solve your issue, but is definitely a bunch of things to make sure you have in place.
8.3 filenames at the root of a mapped drive, or at most one folder down, is not an intuitive requirement in 2015, but I have seen it matter greatly at times
 
billparsAuthor Commented:
Pat,

Thanks for the great suggestion.

Splitting the database for testing purposes (vs. in production) is an excellent idea -- and one that I actually tried.  Unfortunately, pervasive domain-aggregate functions (et al.) created even bigger problems, making it impossible to test the behavior I described at the top.

Bill
 
Nick67Commented:
Though I've never needed to do so, you could split it piecemeal in a testing environment.
After all, a split is:
Creating a new file
Exporting the tables with definitions and data
Renaming the old tables
Creating linked tables
Re-creating the relationships

You could kick the problem table into a back-end, create a linked table and test it.
And perhaps more tables need to go to the 'test back-end'
If it goes swimmingly, you might be permitted to put that into production
 
PatHartmanCommented:
OK, at least you tried.

We can't even attempt to emulate your problem because experts don't make monolithic applications and when they encounter one, they split it ASAP.  So, we're pretty much guessing as to what Access' behavior might be in such a huge app.

You could start by removing the DLookup()s, since those are the easiest.  You can almost always replace them by including the lookup table in your RecordSource query.  If you don't need updateable recordsets, the aggregates can be replaced by creating totals queries and joining to them.  The key is to use left joins to handle sparse relationships.
 
BitsqueezerCommented:
Hi,

Did you actually test the method I suggested above -- MoveLast/MoveFirst and/or ListCount?  As I said, the lazy-loading behaviour can lock records for longer than needed.  You could also try changing the recordset type in the form's properties to a non-updatable recordset if you only need to view the data, not change it.  "Load Defaults" should also be set to "No" in this case.  It can (but need not) also help to avoid the subform container's link fields, which use the Access filters, and instead use the main form's Form_Current event to set the subform's RecordSource with a WHERE clause on the ID of the current main-form record -- this way the subform should work a little faster, because the whole table does not need to be loaded just to filter it.
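A sketch of the Form_Current approach (tblDetail, MainID, and subDetail are hypothetical names):

```vba
' Filter the subform explicitly on each main-form record change,
' instead of relying on the Link Master/Child Fields.
Private Sub Form_Current()
    If Not IsNull(Me.MainID) Then
        Me.subDetail.Form.RecordSource = _
            "SELECT * FROM tblDetail WHERE MainID = " & Me.MainID
    End If
End Sub
```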

Cheers,

Christian
 
billparsAuthor Commented:
Christian,

MoveLast/First have no effect.   Changing the recordset to not updatable has no effect.  Removing subforms altogether has no effect.

Something else is going on here.  As Nick said: "[T]he last connection made is being given order-of-magnitude priority over previous connections."  This, somehow, is key.

Bill
 
Nick67Commented:
I've looked at the previous question, and none of those locking settings are different than my own.
But we are in uncharted territory
80% of workflow is R/O.  Users review data in forms on screen, export to Excel, and print reports.  The other 20% of workflow is updating records, but never the same ones, and never at the same time, so write locks are not an issue.


For fun as a test, what if all the form/subform objects on the problem form were set to allow no edits, no deletions, no additions?
Would the problem remain?
If it did, then that would pretty much rule out record-locking in my mind.
I might do some research and give MaxBuffers and Threads a look:
https://support.microsoft.com/en-us/kb/252444
https://msdn.microsoft.com/en-us/library/aa188211(office.10).aspx#odc_4009c15_topic3
 
billparsAuthor Commented:
Nick,

Thanks for your persistence.

The mainform controls are locked (and AllowAdditions, AllowDeletions, and AllowEdits are False on subforms) unless the user clicks a mainform button to allow changes to the current record.

To which Jim responded, "[T]he database system still needs to know someone is touching that record, so even though you're not doing anything, it still gets a read lock."

That seems at odds with your assertion, "[T]hat would pretty much rule out record-locking…"

Am I misunderstanding one (or both) of you?

Neither the database size nor the code has changed since last November, yet this behavior suddenly appeared last month.  Something fundamental recently changed.  It seems more plausible to me that the environment changed.  NIC settings?  Drivers?  Topology?  Switching?

Perhaps Jim is on the right track: "There are numerous layers here too; disk, server OS, network, and your station."  IT denies that anything changed, but that is patently false, since patches are constantly (and ever-increasingly) pushed by Microsoft and antivirus vendors.

I suppose MaxBuffers and other Jet settings are worth a look.

I feel like I am groping for a needle in a haystack.  In the dark.  Wearing oven mitts.
 
Nick67Commented:
Jim is senior in experience to me, but your settings match mine and I don't see why Access would request locks on something read-only.  But it might.  As for changes, you've not added/changed anything since November?  I ran into this as the number of records grew.
 
billparsAuthor Commented:
Nick,

Obviously, people continued to use the database since November (in the manner I already described), but nothing fundamental changed from the user POV.  Just as obviously, something fundamental *did* change from the Access POV.

My guess is the environment (e.g. patching, drivers, LAN configuration), which caused Access to behave differently.  Given the infinite combination of factors over several months, it probably is impossible to determine the exact environmental change(s); however, I had hoped to understand the Access reaction, and to "fine tune" it away.

None of the experts responding (so far, at least) seem to have seen this particular situation before (i.e. the last user to open a form being given order-of-magnitude priority over users who previously opened it).  Unless any other experts want to weigh in over the next day or so, I will award you points for persistence, and slog through MaxBuffers and other Jet settings offline.

Bill
 
Nick67Commented:
@Bill
seem to have seen this particular situation before
I've seen it.
Like almost every other self-taught Access guy, I started with a monolithic file that everyone shared.
You get your first corruption scare and you split to FE/BE.
But everyone shares the front end, and when you want to push out changes, you get on the phone, get everyone out of the app, update the front-end, and carry on.  And that goes on for a good long time, until...

You encounter the problem you now have.

And then you work out how you are going to distribute a front-end to each user, because that fixes it.

I've seen your problem.
The ultimate cure was splitting and distributing a front end to each user.
The other user can't 'steal' performance when the file only has one user.
I ran into the abysmal performance of DoCmd.FindRecord later, and tried moving the back-end to SQL Server to fix that.
Ultimately, we all have to abandon DoCmd.FindRecord because it just doesn't perform for any large recordset, and having gone to the trouble of moving the back-end to SQL Server I wasn't going back :)

If you never work with a shared front-end, you'll never encounter this problem -- which is why you aren't seeing a lot of information about it, because very few people have ever left a shared front-end in production long enough to encounter it.
it probably is impossible to determine the exact environmental change(s)
I ran into it with Access 2003 on SBS 2008 with XP clients at least six years back -- that's when my FE distribution scripts date from: 2009.  I think you just pass an undocumented soft threshold of some kind.

You could try exporting all your objects into a new file.  If it is accumulating cruft in the system tables, that may provide some temporary relief -- but really, your guess is as good as ours.

Nick67
 
PatHartmanCommented:
I'm not 100% convinced that this problem is caused by being unsplit.  I've experienced two episodes of "slowness" in the past few months and no one can explain them.  My app is split and always has been.

Back in February we came in one Monday and, if more than one user logged into the app, it slowed to a point where no one could do anything.  I reverted to a version from the previous week and nothing changed.  I tried everything I could think of and finally moved the BE to a different server.  The speed improved dramatically, to the point where it was better than the old server on its best days, but that only lasted a day.  By the next day, we were back to "normal" speed.

So I started planning in the background on converting to SQL Server.  The app had been built originally to run on SQL Server, but we never had a need to do the conversion.  This "episode" convinced me it would be necessary at some point this year.  Things moved along fine on the new server until last week, when the exact same problem occurred.  Everything slowed to a screeching halt.  I again reverted to a prior version with no impact, so the problem doesn't seem to be related to the application, although I can't say that Access itself isn't to blame.  In any case, no one in my company is capable of or interested in solving the slowness.  So, this weekend we are converting to SQL Server.
 
Nick67Commented:
it slowed to a point where no one could do anything.
That's a different monster, though.
@Bill's is pretty memorable.
The user calls you and says that the performance of a particular form is crap.
And you open it as the dev and it works fine.
And you keep getting those calls
So you go to the desk of the caller and investigate.
And you open and close the form and it works fine.
You go back to your own desk to discover that it is now performing like crap on your own machine.
And you have a WTF moment.

I'm not 100% convinced that this problem is caused by being unsplit.
I won't jump up and down about cause.
I am 100% certain that a distributed front-end with a SQL Server back-end won't have the problem.

The back-end needs an 8.3 filename on a mapped drive letter, no more than one folder down from the root, with preferably 8 characters or fewer in the folder and share names.

THAT I have encountered, even today still, and is one of the first things to try in the case of universal poor performance.
YMMV, of course.
 
PatHartmanCommented:
Except that 8.3 no longer works now that we have .accdb
 