
High Concurrent User Count - Best Practices?

I have a database that maintains about 3,500 - 5,000 active records, with basic maintenance (decompile/compact/repair and archive) run daily.  It has 30-40 concurrent users.  All users have to update records, though none delete or add them.  The users never overlap, so no one accesses a record that is in use by someone else, but we've reached the point where the back end (BE) gets corrupted a few times a day.

There is a main data table in the back end on the server, with front-end MDEs distributed to the agents.  There is also an import database that runs VBA code to update the data table hourly with new records from an IBM 3278 terminal interface via screen scraping.
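
For reference, the hourly import is essentially an append-plus-update pass along these lines (a simplified sketch - the table and field names here are placeholders, not our actual schema):

' Simplified sketch of the hourly import pass; all names are placeholders.
Public Sub RunHourlyImport()
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Append scraped rows that aren't in the data table yet.
    db.Execute "INSERT INTO tblRecords (CaseID, CaseData, ImportedOn) " & _
               "SELECT s.CaseID, s.CaseData, Now() FROM tblStaging AS s " & _
               "LEFT JOIN tblRecords AS r ON s.CaseID = r.CaseID " & _
               "WHERE r.CaseID IS NULL", dbFailOnError

    ' Stamp existing rows with the latest mainframe activity datetime.
    db.Execute "UPDATE tblRecords AS r INNER JOIN tblStaging AS s " & _
               "ON r.CaseID = s.CaseID SET r.LastSeenOn = s.LastSeenOn", _
               dbFailOnError

    Set db = Nothing
End Sub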

What I'm looking for is a list of best practices (other than "don't exceed 8 users"), tips, or tricks that will minimize corruption.
Asked by: sweekes283

1 Solution
 
Scott McDaniel (Microsoft Access MVP - EE MVE), Infotrakker Software, commented:
First: "high concurrency" and "Access" are two terms you don't typically hear together. Don't get me wrong - I'm a big advocate of Access and one of the first to defend it, but given your specifications and working conditions, it might be time to consider upgrading.

What type of connections are you using? Are these typical LAN connections, or do you have any remote users (i.e. WAN, VPN etc)?

Regardless: I'd say that with a 30-40 concurrent user count you're reaching the upper limits of Access and should consider moving to a more robust data store. SQL Server, MySQL, etc. would be in order, and in most cases you can simply link the tables and be done with it.

Check this link:
http://support.microsoft.com/kb/300216/
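
If a server engine does become possible, moving is usually just a matter of relinking. A rough sketch of DSN-less linking with DAO (the server, database and table names below are made up):

' Rough sketch: link a SQL Server table via a DSN-less ODBC connect string.
' Server, database and table names are examples only.
Public Sub LinkToSqlServer()
    Dim db As DAO.Database
    Dim td As DAO.TableDef

    Set db = CurrentDb
    Set td = db.CreateTableDef("tblRecords")
    td.Connect = "ODBC;Driver={SQL Server};Server=MYSERVER;" & _
                 "Database=MyDb;Trusted_Connection=Yes;"
    td.SourceTableName = "dbo.tblRecords"
    db.TableDefs.Append td
End Sub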
 
sweekes283 (Author) commented:
I'm well aware that we're beyond the sane limit of concurrent users in Access, which is why I'm only after tips for minimizing the impact.  A SQL server is not an option in this situation, else I would have already moved it to our server.
 
sweekes283 (Author) commented:
Oh - and to your question, all the users are on the LAN and connect through an MDE on their PC.

 
Scott McDaniel (Microsoft Access MVP - EE MVE), Infotrakker Software, commented:
Okay ...

Daily corruption would be a major source of concern to me. Decompiling should never be necessary for a data-only application (i.e. your back end). You might, however, try occasionally importing the data into a new, blank database.
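
Something along these lines can be scheduled, if it helps (a sketch - the paths and table name are examples only):

' Sketch: rebuild the BE by exporting the data table into a fresh MDB.
' Paths and table names are examples only.
Public Sub RebuildBackEnd()
    Dim newDb As DAO.Database

    ' Create the empty replacement file.
    Set newDb = DBEngine.CreateDatabase("\\server\share\BE_new.mdb", _
                                        dbLangGeneral)
    newDb.Close

    ' Push the table (local or linked in the current db) into the new file.
    CurrentDb.Execute "SELECT * INTO tblRecords " & _
                      "IN '\\server\share\BE_new.mdb' FROM tblRecords", _
                      dbFailOnError
End Sub

Bear in mind a make-table query like this won't carry indexes or relationships across, so you'd recreate those afterwards (or use DoCmd.TransferDatabase from inside the new file instead).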

Good luck ... here are links to a couple of highly regarded data recovery services; you'll need them sooner than you think if you're corrupting daily:

www.everythingaccess.com
www.vb123.com

And archiving a corrupt database is as bad as (if not worse than) not archiving at all. Have you tested some of your recent backups to ensure they're valid?
 
Arthur_Wood commented:
"A SQL server is not an option in this situation"  why not?  SQL Server Express Edition is FREE - you can download it directly from Microsoft, at absolutely no cost, whatsoever.
 
check out:
http://www.microsoft.com/express/sql/register/
 
SQL Server Express is the full SQL Server engine, limited to a maximum database size of 4 GB.
 
AW
 
sweekes283 (Author) commented:
The archive process moves records from the main database to a secondary Access database.  Both are backed up daily, with weekly tapes retained indefinitely.  Ultimately this is just a workflow tool; the actual data is retained on the mainframe we're scraping from.  Even if we lost all the data, we could simply wipe it and pull it from the mainframe again.
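
(For what it's worth, the move itself is just the usual append-then-delete pair, roughly like this - the path, table and field names here are placeholders:)

' Sketch of the archive move (append, then delete); names are placeholders.
Public Sub ArchiveClosedRecords()
    Dim db As DAO.Database
    Set db = CurrentDb

    db.Execute "INSERT INTO tblRecords IN '\\server\share\Archive.mdb' " & _
               "SELECT * FROM tblRecords WHERE Closed = True", dbFailOnError
    db.Execute "DELETE * FROM tblRecords WHERE Closed = True", dbFailOnError
End Sub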

And again - A SQL server is not an option in this situation, else I would have already moved it to our server.
 
Leigh Purvis (Database Developer) commented:
I have to admit - there's something strangely appealing about your situation, in an almost masochistic way: the absolute impossibility of moving to a server RDBMS, coupled with the requirement to support 30-40 users reliably with a Jet BE.
It would be fun, assuming you were given the leeway to get it wrong a few times (if folks are going to lay down conditions then they need to accept the consequences while the constrained perform their tests :-)
Fundamentally, it should be doable.  But it's almost impossible to remove the chance of corruption entirely.
All you can do is minimise the likelihood of it.
Of everything you described - the one thing that stood out to me was:
"an import database that runs VBA code to update the data table hourly with new records "
Does it only append records, or does it perform updates too, just as the users are doing?
30-40 concurrent users plus a mass update of rows really is pushing things.

Obviously - how sound is your structure?  To support more users, you need a correspondingly better and more efficient data structure.
Keep tables small; large numbers of records are fine (3,500 - 5,000 ain't a whole lot really).
Good and appropriate index use? (Lots of indexes will harm your update time and therefore concurrency.  Reading shouldn't be corrupting your database).
Are you implementing row level locking?  Table could yield better results ironically.
Have you considered padding your records to fill data pages to simulate row level locking?
Going unbound could help with that (Scott will be able to help you all you need in discussing that - if memory serves, he's a big fan of unbound Access apps).
Without seeing the application it's hard to offer definitive advice.
As I say - with it in hand, on the network, in use - it could be interesting to see.
Remotely it's hard to offer any nugget of inspiration, though, if that's what you're after.
Consider this similar situation of Steve's from a while back, which ran from http://www.experts-exchange.com/Microsoft/Development/MS_Access/Q_21755013.html through http://www.experts-exchange.com/Microsoft/Development/MS_Access/Q_22393835.html to http://www.experts-exchange.com/Microsoft/Development/MS_Access/Q_22433474.html
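
If you do want to experiment with forcing row-level locking, Jet 4 exposes it through the OLEDB provider.  A minimal sketch (the BE path is a placeholder; note that whichever connection opens the file first decides the locking mode for everyone):

' Sketch: open the BE with Jet 4.0 row-level locking via ADO.
' The path is a placeholder; 1 = row-level, 0 = page-level.
Public Sub OpenWithRowLocking()
    Dim cn As Object
    Set cn = CreateObject("ADODB.Connection")
    cn.Provider = "Microsoft.Jet.OLEDB.4.0"
    cn.Properties("Jet OLEDB:Database Locking Mode") = 1
    cn.Open "Data Source=\\server\share\BE.mdb"
    ' ... work with the connection here ...
    cn.Close
End Sub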
 
Leigh Purvis (Database Developer) commented:
lol for some reason I see I typed "Table" - but of course I meant "Page".
"Are you implementing row level locking?  Page could yield better results ironically"
I suppose if you wanted to lock tables instead you should get far less corruption ;-)
(Of course - with some less than happy users having to wait :-D)
 
sweekes283 (Author) commented:
I wouldn't consider it 'fun'...  It started out as a hacked-together three-day pilot and has exploded over the last three months.  Thankfully they accept that errors are going to occur; I'm just trying to minimize them.

The import adds records and updates a datetime in existing records.  The users close records (boolean, datetime and string fields) or enter follow-up info (datetime and string).  The import doesn't touch anything the users touch.

Would splitting the two tables be beneficial or just slow things down further?

I hadn't thought to look into indexes, and I have no clue what locking is set to (presumably the Access default).

As for "Have you considered padding your records to fill data pages to simulate row level locking?"  I'm afraid I have no idea what that means.

I'll dive into the indexes and go through those links, and I'll toss up a new question for the unbound if I decide to pursue that.  Thanks!
 
sweekes283 (Author) commented:
I've posted a quick follow-up question, if you make your way back to the thread - but thanks for the help!
 
Leigh Purvis (Database Developer) commented:
Well - one man's "fun" is another man's "hell" :-)
Locking is probably Page anyway.
You could try forcing Row, but it might well not help (row-level locking hasn't exactly been the great fix it was presumably intended to be).
Look for indexes - unnecessary amounts of them in particular.  (Any relationships in your database?  Duplicate indexes should really be ignored anyway - but Jet's decision-making process could still slow things down.)
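A quick way to take stock is to walk the Indexes collections with DAO - a sketch (run it in the BE; nothing here is specific to your schema):

' Sketch: list every index (and its fields) so duplicates stand out.
Public Sub ListIndexes()
    Dim db As DAO.Database, td As DAO.TableDef
    Dim idx As DAO.Index, fld As DAO.Field

    Set db = CurrentDb
    For Each td In db.TableDefs
        If Left$(td.Name, 4) <> "MSys" Then    ' skip system tables
            For Each idx In td.Indexes
                For Each fld In idx.Fields
                    Debug.Print td.Name, idx.Name, fld.Name
                Next fld
            Next idx
        End If
    Next td
End Sub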
Splitting the tables - do you mean into separate MDBs?  I don't see much value in that.  The concurrency problems aren't at the file level but at the page level.  (I imagine most users are accessing more than one table simultaneously anyway?)
That the mass import doesn't hit the same rows as the users doesn't mean the db isn't attempting to use the same pages.
New records get written to the end of the file (the last and new pages), and if your users' records are also on that page - or just on index pages - then there's your conflict right there.
As for padding - it's a pretty simple concept that requires a bit of thought and code to implement.
If your records are bigger than half of a data page, then Jet can only fit one record per page - page-level locking then effectively behaves like row-level locking.  It's just a question of padding out redundant fields in your table with data to accomplish this, based on the amount of data entered in the real fields.  (You're aiming at a 4K page, so over 2K is required....)
As I recall, Jim Dettman has offered examples in the past of code to calculate this for you.
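The gist would be something like this - very much a sketch: the Pad1...Pad8 Text(255) fields are made up, and the length arithmetic is rough (Jet 4 stores text as Unicode and may compress it, and Memo fields won't do here since their data typically lives off-page):

' Sketch: pad a record past half a 4K data page so only one fits per page.
' Pad1..Pad8 are hypothetical Text(255) fields; byte counting is approximate.
Public Sub PadRecord(rs As DAO.Recordset)
    Const TARGET_CHARS As Long = 2100      ' comfortably over half a page
    Dim used As Long, toPad As Long, chunk As Long, i As Integer
    Dim fld As DAO.Field

    ' Rough estimate of how much space the real data already takes.
    For Each fld In rs.Fields
        If Left$(fld.Name, 3) <> "Pad" And Not IsNull(fld.Value) Then
            used = used + Len(CStr(fld.Value))
        End If
    Next fld

    toPad = TARGET_CHARS - used
    If toPad <= 0 Then Exit Sub            ' already big enough

    rs.Edit
    i = 1
    Do While toPad > 0 And i <= 8
        chunk = toPad
        If chunk > 255 Then chunk = 255    ' Text fields max out at 255 chars
        rs.Fields("Pad" & i).Value = String$(chunk, "x")
        toPad = toPad - chunk
        i = i + 1
    Loop
    rs.Update
End Sub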
To be fair - it's a fairly extreme method.  But - well, extreme situations.
See how you go with unbound if you decide to go that way.  FWIW there's a pretty simple unbound demo in the examples page linked to in my profile.
