Bernard Beauvoir asked:

What is the best way to increase the performance of an older Access 97 application?

Over the last ten years I've developed a rather comprehensive medical office application in Access 97. There is a front end mdb on each workstation and a back end mdb on a Windows 2003 Server. Over the last six months the performance of this application has become progressively slower. I believe this is due in part to a big increase in the number of users (25%) as well as a large increase in business (i.e. more patients, transactions, etc.). It should be noted that the application performs nicely as long as there are just a few users (8-10). The degraded performance always occurs when 12-18 users are accessing the application simultaneously. I haven't performed any actual tests, but I suspect that some of the queries are moving a lot of data across the network. This in turn is slowing down the application as well as other network related tasks such as opening documents on the server.

I think it might be helpful for you to know some specifics about the two main tables:

The transaction table contains 271,042 records. This table has 22 fields.
The Patient table contains 29,429 records. This table has 86 fields.


I really need to come up with a solution that will speed things up. I would prefer to keep the front end application on Access 97 and not rewrite it all. The client is amenable to a SQL Server solution. I'm wondering about the most expedient way to increase the performance:

Should I simply move all the back end tables to SQL Server and link to them via an ODBC connection in Access?
Should I use pass-through queries?
Should I create views on the SQL Server and access them from the front end?
Would simply upgrading to MS Access 2010 increase performance?

Thank you in advance!
Patrick Matthews:

microone,

I would suggest that staying on Access 97 is not really an option.  Microsoft stopped supporting that product years ago, and running a critical app on an unsupported platform is, in my opinion, begging for trouble :)

86 fields for the Patient table sounds like a lot; when I hear a number like that, I start thinking that revisiting the data model might make sense.

Long term, I would recommend that you get this into a SQL Server back end, make all of your forms unbound, and rely on SQL Server views and stored procedures (or at the very least, pass-through queries) to do the heavy lifting. That is easier said than done, of course, and will thus probably require substantial redevelopment and testing.
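
For illustration, here is a minimal DAO sketch of a pass-through query; the DSN, table, and field names are hypothetical, not from your app:

    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef

    Set db = CurrentDb
    ' Pass-through query: SQL Server parses and executes the SQL,
    ' and Jet receives only the result rows
    Set qdf = db.CreateQueryDef("qryptPatientLookup")
    qdf.Connect = "ODBC;DSN=MedOffice;Trusted_Connection=Yes"   ' hypothetical DSN
    qdf.SQL = "SELECT PatientID, LastName, FirstName " & _
              "FROM dbo.Patient WHERE LastName = 'Smith'"
    qdf.ReturnsRecords = True
    qdf.Close

A form or report bound to qryptPatientLookup then pulls only the matching rows across the network instead of the whole Patient table.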

Patrick
Agree with Patrick, but just wanted to comment that 86 fields is not necessarily bad for a Patient table. Having implemented several commercial clinical systems, I've seen properly normalized patient databases with 100+ attributes in the Patient table. The patient directives alone (IV hydrate, DNR) can chew up quite a few fields.

Get it into SQL Server or Oracle so you can better profile what is going on, and make sure to examine your queries to ensure you have proper indexes.
ASKER CERTIFIED SOLUTION (mrjoltcola; full text available to Experts Exchange members only)
SOLUTION (full text available to Experts Exchange members only)
Note also that most Access developers consider roughly 15 concurrent users to be the practical maximum on a Jet database, although you can certainly have apps that support more users. Anytime I start to hear a client mention "10 to 15 users", my first thought is to move to a more robust database platform, like SQL Server.

With that said, I've also seen several apps that began to fall over when they reached 10 users, depending on the complexity of the system. As others have said, you might try beefing up the machines and network, but long term you'll need to move to a more robust data platform (at least in my opinion).
>>I would prefer to keep the front end application on Access 97 and not rewrite it all. The client is amenable to a SQL Server solution.<<
Unless you are prepared to re-analyze and re-write your app to take advantage of SQL Server, don't bother migrating your data to SQL Server. You will only be frustrated and disappointed with the results. Again, if you do not make any changes, don't be surprised if it is actually slower than on MS Access.
<<The degraded performance always occurs when 12-18 users are accessing the application simultaneously. I haven't performed any actual tests but I suspect that some of the queries are moving a lot of data across the network. This in turn is slowing down the application as well as other network related tasks such as opening documents on the server.>>

  Things I'd look at:

1. Make sure you are compacting and repairing on a regular basis.
2. Your indexing. As data grows, indexing becomes more critical. Run the Performance Analyzer to see if it suggests anything.
3. If you're enforcing RI, make sure you have NOT defined your own index on the foreign key in the many-side table. When enforcing RI, Jet creates a hidden index on the many side for you. If you add one on top of that, you have a duplicate index and unneeded overhead.
4. Depending on how the app was developed, this may or may not help, but at startup open a hidden form bound to a table in the BE and don't close it till the app quits (a minimal sketch follows this list).

    If your FE closes all connections to the BE at any point, the BE file itself is closed; as soon as you touch a table again, it must be reopened. This repeated close/open of the BE can consume a lot of overhead.

5. Check your app for DLookup() calls. A DLookup() should never appear in a query or SQL statement. Also check for multiple occurrences of them back to back, e.g.:

  =DLookup(...)
  =DLookup(...)
  =DLookup(...)

  Replace these by opening a recordset, fetching the record, and getting all the field values at one time (see the second sketch below). DLookup() calls in a SQL statement or query should be replaced with an appropriate join.
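
  A minimal sketch of #4, assuming a small linked BE table and a form named frmHidden bound to it (both names hypothetical):

    ' Call once at startup and leave the form open until the app quits.
    ' Holding one bound form open keeps the connection to the BE alive,
    ' so Jet does not repeatedly close and reopen the back end file.
    Public Sub KeepBEOpen()
        DoCmd.OpenForm "frmHidden", WindowMode:=acHidden
    End Sub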
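
  And a sketch of #5, collapsing three DLookup() calls against the same record into one recordset fetch, as it might look in a form's module (table and field names hypothetical):

    Dim rs As DAO.Recordset

    ' One query instead of three separate DLookup() scans
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT LastName, FirstName, DOB FROM tblPatient " & _
        "WHERE PatientID = 12345", dbOpenSnapshot)
    If Not rs.EOF Then
        Me.txtLastName = rs!LastName
        Me.txtFirstName = rs!FirstName
        Me.txtDOB = rs!DOB
    End If
    rs.Close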

  While the above may get you going, I would agree with the others in moving to something newer. I'm also a great fan, though, of "if it's not broke, don't fix it". I see nothing wrong with sticking with A97 even though Microsoft has been over and done with it for a while now. The bugs and security holes are known, and if it provides all the development features that you need, then really there is no reason to replace it.

   My main concern would be the number of users; generally by the time you start getting close to 20, having the app kick out for any reason costs a LOT in time and money. A more robust BE such as SQL Server would be good. So I would look at a new BE for sure.

<<Unless you are prepared to re-analyze and re-write your app to take advantage of SQL Server, don't bother migrating your data to SQL Server. You will only be frustrated and disappointed with the results. Again, if you do not make any changes, don't be surprised if it is actually slower than on MS Access.>>

  It really doesn't take a full re-write, as I would have also thought in the past, and I don't agree with the second part of this comment at all. More often than not, just moving the data to SQL Server and switching to linked tables in the FE will give a performance boost. There may be a query or two that needs to be changed, but for the most part, it works and works better than what you have now.

  Beyond that, in the past I would have said you need to look at rewriting things, but with a couple of the tips found here:

"The Best of Both Worlds: Access-SQL Server Optimization"
http://www.jstreettech.com/cartgenie/pg_developerDownloads.asp

  you'd be surprised how well you can do with just a few modifications. Of course, to fully harness the power of SQL Server, you'd need some extensive rework. However, that can be done over time.

  My suggestion here would be to simply try it. Download SQL Server Express and, with a copy of the DB, move the data into SQL Server and then run a test with just ODBC linked tables in the front end.
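
  As a rough sketch of that test, one table can be linked with DAO like this (the server, database, and object names are hypothetical):

    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    ' DSN-less ODBC link to a single SQL Server table
    Set tdf = db.CreateTableDef("tblTransaction")
    tdf.Connect = "ODBC;Driver={SQL Server};" & _
                  "Server=MEDSRV;Database=MedOffice;Trusted_Connection=Yes"
    tdf.SourceTableName = "dbo.Transaction"
    db.TableDefs.Append tdf

  Repeat for each table (dropping the old local link first), then run your existing queries unchanged and compare timings.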

JimD.

 BTW, one other thing: make sure the server is not running out of physical memory. That will really slow down operations, and it's something that can creep up on you without your realizing it (especially if on SBS).

JimD.
>> My suggestion here would be to simply try it. <<
Fair enough and I agree.  There is not a better way to learn.  Just make sure to come back and tell us your experiences.
Bernard Beauvoir (ASKER):

After reviewing all of the helpful information offered here, I decided that what I really needed was a short term strategy and a long term strategy. The long term strategy will be outlined later.

My short term strategy was to significantly increase the performance of the application ASAP. The application had become rather frustrating to use, depending on the load, and some of the staff had begun to give me unappreciative glares. With this in mind, I decided to try the following suggestion right away:

I installed an inexpensive ($150) gigabit switch and two gigabit NICs from the same manufacturer. Installing one NIC in the server and the other one in a PC, I ran a series of tests on the gigabit connected PC and a 10/100 connected PC. The gigabit connected PC showed a marked performance increase over its 10/100 connected counterpart. Response times were cut by at least half! Emboldened by this success, I installed another eight NICs with pretty much the same results. Since this testing was done after hours with no one else on the system, I was eager to see if I would get similar results under live "full load" conditions.

The next day I ran the same tests while the entire staff was present and using all the different applications. There was indeed still a big difference in performance between the two test PCs, but both PCs were slower than they had been the night before!

Using another of the suggestions posted here, I decided to install another hard drive (on another controller) to host ONLY the MS Access back end. I suspected that the Access application was competing with other applications for the server's sole hard drive. (I've since learned about disk queue length...) I installed a 160GB, 7200RPM, SATA II unit. With this configuration, the Access application ran almost as fast under load as it did when not under load. In fact, the application seemed to be running almost as fast as it had ever run ... and the staff seemed a lot friendlier.

For the most part I felt that I'd successfully implemented my short term strategy...with one telling exception.

This company had recently expanded into a new building. As it turned out, the only way to connect the main building to the new building was via a wireless bridge with a throughput of only about 60 Mbit/sec. As a result, the three users in this new building didn't really benefit from the gigabit network installed in the main building. Indeed, their response times were 3-4 times slower than those of their main building counterparts. I reasoned that these slow response times could only be attributed to the slower network connection, since everything else was the same. I had to conclude that the application was moving too much data across the 60 Mbit/sec wireless bridge and creating a bottleneck.

This caused me to take a more critical look at the application, particularly ways to reduce the amount of data flowing out of it. Per a suggestion posted here, I focused on table indexes. Adding a secondary index to some of the larger tables provided the largest performance boost. Nearly every query that used those tables saw a performance boost with response times now being "borderline acceptable".
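
For anyone curious, adding a secondary index is just a bit of Jet DDL run against the back end; something like the following (the path, table, and field names are placeholders, not my actual ones):

    Dim db As DAO.Database

    ' Open the back end directly so the new index is created in the BE file
    Set db = OpenDatabase("\\server\data\backend.mdb")
    ' Non-unique secondary index on a field used heavily in joins and criteria
    db.Execute "CREATE INDEX idxTransPatient ON tblTransaction (PatientID)", dbFailOnError
    db.Close

With the index in place, Jet can resolve criteria against the index pages instead of pulling entire tables across the wire, which is presumably why it mattered most on the slow wireless link.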
 
In reviewing all the data to date, I've come to the following conclusions: (I welcome your inputs)

1) The major queries move a lot of data from the server to the workstations. Upgrading the network to gigabit speed eliminated this problem for every user in the main building.

2) The major queries keep a hard drive pretty busy with read requests when there are many users accessing the application. Putting the back end on its own controller and hard drive alleviated this problem.

3) Proper indexing becomes VERY important in low bandwidth environments.

At this point, I've decided that the back end should be hosted on SQL Server in a true client/server environment and that is my long range goal. While the faster network has all but eliminated slow response times in the main building, the users in the new building are still suffering from just barely acceptable response times.

I'm hopeful that response times will be reduced by having SQL Server perform most of the heavy lifting. I'm also hopeful that response times will be improved by decreased network traffic since SQL Server will only deliver the data requested by the workstation.

I've already "upsized" the Access back end to SQL Server 2008 and have ported over a couple of simple queries. Some parts of the application will port rather easily to SQL Server. Other parts will require some serious reworking. Fortunately, I'm not as much under the gun now to complete everything.

I welcome your inputs!

Thanks to everyone who offered suggestions!

>>At this point, I've decided that the back end should be hosted on SQL Server in a true client/server environment and that is my long range goal. <<
Correct. If your goal is to move to SQL Server, your only hope of a scalable application is rewriting it so that only the necessary data is sent across the network. You cannot realistically expect an application written for MS Access to perform better on SQL Server unchanged; it just will not happen. You can add all the band-aids and workarounds you like, but in the end I suspect the app on MS Access will still outperform the one on SQL Server, albeit with both of them running faster than before.