I'm using a split database: one client front end plus multiple back-end data files, OCXs, DLLs, etc.
The complete set of client and data files is approximately 160 MB. From this I manage an additional 20 GB of data (not stored inside Access).
One of the data files contains replicated tables only, and is approximately 60 MB.
All databases are in Access 2003 format.
Periodically, when I try to attach tables or access the TableDefs collection (or similarly the QueryDefs collection) from my client, it takes several seconds to refresh or add a QueryDef/TableDef. Usually the same action is instantaneous. My client uses about 50 tables spread across the data files, which it links to once only during startup. Usually the linking process takes about one second, but it can blow out to several minutes.
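For context, the startup linking is done with a loop along these lines (a simplified sketch only; the procedure, table, and path names here are placeholders, not the actual code):

```vba
' Re-link one back-end table into the client front end via DAO.
' BackEndPath and TableName are illustrative placeholders.
Sub RelinkTable(BackEndPath As String, TableName As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb()

    ' Drop any stale link first, ignoring "item not found" errors.
    On Error Resume Next
    db.TableDefs.Delete TableName
    On Error GoTo 0

    ' Create a fresh linked TableDef pointing at the back-end file.
    Set tdf = db.CreateTableDef(TableName)
    tdf.Connect = ";DATABASE=" & BackEndPath
    tdf.SourceTableName = TableName
    db.TableDefs.Append tdf   ' this is the step that sometimes stalls

    db.TableDefs.Refresh
End Sub
```

It's the `TableDefs.Append` / `TableDefs.Refresh` calls in a loop like this that go from instantaneous to minutes-long.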
Also the problem seems to be exacerbated by the synchronisation process.
Why would it work so fast MOST of the time, then slow to an absolute crawl at other times?
My current workaround:

1. Create an empty database file, save it as DM1, and import all the data from the original design master into DM1.
2. Create ANOTHER empty database file, save it as DM2, and import all the data from DM1.
3. Promote DM2 to be the new design master, recreate the replica set, then repeat this double-import process for all my other data files.

After that, the system runs the way it used to (before the upgrade to 2003) and works fast for a few days, then falls in a heap again.
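The import step of that rebuild can be scripted rather than done through the UI. A rough sketch, assuming it's run from inside the freshly created empty DM file and that `SourcePath` (a placeholder) points at the database being copied:

```vba
' Import all non-system tables from a source MDB into the
' current (empty) database. Run from inside the new DM file.
Sub ImportAllTables(SourcePath As String)
    Dim srcDb As DAO.Database
    Dim tdf As DAO.TableDef

    Set srcDb = DBEngine.OpenDatabase(SourcePath)

    For Each tdf In srcDb.TableDefs
        ' Skip Jet system tables (names beginning with "MSys").
        If Left$(tdf.Name, 4) <> "MSys" Then
            DoCmd.TransferDatabase acImport, "Microsoft Access", _
                SourcePath, acTable, tdf.Name, tdf.Name
        End If
    Next tdf

    srcDb.Close
End Sub
```

Queries, forms, etc. would need similar loops over the other object collections; only tables are shown here.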
Also, the client and data files are issued to multiple users at various sites.
Compacting the databases doesn't help. Defragmenting the hard drives doesn't help.
Only recreating a completely new set of MDBs temporarily fixes the problem.
Shutting down the client and re-opening it after having it invoke synchronisation of the replicated data file sometimes temporarily alleviates the problem.
Dazed and confused...