Do Multi-core processors cause "Record in Use" errors in Visual FoxPro?

We have an 11-year-old Visual FoxPro application that this company developed.  I started working on it 2 years ago and I am noticing a lot of the same consistent errors at our customer sites.  The application is a single-threaded, single-tier application that accesses a local Visual FoxPro database.  The users access the application by logging into a server via Terminal Server, so the standard setup is one BIG server with the app and DB local and multiple Terminal Server sessions accessing the app.  In the last two years my team has addressed a lot of the big errors, but we are baffled by the “record in use” error.  Our larger clients (20 to 60 users) appear to encounter it much more than the smaller clients.  When they encounter the error, we reindex the tables in the database and the error goes away.  However, within weeks the errors come back.  We then reindex again and it works.  I know, you’re asking yourself why we don’t have them reindex every night.  Well, we could, but that does not solve the problem, it just hides it.  I would prefer to fix the problem.

I hear from others in the company that we have been having more of these errors in the past 4 years than ever before.  The main difference is really the hardware: our customers now have much bigger, faster servers.  I believe that the main culprit is the speed of the processors and the multi-core hardware.  I have found documents about Visual FoxPro and multithreading that lead me to this suspicion.  According to Microsoft docs, every time a “select” statement is made, the OS runs it in another thread if one is available.  Without coding for that, it appears to me that this could cause errors like “record in use” and “sequence out of sync”.

If there is anyone out there that knows VFP, can you shed some light on this for me?  Am I on the right track or out in left field?

CaptainCyril (Founder, Software Engineer, Data Scientist) commented:
This error happens when you are trying to open a file when it is in use.

Are you sure you are opening the files in shared mode?

smithwahl (Author) commented:
Yes, we do use SET EXCLUSIVE OFF.

"Record in use" can be caused by delays in data processing. One app instance is writing data (and updating indexes) and the appropriate records are locked, while another instance waits for a record lock; if this waiting period exceeds the timeout set by SET REPROCESS and related commands, the above error appears. Large and unbalanced indexes certainly slow down data access.
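For illustration, the timeout behavior described above is controlled by SET REPROCESS. A minimal sketch (the table name, field, and timeout value are only examples, not from the original app):

```foxpro
* Sketch: control how long VFP retries a failed lock before
* raising "Record is in use by another user" (error 109).
SET EXCLUSIVE OFF            && open tables shared
SET REPROCESS TO 5 SECONDS   && retry locks for 5 seconds, then raise the error
* SET REPROCESS TO 100       && or: retry a fixed number of attempts
* SET REPROCESS TO -1        && or: retry indefinitely (no error, but may appear to hang)

USE customers SHARED
IF RLOCK()                   && try to lock the current record
    REPLACE balance WITH balance + 10
    UNLOCK                   && release the lock as soon as possible
ELSE
    * the lock could not be obtained within the REPROCESS window
    MESSAGEBOX("Record is busy, please retry.")
ENDIF
```

The shorter each lock is held, the less likely another session is to exhaust its REPROCESS window waiting for it.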

You should investigate why delays occur in your app. Maybe index files are too large and their update is time consuming, maybe antivirus software is blocking file access, maybe ...

A higher number of concurrent users certainly makes this error worse. You should record some statistics, which will tell you where to begin. Does the error occur on all DBF files or just some of them? Do DBF and index size affect error occurrence? How often are you cleaning (PACKing) your data?

How often does the “sequence out of sync” error appear? This error means a corrupted index file. It can happen when the application is forced to disconnect from its data. Do you always close your app the standard way, or could users just be switching the computer off? You should record all logins and logouts, and if you don't find appropriate pairs then you have to force a REINDEX operation.
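The login/logout pairing idea could be sketched like this (the audit table and its fields are hypothetical, invented for illustration):

```foxpro
* Sketch: record logins/logouts in an audit table; unmatched logins
* indicate the app ended abnormally and indexes may need rebuilding.
* Table AUDITLOG(userid C(20), event C(6), stamp T) is hypothetical.
PROCEDURE LogEvent(tcUser, tcEvent)
    INSERT INTO auditlog (userid, event, stamp) ;
        VALUES (tcUser, tcEvent, DATETIME())
ENDPROC

* At application startup, check for logins without a matching logout:
SELECT COUNT(*) FROM auditlog WHERE event = "LOGIN"  INTO ARRAY laIn
SELECT COUNT(*) FROM auditlog WHERE event = "LOGOUT" INTO ARRAY laOut
IF laIn[1] > laOut[1]
    * at least one session ended abnormally - rebuild the indexes
    USE customers EXCLUSIVE
    REINDEX
    USE
ENDIF
```

Note that REINDEX requires opening the table exclusively, so the check is best done before any users are connected.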

Hardware speed should not have a bad influence. The number of threads used for SQL commands is irrelevant. If you are using Terminal Services then SQL commands from different clients are probably executed on different CPU cores, but the bottleneck is physical data access on disk, which depends on the file system and its speed optimization.


Tushar commented:
Are you using table buffering? That would decrease the record-locking time. What is the value of SET REPROCESS? Maybe you should increase it.
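Table buffering as suggested here might look like the following sketch (the table name and field are invented examples):

```foxtro
* Sketch: optimistic table buffering keeps the record-lock window
* as short as possible - locks are taken only inside TABLEUPDATE().
SET MULTILOCKS ON                       && required for table buffering
USE orders SHARED
CURSORSETPROP("Buffering", 5)           && 5 = optimistic table buffering

REPLACE qty WITH qty + 1                && change is buffered, no lock yet

IF NOT TABLEUPDATE(.T., .F., "orders")  && try to commit all buffered rows
    * commit failed (e.g. an update conflict) - inspect and revert
    AERROR(laErr)
    TABLEREVERT(.T., "orders")
    MESSAGEBOX("Update failed: " + laErr[1,2])
ENDIF
```

With buffering, many edits can be made without holding any lock; the lock exists only for the brief moment TABLEUPDATE() writes the rows.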
CaptainCyrilFounder, Software Engineer, Data ScientistCommented:
It also happened with a client of mine, who simply disabled antivirus scanning on DBF, FPT and CDX files (or on the whole Fox data folder), and it worked like a charm.
Olaf Doschke (Software Developer) commented:
Multi-core processors would not be the reason. A VFP EXE will have some threads, but data access within the same VFP EXE will not use more than one core unless you explicitly use multithreading and then process the same DBF within several threads.

Even if it did, the "Record in Use" error basically hints at a file-locking failure of the process wanting to write to a record, and that failure typically comes from some other user, who is not merely in another thread or process but even on another computer. So this is not a multi-core or threading problem.

Are you aware that SET EXCLUSIVE is among the session-specific settings, and that you need to redo SET EXCLUSIVE OFF every time you start a new data session, e.g. via a form set to a private data session or via a Session class?
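The session-specific pitfall above can be sketched as follows (the form class and table are hypothetical examples):

```foxpro
* Sketch: a form with DataSession = 2 (private data session) gets a
* fresh copy of the session settings, so a SET EXCLUSIVE OFF issued
* at application startup does NOT carry over. Reissue it per session.
DEFINE CLASS frmCustomers AS Form
    DataSession = 2                && private data session

    PROCEDURE Load
        SET EXCLUSIVE OFF          && must be repeated in every new session
        SET REPROCESS TO 5 SECONDS && session-specific as well
        USE customers SHARED
    ENDPROC
ENDDEFINE
```

If this is missed, a single form opening a table in the default EXCLUSIVE mode will make every other session fail against that table.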

As you have experienced, the problems grow with larger data and more users. It's more a structural problem than a configuration one. If it's as consistent as you observe, then you probably haven't cared much about automatic or manual locking and about conflict handling of failed table updates. E.g. TABLEUPDATE() does report update conflicts. Tushar already mentioned REPROCESS.

The basic problem is that only one user can have write access to a DBF file at a time. So keep automatic or manual locks as short as possible; that's one thing you can do. The other is to increase the lock timeout to infinite. After all, write access must be serialised somehow, and there is no server-side component which does that, so the way is to lock the file.

On the file system and network protocol side, SMB2 and especially opportunistic file locking is known to be problematic with any file based database, be it VFP DBFs or Access MDBs.

Bye, Olaf.
smithwahl (Author) commented:
The question was answered, but no good reason was given to counter the conclusion that the evidence suggests. I am satisfied with the answer.