File locking / share access

I have a database of registered users of my site. Every user can update his own information, and it is written back to the same place. How should I organise that database so that there won't be any problem with simultaneous access by multiple users? I am running Apache on Linux, with Perl. I can use mSQL, xBase and, of course, plain text or a tied hash.
Should I create a "lock" file and wait in a loop until that file is deleted?
Note: the "Users" database is just a sample. Really, I am talking about a file (table) that is being read/updated many times per second, and that's why I expect problems with sharing.
Thank you!
tivnet asked:

tivnet (Author) commented:
Edited text of question.
cadabra commented:
================================================================================
Note that the following pseudo-code is not reliable:


while true
  if (not exists LockFile) then
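    # <-- race window: another process can create LockFile right here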
    CreateLockFile
    exit-while
  end if
  sleep
end while
AccessDataFile


In a multi-process environment, a different process may create the lock file between the exists-check and the CreateLockFile call, so two processes can both believe they hold the lock.

The code below uses sysopen with the O_CREAT and O_EXCL flags, which atomically creates the file and fails if it already exists.

  use Fcntl;

  # O_CREAT | O_EXCL makes the create atomic: it fails if the file exists.
  sysopen(LOCKFH, $path, O_WRONLY | O_CREAT | O_EXCL)
      or die "Couldn't create lock file $path: $!\n";

This way you can loop until the lock file is created successfully. When you are done accessing the data file, you delete the lock file.
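For example, here is a minimal sketch of that acquire/use/release cycle (the lock path and the one-second retry interval are placeholder choices, not requirements):

  use Fcntl;
  use Errno qw(EEXIST);

  my $lockfile = "/tmp/users.lock";   # hypothetical lock path

  # Loop until we atomically create the lock file; O_CREAT | O_EXCL
  # fails with EEXIST while another process holds the lock.
  until (sysopen(LOCKFH, $lockfile, O_WRONLY | O_CREAT | O_EXCL)) {
      $! == EEXIST or die "Can't create $lockfile: $!\n";
      sleep 1;                        # poll and retry
  }
  close LOCKFH;

  # ... read/update the shared data file here ...

  unlink $lockfile or warn "Couldn't remove $lockfile: $!\n";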

The problem with lock files is that you have to poll and wait for them to be released (which seems unreasonable for a file accessed many times per second), and there is always a chance that a lock file stays on disk because of a bug in your code or some other mishap. The data would then be inaccessible until you manually remove the lock file.

If flock(2) is implemented on your machine, you can try something like:

  use Fcntl qw(:flock);   # imports LOCK_SH, LOCK_EX, LOCK_NB, LOCK_UN

  open MyFile, ">>PathToMyFile"
      or die "Can't open MyFile: $!";
  flock MyFile, LOCK_EX
      or die "Can't lock MyFile: $!";
  print MyFile "here is a new line\n";
  flock MyFile, LOCK_UN;

Unlike a lock file, flock blocks until the lock is granted (no polling), and the lock is released automatically when the process closes the file or exits, so a crashed process cannot leave a stale lock behind.


You can try implementing the critical section with semaphores:
http://theory.uwinnipeg.ca/CPAN/perl/pod/perlfunc/semop.html
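
A rough sketch of the SysV semaphore approach might look like this (the key value is hypothetical, and real code would initialize the semaphore only once, when it is first created):

  use IPC::SysV qw(IPC_CREAT S_IRUSR S_IWUSR SETVAL);

  my $key = 1234;                     # hypothetical key shared by all processes
  defined(my $semid = semget($key, 1, IPC_CREAT | S_IRUSR | S_IWUSR))
      or die "semget: $!";
  semctl($semid, 0, SETVAL, 1);       # set to 1 (only on first creation!)

  semop($semid, pack("s!3", 0, -1, 0))    # P: wait and decrement
      or die "semop (lock): $!";

  # ... critical section: access the data file ...

  semop($semid, pack("s!3", 0, 1, 0))     # V: increment, releasing the lock
      or die "semop (unlock): $!";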

Depending on the number of users in your system, you can consider having a dedicated data file for each user.

If you use a database, it has internal locking capabilities and usually gives you control over the type of lock (exclusive or shared, at row, page, or table granularity). You can also control the timeout for waiting on a lock.
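
For instance, a minimal DBI sketch (the DSN, table, and column names here are hypothetical; the mSQL driver is DBD::mSQL):

  use DBI;

  my ($user_id, $new_email) = (42, 'user@example.com');   # sample values

  # Hypothetical data source; adjust the database name and host.
  my $dbh = DBI->connect("DBI:mSQL:users_db:localhost", "", "",
                         { RaiseError => 1 })
      or die "Can't connect: $DBI::errstr";

  # The server serializes concurrent updates internally,
  # so no application-level lock file is needed.
  my $email = $dbh->quote($new_email);
  $dbh->do("UPDATE users SET email = $email WHERE id = $user_id");

  $dbh->disconnect;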


Hope this helps,
Cadabra

ozo commented:
perldoc -f flock
perldoc -q lock
tivnet (Author) commented:
Summary of cadabra's answer (which is good): locking is not very reliable; use a database. Question: if, say, I start with mSQL, will it be fast enough to allow select/insert/delete with row/table locking many times per second?
What if I write a daemon that will process all those requests?
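
A single-writer daemon might look roughly like this sketch, where one process owns the data file and handles requests one at a time over a Unix-domain socket (the socket path and the line-based protocol are just assumptions):

  use IO::Socket::UNIX;

  my $server = IO::Socket::UNIX->new(
      Type   => SOCK_STREAM,
      Local  => "/tmp/users.sock",    # hypothetical socket path
      Listen => 5,
  ) or die "Can't create socket: $!";

  # Only this process touches the data file, so every request is
  # serialized simply by being handled one at a time in this loop.
  while (my $client = $server->accept) {
      chomp(my $request = <$client>);
      # ... apply $request to the data file, then reply ...
      print $client "OK\n";
      close $client;
  }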