intechfs asked:
BerkeleyDB and Perl, lock not removed if crashed

I have a Perl script which I'm using with BerkeleyDB.

I want to make sure the database is locked whilst I read it or write to it.

I wrote some code that worked fine, but it had a problem: if one part crashed whilst it held a lock, the lock was never removed (I had to delete the files created by BerkeleyDB and start again to get rid of it).

I have now created some code to test this with.

One piece of code acquires a lock and then just sits in a loop forever. The other one tries to write to the database.

What I want to happen is that when the first piece of code is crashed, the second one is then able to continue, but this isn't the case.

Can someone please tell me what I need to do to achieve this?
Code A
 
use BerkeleyDB;
use FindBin qw($Bin);
 
my $d_rsync = "$Bin/data/plugin-rsync.ydb";
 
#load the database
my $dbEnv;
unless ( $dbEnv = BerkeleyDB::Env -> new ( -Home => "$Bin/data" , -Flags  => DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL ) ) {
	Log ( $l_db , "failed to load database environment.\nReason: $BerkeleyDB::Error" );
	exit;
}
 
#load the data into a hash
my %rData;
my $hDb;
unless ( $hDb = tie %rData , 'BerkeleyDB::Hash' , { -Filename => $d_rsync , -Flags => DB_CREATE , -Env => $dbEnv } ) {
	Log ( $l_db , "failed to load database \'$d_rsync\'.\nReason: $BerkeleyDB::Error" );
	exit;
}
 
#acquire the CDS lock on the database and hold it forever (to simulate a crash while locked)
my $lock = $hDb -> cds_lock();
 
while (1) {
	sleep (1);
}
exit;
 
 
 
Code B
 
use BerkeleyDB;
use FindBin qw($Bin);
 
my $d_rsync = "$Bin/data/plugin-rsync.ydb";
 
#load the database
my $dbEnv;
unless ( $dbEnv = BerkeleyDB::Env -> new ( -Home => "$Bin/data" , -Flags  => DB_CREATE | DB_INIT_CDB | DB_INIT_MPOOL ) ) {
	Log ( $l_db , "failed to load database environment.\nReason: $BerkeleyDB::Error" );
	exit;
}
 
#load the data into a hash
my %rData;
my $hDb;
unless ( $hDb = tie %rData , 'BerkeleyDB::Hash' , { -Filename => $d_rsync , -Flags => DB_CREATE , -Env => $dbEnv } ) {
	Log ( $l_db , "failed to load database \'$d_rsync\'.\nReason: $BerkeleyDB::Error" );
	exit;
}
 
#acquire a write lock on the database
my $lock = $hDb -> cds_lock();
my $lCursor = $hDb -> db_cursor ( DB_WRITECURSOR );
 
#set the status in the database
$rData{'state'} = 'Scanning';
 
#remove the lock
$lCursor -> c_close();
$lock -> cds_unlock();


Adam314:

Do you have only a specific set of programs that will be accessing the database? If not, then you can't do what you are asking, because a program that can't get the write lock has no way of knowing whether another program has crashed or is simply still working.
If you do have a specific set of programs (or just one program), each could check whether the others are running and, if none are, assume they have crashed and left their lock in place.
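For example, if the lock-holding script writes its PID to a known file when it starts, the other script can test whether that process still exists before treating the lock as stale. A minimal sketch (the PID file name is just illustrative):

use strict;
use warnings;
use FindBin qw($Bin);

#hypothetical PID file written by whichever script takes the lock
my $pidfile = "$Bin/data/locker.pid";

my $holder_alive = 0;
if ( open my $fh, '<', $pidfile ) {
	chomp( my $pid = <$fh> );
	close $fh;
	#"kill 0" sends no signal, it only checks that the process exists
	$holder_alive = 1 if $pid && kill( 0, $pid );
}

unless ($holder_alive) {
	#the lock holder is gone, so its lock can be treated as stale
	print "lock holder not running, lock is stale\n";
}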
intechfs (Asker):

It is a specific set of programs, but I still don't think that would work. If scripts A and B are running and A crashes, then even if I restart B (and don't restart A), B can't write to the database and just hangs until I delete the database files.
So when you say "a is crashed", you don't mean it's stopped running, you mean it is stuck somewhere and not releasing its lock.

If so, no, you can't do this. The way you solve this is to take a lock only for when you need it, then release it. You don't take and keep a lock for longer than is necessary.
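For example, in Code B that means taking the lock immediately before the single write and dropping it straight afterwards, using the same BerkeleyDB calls already shown above:

#take the CDS lock only around the one write, then release it at once
my $lock = $hDb -> cds_lock();
$rData{'state'} = 'Scanning';
$lock -> cds_unlock();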
Well no, I do mean it's crashed. I'm keeping a lock for a very small amount of time, but on the off chance it crashes during that time it doesn't release the lock, and there is no way to get that lock removed without deleting the files.

So A is a script that holds a lock and then loops, so I can test crashing it while it has the lock. If I do crash A while it holds the lock, the lock is never released, even if I then only start script B.
If you know A and B are the only two processes that will be accessing the database, you could have script B check whether the db is locked. If it is, script B could look and see if script A is running, and if not, it could delete the files.
It could, but deleting the files also deletes all the data in the database, and that loses everything it has been working so hard to find out.

So basically, if a process crashes there is no way to then remove the lock it had. I'm not sure it is going to be a major issue, because locks are held for tiny amounts of time and only do small writes or reads before unlocking; I just wanted to see if there was a way to fix it on the rare occasions when it does happen.
ASKER CERTIFIED SOLUTION
Adam314:
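One way to clear a stale lock without losing the data is to remove only the environment's shared region files rather than the database file itself: Berkeley DB keeps the CDS locks in the __db.* files it creates in the -Home directory, while the data lives in the .ydb file. A rough sketch, assuming the default region file names and that no process still has the environment open:

use strict;
use warnings;
use FindBin qw($Bin);

my $home = "$Bin/data";

#only safe when no process still has the environment open;
#the __db.* files hold the shared regions (locks, memory pool), not the data
for my $region ( glob "$home/__db.*" ) {
	unlink $region or warn "could not remove $region: $!\n";
}

#the database file ($home/plugin-rsync.ydb) keeps all its data;
#re-opening the environment with DB_CREATE recreates fresh region files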
Yep, you're right, that does remove the lock and keep the data.

Thanks a lot for the help!