Bacula concurrent jobs don't work
Asked by Surrano
Server: Ubuntu 12.04 LTS
Bacula version: 5.2.6
Backup device: C2 LTO-3 changer (with LTO-2 tapes, but that shouldn't matter)
I'm trying to configure Bacula to run two jobs simultaneously: different clients, same storage. I *think* I followed the documentation, but something must still be missing, since the jobs keep being queued one after the other.
What I would like to achieve: if two jobs are scheduled for the same time (e.g. BackupFiona and BackupDonkey), or started manually in quick succession, and one is around 100G while the other is 1G, the large job should not block the small one from being saved at the same time, so the small one finishes well in time.
Director config: (stripped to relevant parts)
Director {                        # define myself
  Name = fiona-dir
  Maximum Concurrent Jobs = 4
  ...
}

JobDefs {
  Storage = "C2 changer"
  Pool = Default
  Priority = 10
  SpoolData = yes
  SpoolSize = 512M
  ...
}

Job {
  Name = "BackupFiona"
  JobDefs = "DefaultJob"
}

Job {
  Name = "BackupDonkey"
  Client = donkey-fd
  FileSet = "Full Set Donkey"
  JobDefs = "DefaultJob"
}

# Clients (File Services) to back up
Client {
  Name = fiona-fd
  Address = fiona
  ...
}

Client {
  Name = donkey-fd
  Address = donkey
  ...
}

Storage {
  Name = "C2 changer"
  Address = fiona
  Device = "C2 changer"           # must match the Device name in the Storage daemon
  Media Type = LTO-3              # must match the Media Type in the Storage daemon
  Autochanger = yes               # enable for autochanger device
  Maximum Concurrent Jobs = 4
}

# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes                   # Bacula can automatically recycle Volumes
  AutoPrune = yes                 # prune expired volumes
  Volume Retention = 365 days     # one year
}
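For completeness, here is a sketch of extra directives I have considered on the Director side. As far as I can tell from the documentation, the Job and Client resources each have their own Maximum Concurrent Jobs that defaults to 1, so this is an untested assumption worth trying (resource names match my config above):

```conf
# Sketch (untested assumption): allow concurrency at the Job and Client level too.
JobDefs {
  ...                             # existing DefaultJob directives as above
  Maximum Concurrent Jobs = 4     # Job-level limit reportedly defaults to 1
}
Client {
  Name = fiona-fd
  ...
  Maximum Concurrent Jobs = 4     # Client-level limit reportedly defaults to 1
}
```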
Storage config: (also stripped)
Storage {                         # definition of myself
  Name = fiona-sd
  Maximum Concurrent Jobs = 20
}

Autochanger {
  Name = "C2 changer"
  Device = "C2 LTO-3"
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg1
}

Device {
  Name = "C2 LTO-3"
  Media Type = LTO-3
  Archive Device = /dev/nst0
  AutomaticMount = yes            # when device opened, read it
  AlwaysOpen = yes
  RemovableMedia = yes
  RandomAccess = no
  Maximum File Size = 4GB
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  SpoolDirectory = "/var/spool/bacula"
}
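The per-device limit on the SD side may also matter: some Bacula releases accept a Maximum Concurrent Jobs directive directly in the Device resource. I am not certain 5.2.6 supports it there, so this is a hedged sketch to verify against the version's documentation:

```conf
Device {
  Name = "C2 LTO-3"
  ...                             # existing directives as above
  Maximum Concurrent Jobs = 4     # assumption: check whether 5.2.6 accepts this here
}
```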
FD config for client1: (client2 is pretty much the same)
FileDaemon {                      # this is me
  Name = fiona-fd
  Maximum Concurrent Jobs = 20
  ...
}
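To test, I start both jobs back to back from bconsole and then check whether both show up as running at the same time (a sketch of the session; job names as defined above):

```
* run job=BackupFiona yes
* run job=BackupDonkey yes
* status director
```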