• Status: Solved
  • Priority: Medium
  • Security: Public

Bacula concurrent jobs don't work

Server: ubuntu 12.04 LTS
Bacula version: 5.2.6
Backup device: C2 LTO-3 changer (with LTO-2 tapes but it shouldn't matter)

I'm trying to configure Bacula to run two jobs simultaneously: different clients, same storage. I *think* I followed the documentation, but something must still be missing, since the jobs keep being queued one after the other.

What I would like to achieve: if two jobs are scheduled for the same time (e.g. BackupFiona and BackupDonkey) or started manually in quick succession, and one is 100 GB while the other is 1 GB, then the large job should not block the small one from being saved at the same time, so the small job finishes well in time.

Director config: (stripped to relevant parts)
Director {                            # define myself
  Name = fiona-dir
  Maximum Concurrent Jobs = 4
  ...
}

JobDefs {
  Storage = "C2 changer"
  Pool = Default
  Priority = 10
  SpoolData = yes
  SpoolSize = 512M
  ...
}

Job {
  Name = "BackupFiona"
  JobDefs = "DefaultJob"
}

Job {
  Name = "BackupDonkey"
  Client = donkey-fd
  FileSet = "Full Set Donkey"
  JobDefs = "DefaultJob"
}

# Client (File Services) to backup
Client {
  Name = fiona-fd
  Address = fiona
  ...
}

Client {
  Name = donkey-fd
  Address = donkey
  ...
}

Storage {
  Name = "C2 changer"
  Address = fiona
  Device = "C2 changer"                     # must be same as Device in Storage daemon
  Media Type = LTO-3                  # must be same as MediaType in Storage daemon
  Autochanger = yes                   # enable for autochanger device
  Maximum Concurrent Jobs = 4
}

# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes                       # Bacula can automatically recycle Volumes
  AutoPrune = yes                     # Prune expired volumes
  Volume Retention = 365 days         # one year
}



Storage config: (also stripped)
Storage {                             # definition of myself
  Name = fiona-sd
  Maximum Concurrent Jobs = 20
}

Autochanger {
  Name = "C2 changer"
  Device = "C2 LTO-3"
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg1
}

Device {
  Name = "C2 LTO-3"
  Media Type = LTO-3
  Archive Device = /dev/nst0
  AutomaticMount = yes;               # when device opened, read it
  AlwaysOpen = yes;
  RemovableMedia = yes;
  RandomAccess = no;
  Maximum File Size = 4GB
  AutoChanger = yes
  Alert Command = "sh -c 'smartctl -H -l error %c'"
  SpoolDirectory = "/var/spool/bacula"
}



FD config for client1: (client2 is pretty much the same)
FileDaemon {                          # this is me
  Name = fiona-fd
  Maximum Concurrent Jobs = 20
  ...
}


Asked by: Surrano

1 Solution
Surrano (System Engineer, Author) commented:
Somehow solved... I'm not sure exactly what did the trick, but the most prominent change is that I added Maximum Concurrent Jobs to the Device resource in the SD config as well. I have the feeling it also took a series of restarts / config reloads for some reason before the change was picked up.
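For reference, a minimal sketch of the change described above, applied to the Device resource from the SD config posted in the question. The value 4 is an assumption chosen to match the Director's own Maximum Concurrent Jobs setting; only the last line is new:

```conf
# bacula-sd.conf -- Device resource (sketch)
Device {
  Name = "C2 LTO-3"
  Media Type = LTO-3
  Archive Device = /dev/nst0
  ...
  Maximum Concurrent Jobs = 4   # added; without this the SD serializes jobs on the device
}
```

After editing, both the Director and the SD need to pick up the change, e.g. via `reload` in bconsole for the Director and a restart of the storage daemon service (service names vary by distribution).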
 
Surrano (System Engineer, Author) commented:
0 points for myself :)
