
How to sync a BackupPC pool offsite

Last Modified: 2013-12-01
Information in the wild appears to be sparse on this one, but it seems crucial to anyone serious about a solid backup chain.

I have a working BackupPC installation backing up my various Windows boxes to my Ubuntu server. Everything works great. Now I need to get that data offsite for the nebulous disaster scenario management likes to harp on.

I have tried rsync in various configurations (I am running 3.0.6; my target server, which I cannot reconfigure, is running something older that speaks protocol 29). All attempts have failed. I tried -aH, since there are a crap ton of hard links, and many variations on that theme. rsync discovers a little over 2 million files to move through the pipes, and it is able to create the folder structure on the remote machine, but then it goes through saying it's transferring a bunch of stuff and eventually hangs and fails. On the server, no files are ever received. rsync is my preferred solution, and I'm hoping to be enlightened by some log files soon.

Alternatives are very welcome. Even a two-step process of creating a tar and sending that on its way is fine; I just can't think of how to do that incrementally so that I'm not trying to transfer 200GB+ every night. I'm working through a dual-bonded T1 connection. No bueno.
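The closest thing I can picture is GNU tar's snapshot-based incremental mode, though I haven't tried it against the pool; something like the following, where the .snar path is a placeholder and the pool path assumes the stock Debian location:

# First night: full dump; the .snar file records state for later incrementals.
tar --create --listed-incremental=/var/backups/pool.snar -f - /var/lib/backuppc \
  | ssh USER@HOST 'cat > ~/offsite/pool-full.tar'
# Later nights: reusing the same .snar ships only what changed since the last run.
tar --create --listed-incremental=/var/backups/pool.snar -f - /var/lib/backuppc \
  | ssh USER@HOST 'cat > ~/offsite/pool-incr-$(date +%F).tar'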

Solutions, or even pointers toward potential solutions, will be a *huge* help at this point. The internet seems not to care about offsite backups for this setup that don't involve RAID hacking or USB nonsense.

Commented:
I'm using:

rsync -avz /directory/to/backup/ remote_server:/backup/directory/

Commented:
You can use tar:

tar cvjf - /directory/to/backup | ssh remote_server '(cd /backup/directory/ && tar xvjf -)'

Commented:
Another method is to create an NFS mount that points to the backup destination and back up to it using rsync:

rsync -avz /directory/to/backup/ /backup/mount/directory/
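For example, assuming the remote machine already exports /backup over NFS (the export name here is made up):

# Mount the hypothetical NFS export, sync into it, then unmount.
mount -t nfs remote_server:/backup /backup/mount
rsync -az /directory/to/backup/ /backup/mount/directory/
umount /backup/mount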

Commented:
Remove the -v option if you don't need logging (a list of what was backed up); it should work faster.

rsync checks whether each file is newer than the one on the destination and only then copies it, which means that after the first backup it will be much faster.
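If you want to see what a later pass would send before committing to it, a dry run itemizes the decision per file, e.g. with the same example paths as above:

# -n (dry run) transfers nothing; -i explains why each file would be sent.
rsync -azni /directory/to/backup/ remote_server:/backup/directory/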
Commented:
First, can you tell me the source's and destination's operating systems?

Second, what exactly is the problem? I mean, at which point is your sync failing?

Author

Commented:
Both systems are Debian-based. In the most important sense, my sync is failing immediately. After going through discovery it starts the transfer and reports completion progress and a transfer rate, but no files arrive at the server. After fewer than 100 files it hangs, and it either exits with an "unknown error" or requires an interrupt to quit.

My other rsyncs are working fine, and I am using the stock configuration of BackupPC. I don't understand how this can be such a rare issue. The _only_ potential oddity, simply because it's different, is that my pool is on an XFS drive. I don't know what filesystem is on the server.
Commented:
Can you post the rsync command you're issuing?

Author

Commented:
sudo rsync -r -t -p -o -g -H --delete-during -z --protocol=29 /SOURCE USER@HOST:~/PATH
Commented:
First, replace -r -t -p -o -g with -a.

Second, why do you use the --protocol switch?

Third, add trailing slashes to the folder arguments, as in the sketch below.
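In other words, something along these lines, keeping -H since your pool is full of hard links (-a does not imply it):

# -a is shorthand for -rlptgoD; -H (preserve hard links) must stay separate.
sudo rsync -aH --delete-during -z /SOURCE/ USER@HOST:~/PATH/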

Author

Commented:
The folder arguments are fine; I just didn't copy them here properly. And -a will do all that, of course; I've just been trying to tweak the settings individually to see whether one or more of them is the cause of my issue. I use the protocol switch because the target machine is running 2.6.3 or some other protocol-29 rsync. I know I can drop it now with rsync 3.0.6, but I keep it to remind myself what server I'm working with. Incidentally, I have tried it without the protocol switch and I get the same result. It really only affects how the file comparison is performed and how the transfer happens.
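For what it's worth, this is roughly how I check what's on each end (host placeholder as above):

# Local and remote rsync versions; the older of the two sets the protocol.
rsync --version | head -n1
ssh USER@HOST rsync --version | head -n1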
Commented:
The filesystem shouldn't be a problem. Add the -v and --stats switches to the command line to see if we can get more clues.
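That is, roughly, with your command from above:

# -v lists each file as it is handled; --stats prints a summary of the run.
sudo rsync -aH --delete-during -z -v --stats /SOURCE/ USER@HOST:~/PATH/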

Author

Commented:
The process is now failing earlier, right at the start of the remote file comparison. I did an strace, but because of the number of files the output is rather gigantic. If there's anything in there that I could look for specifically, that would be fantastic.
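In case it helps, this is roughly how I captured the trace (output path arbitrary):

# -f follows rsync's forked children; limiting the trace to network and
# signal syscalls keeps the log size sane; -o writes it to a file.
sudo strace -f -e trace=network,signal -o /tmp/rsync.trace \
    rsync -aH --delete-during -z /SOURCE USER@HOST:~/PATH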

Author

Commented:
I'm going to close this down, as it turns out the problem was completely different from what I supposed. The server-side rsync was being killed by a resource governor that didn't like the super-heavy RAM/CPU usage of rsync 2.6.3. Using 3.0.x on both ends solves the problem, thanks to the new incremental file-list transfer and a more efficient checking algorithm. Thanks for your help.
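For anyone landing here later: if, like me, you can't touch the system rsync on the remote box but can drop a newer binary somewhere (even in your home directory), you can point the client at it with --rsync-path. The binary location below is just an example:

# --rsync-path names the rsync program to run on the remote end;
# ~/bin/rsync is a hypothetical location for a user-installed 3.0.x build.
rsync -aH --delete-during -z --rsync-path=~/bin/rsync /SOURCE/ USER@HOST:~/PATH/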
Commented:
Well.. thanks!
