GoTravel asked:

How to sync a BackupPC pool offsite

Information in the wild appears to be sparse on this one, but it seems crucial to anyone serious about a solid backup chain.

I have a working BackupPC installation from my various Windows boxes to my Ubuntu server.  Everything works fantastic.  Now I need to get that data offsite for the nebulous disaster scenario management likes to harp on.

I have tried rsync in various configurations (I am running 3.0.6; the target server, which I cannot reconfigure, is running something older that uses protocol 29).  All attempts have failed.  I tried -aH, since there are a crap ton of hard links, and many variations on that theme.  Rsync discovers a little over 2 million files to move through the pipes, and it is able to create the folder structure on the remote machine, but then it goes through and says it's transferring a bunch of stuff and eventually hangs and fails.  On the server, no files are ever received.  Rsync is my preferred solution and I'm hoping to be enlightened by some log files soon.

Alternatives are very welcome.  Even a two-step process of creating a tar and sending it on its way is fine; I just can't think of how to do that incrementally so I'm not trying to transfer 200GB+ every night.  I'm working through a dial-bonded T1 connection.  No bueno.

Solutions, or even ones that point me towards potential solutions, will be a *huge* help at this point.  The internet seems not to care about offsite backups for BackupPC that don't involve RAID hacking or USB nonsense.
0ren replied:

I'm using:

rsync -avz /directory/to/backup/ remote_server:/backup/directory/

You can also use tar:

tar -cjvf - /directory/to/backup | ssh remote_server '(cd /backup/directory/ && tar -xjvf -)'
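If re-sending the whole tree every night is the concern, GNU tar can also do snapshot-based incrementals with --listed-incremental; a rough sketch with placeholder paths (each run against the same .snar snapshot file only archives what changed since the previous run):

# first run is a full archive and creates pool.snar; later runs with the same
# pool.snar only include files that changed, so nightly uploads stay small
tar --listed-incremental=/var/backups/pool.snar -cjf - /directory/to/backup \
  | ssh remote_server 'cat > /backup/directory/pool-$(date +%F).tar.bz2'
# restoring means replaying the full archive plus each incremental in order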
Another method is to create an NFS mount that points to the backup destination and back up to it with rsync:

rsync -avz /directory/to/backup/ /backup/mount/directory/

Remove the -v option if you don't need a log of what was backed up; it should run a bit faster.
rsync only copies files that have changed since the last run, so after the first backup it will be much faster.
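For the NFS route, the missing mount step might look something like this, assuming the remote machine already exports /backup over NFS (paths are placeholders, and -H is included because a BackupPC pool is mostly hard links):

# mount the remote export locally, then rsync the pool into it
sudo mount -t nfs remote_server:/backup /backup/mount
rsync -aH /directory/to/backup/ /backup/mount/directory/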
ai_ja_nai replied:
First, can you tell me the source and destination operating systems?
Second, what exactly is the problem? I mean, at which point does the sync fail?
GoTravel (ASKER) replied:

Both systems are Debian-based.  In the most important sense, my sync is failing immediately.  After going through discovery it starts the transfer and reports progress and transfer rate, but no files arrive at the server.  After fewer than 100 files it hangs and either exits with an "unknown error" or requires an interrupt to quit.

My other rsyncs are working fine and I am using the stock configuration of BackupPC.  I don't understand how this can be such a rare issue.  The _only_ potential oddity, simply because it's different, is that my pool is on an XFS drive.  I don't know what filesystem is on the server.
Can you post the rsync command you issue?
sudo rsync -r -t -p -o -g -H --delete-during -z --protocol=29 /SOURCE USER@HOST:~/PATH
First, replace -r -t -p -o -g with -a.

Second, why do you use the --protocol switch?

Third, add trailing slashes after the folders.
The folder arguments are fine, I just didn't copy them here properly.  And -a will do all of that, of course; I've just been trying to tweak the settings individually to see if one or more of them is the cause of my issue.  I use the protocol switch because the target machine is running 2.6.3 or some other protocol-29 rsync.  I know I can drop it now with rsync 3.0.6, but I keep it to remind myself what server I'm working with.  Incidentally, I have tried it without the protocol switch and I get the same result.  It really only affects how the file comparison is performed and how the transfer happens.
The filesystem shouldn't be a problem.  Add the -v and --stats switches to the command line and see if we can get more clues.
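Combined with the earlier suggestion to use -a, that would look something like this (same placeholder paths as above; --protocol=29 is left off since the two ends should negotiate a common protocol on their own):

sudo rsync -aHvz --stats --delete-during /SOURCE/ USER@HOST:~/PATH/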
The process is failing earlier now, right at the start of the remote file comparison.  I did an strace, but because of the number of files the output is rather gigantic.  If there's anything in there that I could look for specifically, that would be fantastic.
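One rough way to cut a trace that size down to something searchable is to limit it to signal and network activity; a sketch, with the same placeholder paths:

# follow child processes, trace only network calls and signals, write to a file
strace -f -e trace=network,signal -o rsync.trace rsync -aH /SOURCE/ USER@HOST:~/PATH/
# then look for the process being signalled or the connection dropping
grep -iE 'kill|sig' rsync.trace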
ASKER CERTIFIED SOLUTION
ai_ja_nai
(the accepted answer is visible to Experts Exchange members only)

I'm going to close this down, as it turns out the problem was completely different from what I supposed.  The server-side rsync was being killed by a resource governor that didn't like the very heavy RAM/CPU usage of rsync 2.6.3.  Using 3.0.x against another 3.0.x solves the problem because of the new incremental file-list handling and more efficient checking algorithm.  Thanks for your help.
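For anyone hitting the same wall with a remote box that can't be reconfigured system-wide: rsync can be told to run an alternate binary on the remote end via --rsync-path, so a 3.0.x build placed under the remote user's home directory would work without touching the system package.  A sketch with placeholder paths:

# check what each end is actually running
rsync --version
ssh USER@HOST rsync --version

# point the client at a newer rsync binary installed under the remote user's home
sudo rsync -aH --rsync-path=/home/USER/bin/rsync /SOURCE/ USER@HOST:~/PATH/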
well.. thanks