Replication of files from the Primary to the Secondary Linux server

Posted on 2016-10-09
Last Modified: 2016-10-26
Hello Friends,
I have two Red Hat 7 virtual servers. One is the Primary and the other is the Secondary. I need to replicate files from the Primary to the Secondary. Please suggest a few options and methodologies for doing this. I believe one option is SRDF, but the support team doesn't support that.

Question by:flextron
LVL 28

Expert Comment

by:Jan Springer
ID: 41837398
I use rsync and it works very well.

Author Comment

ID: 41837422
Thanks Jan....
Are there any steps available, or a resource URL where they can be found?
LVL 28

Expert Comment

by:Jan Springer
ID: 41837428
1) install rsync
     depending upon your initial installation, it may already be installed
     do a "which rsync" to check
     or install as root "yum install rsync"

2) create a script of what needs to be synced

3) as root, generate public and private key on the originating server

4) put the public key into the .ssh/authorized_keys file for root on the other server

5) create a cron job to do the sync

I can help with any or all of this.
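One way the five steps above might look end-to-end, as a sketch: the /data path, the script name, and the cron schedule come from later in this thread, while the ssh-copy-id shortcut and the key path are assumptions on my part.

```shell
# 1) rsync is usually present on RHEL 7; check, and install if not
which rsync || yum install -y rsync

# 3) generate a passwordless key pair for root on the primary
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

# 4) append the public key to root's authorized_keys on the secondary
#    (ssh-copy-id handles the append and file permissions for you)
ssh-copy-id root@IP_of_secondary_server

# 2) a minimal sync script
cat > /usr/local/bin/syncfiles <<'EOF'
#!/bin/bash
/usr/bin/rsync -av --delete-after -e ssh /data/ root@IP_of_secondary_server:/data/
EOF
chmod +x /usr/local/bin/syncfiles

# 5) run it nightly from cron
echo '30 0 * * * root /usr/local/bin/syncfiles > /dev/null 2>&1' > /etc/cron.d/syncfiles
```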


Author Comment

ID: 41837451
Hello Jan,
When I do the command "which rsync" I get: /usr/bin/rsync
I guess that means it is installed already.

I'm not sure how to do steps 2 and 5.
The folder which needs to be synced is /data
LVL 28

Accepted Solution

Jan Springer earned 250 total points
ID: 41837481
Update iptables or firewalld (whichever you're using) to restrict ssh to only authorized IPs/subnets. This is extremely critical, as I am going to walk you through using keys as root to log in to the other server.

install fping
      yum install fping

verify its installation location (needed for the script below)

      which fping

create your ssh keys and press Enter through every prompt; the key needs to be passwordless

        cd /root
        ssh-keygen -t rsa

copy the public key information

        cat .ssh/id_rsa.pub

ssh to the other server, insert the public key and save

        ssh root@IP_of_secondary_server
        cd .ssh
        vi authorized_keys
Now you should be able to ssh as root to the other server without requiring a password. The first time you do so, you'll be prompted to accept the fingerprint. Do so and know that, under normal circumstances, you should never see that message again. If you do, beware of a man-in-the-middle attack.

     vi /usr/local/bin/syncfiles

a typical script would look like this (it pings the secondary first and only syncs if it is reachable)

     #!/bin/bash
     HOST=IP_of_secondary_server
     RESULT=$(/usr/sbin/fping $HOST)

     if [ "$RESULT" == "$HOST is alive" ]; then
             /usr/bin/rsync -av --delete-after -e ssh /data/ root@$HOST:/data/
     fi

     exit 0

make it executable

     chmod +x /usr/local/bin/syncfiles

And a cron job

    vi /etc/cron.d/syncfiles

Insert this line and save. Change it to whatever frequency you need:

    30 0 * * * root /usr/local/bin/syncfiles > /dev/null 2>&1

systemctl restart crond

Author Comment

ID: 41837560
Hello Jan,
The server is not connected to the internet, only the intranet.
Hence yum install won't work!
Also, can you please explain a bit more about the cron part?
LVL 28

Expert Comment

by:Jan Springer
ID: 41837569
cron is a service that runs jobs at defined times.

Do you have the original installation DVD?

You can install from that.
LVL 37

Assisted Solution

ArneLovius earned 250 total points
ID: 41837575
@flextron, do you mean this for SRDF?

As Jan has said, rsync is _the_ application for unidirectional replication between different systems across virtually any type of link. Depending on the size of the filesystem (number of files and directories), the time it takes rsync to "crawl" the filesystem may be an issue.

Just as an alternative idea, ZFS has "zfs send" and "zfs receive", which, like rsync, can be used to unidirectionally replicate a filesystem. Since these operate on snapshots, for very large filesystems this can be considerably faster than rsync for keeping a replica up to date. The catch is that this only works on a ZFS filesystem...
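As a hedged sketch of that approach (the pool/dataset names and the secondary's address are hypothetical), an initial full copy plus an incremental update would look like:

```shell
# initial full replication: snapshot the dataset, then stream it across
zfs snapshot tank/data@rep1
zfs send tank/data@rep1 | ssh root@IP_of_secondary_server zfs receive tank/data

# subsequent runs only ship the delta between two snapshots
zfs snapshot tank/data@rep2
zfs send -i tank/data@rep1 tank/data@rep2 | ssh root@IP_of_secondary_server zfs receive tank/data
```

Because the delta is computed from snapshot metadata rather than by crawling the filesystem, the incremental run avoids the per-file scan that makes rsync slow on huge trees.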
LVL 27

Expert Comment

ID: 41838143
You can also use Gluster, which will be a little slower than a regular fs but will replicate in near real-time and in both directions.

You can use this doc
- their first 2 steps make you create a dedicated partition for gluster, but you can use an existing directory
- note that you should not use the directory behind gluster directly, but rather mount your glusterfs on both hosts on separate directories. If that's unclear and you're interested, I'll elaborate.
- the key to the setup is in step 5: "replica 2" instructs gluster to keep 2 copies of each file, and since you only have 2 hosts, you end up with mirroring
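To sketch what that setup looks like (hostnames, volume name, brick paths, and mount point are all hypothetical):

```shell
# from server1, once both nodes can reach each other
gluster peer probe server2

# "replica 2" = keep 2 copies of each file; with 2 hosts this is a mirror
gluster volume create datavol replica 2 \
    server1:/bricks/data server2:/bricks/data
gluster volume start datavol

# on BOTH hosts, mount the volume; never write to the brick directories directly
mount -t glusterfs localhost:/datavol /mnt/data
```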


There are many other options, such as replicating FUSE filesystems or tools based on inotify.

I've used unison with good results as well. It's only better than rsync if you need bidirectional sync.
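For reference, a non-interactive bidirectional unison run between the two hosts might look like this sketch (the path and hostname are placeholders):

```shell
# reconcile /data in both directions, accepting non-conflicting changes automatically
unison /data ssh://root@IP_of_secondary_server//data -batch -auto
```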

ZFS will provide block-level replication (and a crazy number of other features you probably won't use if your server is an existing virtual Linux server). Note that it is much more efficient with small files than traditional block-based replication.

rsync is mainly interesting for its versioning capabilities, but note that rsync is not suitable for frequently syncing a huge number of files.
LVL 40

Expert Comment

ID: 41838172
drbd might be an option at the block level if needed.

If you need synchronous replication from the primary to the secondary, then drbd is the only option.
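A minimal drbd resource definition for a two-node setup might look like this sketch; the resource name, devices, hostnames, and IPs are all hypothetical:

```
# /etc/drbd.d/data.res -- same file on both nodes
resource data {
  protocol C;              # protocol C = fully synchronous replication
  device    /dev/drbd0;    # the replicated block device applications use
  disk      /dev/sdb1;     # the backing partition on each node
  meta-disk internal;
  on primary-host {
    address 192.168.1.10:7789;
  }
  on secondary-host {
    address 192.168.1.11:7789;
  }
}
```

Note that drbd replicates a block device, so you would put a filesystem on /dev/drbd0 and mount it, rather than replicating an existing /data directory in place.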
LVL 27

Expert Comment

ID: 41838193
Gluster and a wealth of other options allow synchronous replication between 2 hosts.
Obviously, with synchronous replication comes latency.

Author Comment

ID: 41838371
@Jan Springer: Is fping absolutely necessary? Won't tnsping work?
LVL 28

Expert Comment

by:Jan Springer
ID: 41838374
Any ping that lets you specify a count of one, returns a one-line answer, and gives a consistent answer will work.
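For example, with the stock ping utility the liveness check in the sync script could be sketched like this; the secondary's address is a placeholder:

```shell
#!/bin/bash
# host_alive: succeed only if the host answers a single ICMP echo within 2 seconds
host_alive() {
    ping -c 1 -W 2 "$1" > /dev/null 2>&1
}

SECONDARY=IP_of_secondary_server
if host_alive "$SECONDARY"; then
    /usr/bin/rsync -av --delete-after -e ssh /data/ "root@$SECONDARY:/data/"
fi
```

Testing the exit status instead of parsing the output line sidesteps the "one-line, consistent answer" requirement entirely.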

Author Closing Comment

ID: 41861554
Sorry for the delay... but yes, this is a way of keeping things in sync. Another way, I believe, would be to have an enterprise scheduler like Control-M or Tidal run a job that copies the files from Prod to DR on a nightly basis.
