marrowyung
asked on
Moving 5.3 TB of data from one server to another server/same server in limited time
hi,
We are trying to COPY a very large folder (5.3 TB) to another location on the same server / a different server. The folder is in use every day, and files may be locked during the copy.
Someone suggested rsync for it!
Can rsync do a full copy and then a differential copy, so that service downtime can be limited?
What are the commands for a full and a differential copy, including permissions?
Or is there any other method for copying 5.3 TB of data?
What is the official site to download rsync for Windows Server 2012 R2?
ASKER
CompProbSolv ,
> Have you looked at robocopy (part of Windows)?
Not yet! Is it as good as rsync? And between Windows and Linux, is there no choice but rsync to do the job?
And can robocopy do a full and a differential copy?

Dr. Klahn,
> One file or many files?
Many files in the folders, and we have to copy 2 folders like this for EACH service we are trying to migrate, so we have to do 2 copies of 4 folders.

> if anything happens to it then the entire database fails.
For the database files we have a single folder, 426 GB in size. That is a PostgreSQL database, and we tested that we can simply copy it and mount it on another disk. We will see messages during and at the end of the process, right?

> On the other hand, if the database is broken into at least ten individual files, the situation becomes more tenable as to copying it - and a 100 GB file can be transferred "relatively" quickly.
Thanks. Then rsync or robocopy can do the job. The point now is, we want to do a full and then a differential copy of the files (seems impossible!), so that we can minimize the downtime by copying only the differences and then mounting it up.
So how can it be done?
ASKER
> I don't deal with Linux so I don't know how robocopy would work there. If you are copying to the Windows computer and it can read the files on the Linux computer, it should work.
Thanks. Rsync's purpose is to be cross-platform, I believe.

> Robocopy does what I have wanted.
How do you run it for what you want? I just want to minimize the copy downtime; what is your way to achieve that?

> By default, it will only copy the newer files. You can have it log the results (so you can see what was not copied and why). There is a switch to monitor the source for changes, but I've not used that yet.
You use one single robocopy command? Please show me the command so that I can test it on my side. Any compress option?
ASKER
robocopy \\server\shared f: /e /mir /z /copyall /DCOPY:T /TEE /r:0 /w:0 /log+:"<name of the log file>"
?
Rsync should be fine. It originates from the 'nix world, and yes, once you have a full copy, it will only sync what has changed. It is also very fast compared to other tools.
ASKER
> Rsync should be fine.
I am asking my teammate, and it seems rsync on Windows has problems.
But I can test rsync; what is the full rsync command to do what you just said?
It depends on the nature of the data and the nature of the storage: if the data is on a SAN, a SAN copy at the LUN level will be a better approach.
You might complete the process faster by copying only the data that is actually needed, using a max-age option, i.e. copy data whose maximum age is six months, as an example. This will make the second run a speedier way to transition.
Have thoughts been given to using an archive, to move data no longer actively used to a new location?
ASKER
> Have thoughts been given to use an Archive for data no longer actively used to move it to a new location?
You mean move the files that are no longer needed first? The folders belong to an application, and I don't think the application has an old-data folder that we can archive.

> You might complete the process faster, but copying data that is in need by using max age option
A robocopy option?
robocopy source destination /maxage:365 /E /COPY:DAT /log....
This deals with transferring data that is less than one year old.
You would need to rerun it a couple of times, including right before you are ready to cut over to the new location, to make sure you catch the most recently changed files.
If you are archiving, you would use the /minage:1440 and /mov options to move the data; it will copy first, then delete from the source, with the same /COPY:DATO option.
Does the software you are using manage the data and make it possible for you to define an archive rule, i.e. files older than 7 years can be discarded because they are no longer needed for regulatory compliance? In that case you can move four-year-old documents to an archive folder, which then has a rule that files older than 7 years can be deleted.
The /log option is for whether you need a record of which files were handled in either case.
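For the rsync side of the comparison, there is no single flag matching robocopy's /MAXAGE; one hedged way to get the same age-filtered copy on Linux is to feed rsync a file list from find. All paths and ages below are invented for illustration, and GNU versions of find/touch are assumed.

```shell
# Hypothetical Linux stand-in for: robocopy src dst /MAXAGE:365 /E
# Select files modified within the last 365 days and copy only those.
rm -rf /tmp/agedemo
mkdir -p /tmp/agedemo/src /tmp/agedemo/dst
touch /tmp/agedemo/src/new.txt
touch -d "2 years ago" /tmp/agedemo/src/old.txt   # GNU touch date syntax
cd /tmp/agedemo/src
find . -type f -mtime -365 -print0 \
  | rsync -a --from0 --files-from=- . /tmp/agedemo/dst/
ls /tmp/agedemo/dst
```

Only new.txt lands in the destination; old.txt is filtered out before rsync ever sees it.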
ASKER
> Deals with transferring data that is less than one year old.
So it does not copy anything older than one year? How do I make robocopy copy only the modified files?

We are testing this robocopy command:
robocopy <source> <target> /E /MT:16 /Z
It is able to continue if the transfer is interrupted, and it copies only the updated files. But it does not delete a file at the destination when it has been deleted on the source side! How can I make it really sync the set of files with the source path? Any compression option?

> if you are archiving, you would use /minage:1440 /mov option to move the data, it will copy first, then delete from the source. with the same option, /copy:dato
So this one: robocopy source destination /maxage:365 /E /COPY:DAT /log..., should be:
robocopy source destination /maxage:365 /E /COPY:DATO /log... ?

> does the software you are using manages the data and makes it possible for you to define an archive rule,
Not as far as I know; the software is Alfresco.
ASKER
Thanks both,
arnold,
> You could add a /mon:30 as an example and have it copy when more than 30 files on the source modified/changed
If I want it to copy whenever any change takes place, copying just the changed files, would that be /mon:1?

> You could use /purge which will delete an item from the destination if it is deleted on the source.
Thanks. Then it can be:
robocopy <source path> <target path> /purge /mon:1 /e /mir /z /copyall /DCOPY:T /TEE /r:0 /w:0 /log+:"<name of the log file>"
/purge is included in /Mir.
You could monitor for a single change.
Is this an upgrade data copy, or running out of space...
ASKER
> /purge is included in /Mir.
So /purge is not needed at all? So just this is OK:
robocopy <source path> <target path> /mon:1 /e /mir /z /copyall /DCOPY:T /TEE /r:0 /w:0 /log+:"<name of the log file>"
?
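For readers without robocopy's /MON, a crude cross-platform stand-in is simply re-running the mirroring sync in a loop: each pass moves only what changed since the previous one. The paths and pass count below are made up for illustration; a production setup might trigger on filesystem events (e.g. inotify) rather than a timer.

```shell
# Crude stand-in for robocopy's /MON:n - repeat the delta sync periodically.
rm -rf /tmp/mondemo
mkdir -p /tmp/mondemo/src /tmp/mondemo/dst
for pass in 1 2 3; do
  echo "pass $pass" > /tmp/mondemo/src/state.txt
  rsync -a --delete /tmp/mondemo/src/ /tmp/mondemo/dst/
  # sleep 60   # in real use, pause between passes
done
cat /tmp/mondemo/dst/state.txt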
ASKER
hi,
Thanks all.
Hope I am not going to come back on the same topic....
More than welcome to come back when needed. Often an initial thought or approach may not have taken other considerations into account.
<opinion>
5 TB is too large for a single database file, if for no reason aside from ... if anything happens to it then the entire database fails. On the other hand, if the database is broken into at least ten individual files, the situation becomes more tenable as to copying it - and a 100 GB file can be transferred "relatively" quickly.
</opinion>