Solved

Copying a Directory on a UNIX System

Posted on 2013-12-17
Medium Priority
654 Views
Last Modified: 2013-12-31
I've always preferred using the tar command instead of the cp command to copy the contents of a directory on a UNIX system. That's because I've found the cp command to be unpredictable and unreliable. I know there are those who are fond of cpio. Anyway, I was just wondering whether my misgivings about the cp command are justified.
Question by:babyb00mer
5 Comments
 
LVL 19

Expert Comment

by:xterm
ID: 39724519
I only encounter one real issue when using cp vs. tar: if the destination directory already exists, your source directory ends up inside the destination directory rather than being merged with it.

So for jobs that I do repetitively, I prefer:

(cd /source/directory && tar cpf - dir1 dir2 dirX) | (cd /dest/directory && tar xvpf -)

over:

cp -Rv /source/directory /dest/directory
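
A quick sketch of a workaround (the paths are placeholders, as above): copying the source's contents with a trailing /. avoids the nesting, since cp then merges the contents into the existing destination rather than creating /dest/directory/directory inside it.

    # Merge the contents of the source into an existing destination
    # instead of nesting the source directory inside it:
    cp -Rv /source/directory/. /dest/directory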

But to answer your question: misgivings are much the same as preferences, which means you don't have to "justify" them per se. Use whatever gets the job done and poses the least stress or risk to you.
 

Author Comment

by:babyb00mer
ID: 39732588
Hmm. I'm wondering whether tar is faster than cp.
 
LVL 19

Expert Comment

by:xterm
ID: 39732592
No, tar isn't any faster - from a system standpoint, they're doing virtually the same thing.
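
If you want to check on your own data, a rough sketch (the directory names are placeholders) is simply to time both approaches against the same source tree:

    # Unscientific comparison; run each against the same source tree.
    mkdir -p /tmp/cp-test /tmp/tar-test
    time cp -R /source/directory/. /tmp/cp-test
    time sh -c '(cd /source && tar cpf - directory) | (cd /tmp/tar-test && tar xpf -)'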
 
LVL 32

Expert Comment

by:phoffric
ID: 39732624
>> unpredictable and unreliable
How is it unpredictable?
How is it unreliable?
Curious - are you using NFS?

As for performance: if the copy is over a network and thousands of files are involved, transmitting a single tar archive incurs less overhead than transmitting many individual files.
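
For example (a sketch; the host name and paths are placeholders), the usual way to exploit that is to stream one tar archive over ssh instead of copying the files one by one:

    # Send a single tar stream over the network rather than
    # thousands of individual files:
    (cd /source/directory && tar cpf - .) | ssh remotehost 'cd /dest/directory && tar xpf -'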
 
LVL 4

Accepted Solution

by:Anacreo (earned 1000 total points)
ID: 39744605
cpio is more reliable than tar in my opinion; with tar, file name lengths, directory loops, and hard links can all become issues.

find . | cpio -pdum /var/tmp/target
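
For reference, -p runs cpio in pass-through (copy) mode, -d creates leading directories as needed, -u overwrites existing files unconditionally, and -m preserves modification times. Because the file list comes from find, you can also filter what gets copied, e.g. (a sketch; the paths are hypothetical):

    # Copy only files modified in the last day, preserving the directory layout:
    find . -mtime -1 | cpio -pdum /var/tmp/target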

tar and cpio are old and have numerous limitations that can bite you... a more modern solution is to use pax:
    mkdir /tmp/to
    cd /tmp/from
    pax -rw . /tmp/to
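
If ownership and timestamps matter too, pax can be asked to preserve everything it can with -p e (a small addition to the sketch above, same hypothetical paths):

    pax -rw -pe . /tmp/to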


But on any system that has rsync, why not go with the gold standard?
rsync -az -H --delete /path/to/source /path/to/dest
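
One rsync detail worth keeping in mind (same placeholder paths): a trailing slash on the source changes what gets copied. Without it, rsync copies the source directory itself into the destination; with it, rsync copies only the directory's contents.

    # Produces /path/to/dest/source:
    rsync -az -H --delete /path/to/source /path/to/dest

    # Puts the contents of source directly into /path/to/dest:
    rsync -az -H --delete /path/to/source/ /path/to/dest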
