Does Perl copy files faster than the Linux copy command?

Does anyone know if copying files with Perl is better than using the Linux copy command? That is, would the copy time be better than with the Linux copy?
c11v11 asked:
 
Gregory Miller, General Manager, commented:
Have you considered scripting something on the remote box to tar.gz the files into one large file and then transfer it to your archive point? This would be far faster, since one large file transfers much more efficiently and the compression also makes it smaller. There is a lot of overhead in transferring 10,000 one-byte files versus one 10,000-byte file that might compress down to 8,000 bytes.
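
A rough sketch of that approach, assuming the files live under /data/src and the archive point is reachable over ssh as archivehost (both names are made up for illustration):

# bundle and compress the whole tree into one file
tar czf /tmp/archive.tar.gz -C /data/src .
# move the single large file to the archive point
scp /tmp/archive.tar.gz archivehost:/archive/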

To answer your question, yes, rsync will do what you want.
 
Gregory Miller, General Manager, commented:
What are you trying to do, and what size and quantity of files are you trying to copy?

Have you tried rsync? Very fast...
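
For illustration, a basic rsync invocation between two directories (the paths are hypothetical):

# -a preserves permissions, times and ownership; the trailing slash copies the contents of src
rsync -a /mnt/src/ /mnt/dst/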
 
gheist commented:
There is no "copy" command on Linux as such; the command is cp.
On Linux especially (unlike old UNIX), cp can take an enormous number of arguments, so there is not much efficiency to be gained by using Perl, or by compiling a C program, to handle a ton of files.
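
For example, on Linux a plain cp with a shell glob usually copes even with a very long argument list (paths are hypothetical):

# the shell expands the glob into one long argument list; -a preserves attributes
cp -a /mnt/src/* /mnt/dst/
# if the list ever does exceed the limit, xargs can batch it
find /mnt/src -mindepth 1 -maxdepth 1 -print0 | xargs -0 cp -a -t /mnt/dst/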
 
c11v11 (Author) commented:
We have hundreds of thousands of small files (16K on average) to copy from one NFS directory to another NFS directory on the same server; it is for archival purposes. Does rsync work for this situation? Does rsync support multiple processes to handle the file copy?
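
On the multi-process question: a single rsync invocation is essentially one process, but you can run several in parallel over subdirectories yourself. A rough sketch, assuming the tree splits into top-level subdirectories (paths and the process count are illustrative):

# one rsync per top-level subdirectory, at most 4 running at a time
find /mnt/src -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -P4 -I{} rsync -a {} /mnt/dst/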
 
gheist commented:
To copy lots of files from NFS, use the actimeo=3600 mount option on Linux.
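
For illustration, the option goes on the NFS mount itself (server name and paths are hypothetical):

# cache file attributes for an hour to cut metadata round-trips during the copy
mount -t nfs -o actimeo=3600 nfsserver:/export/src /mnt/src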
 
Duncan Roe, Software Developer, commented:
In my experience, tar without gzip is faster than with it (but you must have room for the big tar file at both ends): the gzip step takes longer than the time saved by transferring compressed rather than uncompressed data.
So I would say tar without gzip, run on the systems where the files actually reside and will reside, is your best shot. Don't even think of running tar anywhere else.
If you don't go the tar route, check man 5 nfs to understand what gheist's suggested actimeo=3600 mount option does: pretty neat for the copy, but you don't want to leave it in place.
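
A rough sketch of the tar-without-gzip route, writing the archive straight onto the destination mount (all paths are made up):

# one big uncompressed tar on the destination side, then unpack it there
tar cf /mnt/dst/bundle.tar -C /mnt/src .
mkdir -p /mnt/dst/files
tar xf /mnt/dst/bundle.tar -C /mnt/dst/files
rm /mnt/dst/bundle.tar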
 
gheist commented:
tar with p keeps permissions (so do cpio and pax), down to the numeric uid.
rsync would match usernames and try to get "better" permissions.

No need for temporary files ;)

(cd src; tar cpf - .) | ssh dst '(cd dst; tar xpf -)'
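
For comparison, rsync can be told to keep numeric uids/gids rather than matching names, using its --numeric-ids option (paths here are hypothetical):

# keep numeric uid/gid instead of mapping by user/group name
rsync -a --numeric-ids /mnt/src/ /mnt/dst/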
 
c11v11 (Author) commented:
I don't have a dst server. The source directory and the destination directory are both NFS mounts on the same server.
 
gheist commented:
Remove 'ssh' from the command line...

It is best if the source and destination are different mounts: you can set the source with actimeo=3600 (or more), and each mount can use a different thread for processing on the client side.

Much better would be having the source mounted from the destination and copying over that shortcut (ask the admin of the NFS server if they can help...).
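
A sketch of the local variant of the earlier pipe, with hypothetical mount points:

# stream one tar from the source mount straight into a second tar on the destination mount
(cd /mnt/src && tar cpf - .) | (cd /mnt/dst && tar xpf -)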