scooter41 asked:

Copying Multiple Directories, then verifying the transfer with Perl ?

Okay, here's the problem I have; I'm just not sure if it's possible.

I want a user to log on to their intranet "backup page",

then specify which directory on their part of the server they want to back up. The script will then need to transfer that directory (unfortunately including subdirectories and files) to another location and, if the transfer was successful, delete the directories/files that were backed up.

This is on a network where the server has full rights/permissions. But I don't think Perl can transfer multiple subdirectories, can it? So I was thinking along the lines of the Perl script calling a batch file to copy the information (but then I remembered that Perl can't talk to batch files, or can it?). It was at that point that I thought, sheesh,

let's ask the experts :)
bebonham

Perl can do what you want, or you can use system() or the backtick (`) operator to run any batch file or command straight from Perl.

If this is Win32 and you don't want to get too technical, just get the root directory to back up and do:

`xcopy $dirname newdir`;

If you want to be more technical, you can open each file and write it to a new location.

But at this point I would just use the commands the OS makes available.
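A rough sketch of that approach (not from the original answer; it assumes Win32 with xcopy on the PATH, and the paths are hypothetical -- /E copies subdirectories including empty ones, /I treats the destination as a directory, /Y suppresses overwrite prompts):

#!/usr/bin/perl
use strict;
use warnings;

my $src  = 'C:\userdata\docs';    # hypothetical directory chosen by the user
my $dest = 'D:\backup\docs';      # hypothetical backup location

# system() returns the raw wait status; the real exit code is in the high byte.
my $status = system(qq{xcopy "$src" "$dest" /E /I /Y});
die "xcopy failed with exit code " . ($status >> 8) . "\n" if $status != 0;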

Bob
scooter41 (ASKER)

Okay Bob, that's brilliant, thanks for your help so far, but one thing still remains: how do I check that the copy has been 100% error-free before deleting? The data will be important documents, and it is vital that nothing gets deleted unless it was backed up error-free.

Do you think I could just take the properties of the two directories, for example, and compare them?
Just compare the files' sizes.
ASKER CERTIFIED SOLUTION
bebonham
Thanks, Bob!
Of course, getting the file size is easier with -s:

die ("bla...!\n") if ( (-s $newfile) != (-s $oldfile) );
Thanks, shlomoy!

I just learned stat :)

Funny how in Perl, it seems like everybody is good at different stuff...

I just wish I knew the basics a little better,

especially the mysterious (to me) pipe and fork.

Bob
To learn more about pipe and fork, read:

       pipe READHANDLE,WRITEHANDLE
               Opens a pair of connected pipes like the
               corresponding system call.  Note that if you set
               up a loop of piped processes, deadlock can occur
               unless you are very careful.  In addition, note
               that Perl's pipes use stdio buffering, so you may
               need to set `$|' to flush your WRITEHANDLE after
               each command, depending on the application.

               See the IPC::Open2 manpage, the IPC::Open3
               manpage, and the Bidirectional Communication entry
               in the perlipc manpage for examples of such
               things.

               On systems that support a close-on-exec flag on
               files, the flag will be set for the newly opened
               file descriptors as determined by the value of
               $^F.  See the section on "$^F" in the perlvar
               manpage.          

       fork    Does a fork(2) system call to create a new process
               running the same program at the same point.  It
               returns the child pid to the parent process, `0'
               to the child process, or `undef' if the fork is
               unsuccessful.  File descriptors (and sometimes
               locks on those descriptors) are shared, while
               everything else is copied.  On most systems
               supporting fork(), great care has gone into making
               it extremely efficient (for example, using copy-
               on-write technology on data pages), making it the
               dominant paradigm for multitasking over the last
               few decades.

               Beginning with v5.6.0, Perl will attempt to flush
               all files opened for output before forking the
               child process, but this may not be supported on
               some platforms (see the perlport manpage).  To be
               safe, you may need to set `$|' ($AUTOFLUSH in
               English) or call the `autoflush()' method of
               `IO::Handle' on any open handles in order to avoid
               duplicate output.

               If you `fork' without ever waiting on your
               children, you will accumulate zombies.  On some
               systems, you can avoid this by setting
               `$SIG{CHLD}' to `"IGNORE"'.  See also the perlipc
               manpage for more examples of forking and reaping
               moribund children.

               Note that if your forked child inherits system
               file descriptors like STDIN and STDOUT that are
               actually connected by a pipe or socket, even if
               you exit, then the remote server (such as, say, a  
               CGI script or a backgrounded job launched from a
               remote shell) won't think you're done.  You should
               reopen those to /dev/null if it's any issue.                                        
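A minimal sketch putting pipe and fork together (not from the original thread, just an illustration of the excerpts above): the parent writes one line down the pipe and the child reads it.

use strict;
use warnings;
use IO::Handle;                     # for autoflush() on the write handle

pipe(my $reader, my $writer) or die "pipe failed: $!\n";
$writer->autoflush(1);              # flush each write, as the pipe docs suggest

my $pid = fork();
die "fork failed: $!\n" unless defined $pid;

if ($pid == 0) {                    # child process: fork() returned 0
    close $writer;                  # the child only reads
    while (my $line = <$reader>) {
        print "child got: $line";
    }
    exit 0;
}

close $reader;                      # the parent only writes
print {$writer} "hello from the parent\n";
close $writer;                      # closing the write end gives the child EOF
waitpid($pid, 0);                   # reap the child so it doesn't become a zombie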
These man pages will also be useful:

perldoc perlipc
perldoc perlfork
perldoc -f open

(You can also look these up at www.perldoc.com.)