ftp timeout

Posted on 2006-05-03
Medium Priority
Last Modified: 2008-01-09
I am trying to ftp a file from one in-house Linux computer to another. The file is 15GB and it keeps timing out. How can I prevent this? I looked at: http://www.experts-exchange.com/Operating_Systems/Linux/Q_11571759.html?query=ftp+timeout&topics=32 which suggested using 'idle nnnn' at the ftp> prompt, where 'nnnn' is the number of seconds. I tried this and all I get is "'SITE IDLE ' not understood". Another suggestion was to set various timeout settings in /etc/proftpd.conf, but I see none of those settings in my .conf file.

How do I do this? The ftp computer is Linux version 2.4.31 (Slackware). I see no version options for ftp, and running it doesn't display anything. My ftpd computer is Linux version 2.4.29 (Slackware) running ProFTPD 1.2.10.
Question by:jmarkfoley
LVL 51

Expert Comment

ID: 16603274
Silly question: can the destination store such huge files?

Author Comment

ID: 16603314
Silly answer. Would I be posting the question if not?

Expert Comment

ID: 16603342
Have you checked the log of the ftpd server? Perhaps there are some useful messages there.
Also, as ahoffmann suggested :), a 15GB file is huge and problems can arise if software is not compiled with large file support (the "standard", non-LFS maximum file size is 2^31 bytes, i.e. 2GB). Try splitting the file into pieces < 2GB and see if it helps.

Author Comment

ID: 16610519
OK, I guess I'm going to have to explain what I'm doing so you guys will believe I'm serious.

I have to copy 17 DD3 tapes containing between .5 and 17GB of compressed tar backups to a hard drive to give to an "expert" for a legal case. The expert does not have a DAT 40 drive, nor does the defendant want to pay the expert for the 1/2 to 2 hours it takes to restore each tape. So, I'm copying all the tar files from tape to a 160GB disk. In fact, the files copy fine and all just fit on the disk. All but 3 are readable and extractable by tar. These 3 tapes had bad reads partway through the tape. So, I tried one of these tapes in a newer drive on a different machine, and it read the tape and copied OK. Now I need to get those files back to the machine with the rest of the tar files. Hence my choice of ftp. Failing that, I have successfully copied the one file back to a new tape on the new drive and then restored that tape on the target machine using the older drive, but this becomes a 3+ hour process (copy old tape to machine A, copy to new tape on machine A, copy new tape to machine B). I haven't tried nfs and I really don't want to.

No, I can't break the files up into smaller pieces because I'd have to cat them back together again, and there definitely isn't enough space on the drive to concurrently hold 15GB worth of segments plus another 15GB reassembled. Nor is there even a spare 15GB on the 'new' machine to break up the good file in the first place.

I do not see anything in the proftpd.log indicating failure. On the client side I get (this is a smaller file, only 6GB):

ftp> dir *.tar
200 PORT command successful
150 Opening ASCII mode data connection for file list
-rw-rw-r--   1 root     root     6372567040 May  4 21:10 server12003Q2.tar
226 Transfer complete.
ftp> get server12003Q2.tar
local: server12003Q2.tar remote: server12003Q2.tar
200 PORT command successful
150 Opening BINARY mode data connection for server12003Q2.tar (6372567040 bytes)
server12003Q2.tar: short write
450 Transfer aborted. Link to file server lost.
2147482440 bytes received in 239 secs (8.8e+03 Kbytes/sec)
ftp> quit

It did transfer about 2GB. So, am I looking at an ftp limitation? Why would ftp care? I was certainly able to create large files directly from the tape, so I know Linux must support this.
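A quick way to settle the "does the OS itself support big files?" half of this question is to create a sparse file past the 2GB mark; it costs almost no real disk space. This is a sketch (run it in a scratch directory; the filename is arbitrary):

```shell
# Create a sparse file just over 3GB: seek past 3072 1MiB blocks, write one.
dd if=/dev/zero of=bigtest bs=1M seek=3072 count=1 2>/dev/null
size=$(wc -c < bigtest)
# 3073 MiB = 3222274048 bytes; anything above 2147483647 clears the 2GB limit
if [ "$size" -gt 2147483647 ]; then
  echo "kernel and filesystem handle large files ($size bytes)"
else
  echo "stuck at the 2GB limit"
fi
rm -f bigtest
```

If this succeeds but ftp still dies at ~2GB, the limit is in the ftp client or server binary, not the system.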

Thanks, Mr. Dead Serious
LVL 51

Accepted Solution

ahoffmann earned 1500 total points
ID: 16610573
> I do not see anything in the proftpd.log indicating failure.
as explained before: it could be that your ftpd or ftp is not 2GB-aware

I'd split the huge file into smaller pieces, then concatenate them on the destination again.
On the destination file system you only need additional space for one split piece.
Assuming you created the small pieces using split, then concatenate them like this (sh syntax):

for f in fileaa fileab fileac ...your other files here... ; do
  cat $f >> orig-file && rm $f
done
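The whole round trip can be sketched end-to-end; this demo uses a small scratch file and a checksum to prove the reassembled copy is identical (for the real 15GB tar you'd raise the split size, e.g. -b 1000m, and transfer the pieces between the steps):

```shell
# Make a test file and record its checksum.
dd if=/dev/urandom of=orig-file bs=1k count=100 2>/dev/null
before=$(md5sum orig-file | cut -d' ' -f1)

# Split into pieces (piece.aa, piece.ab, ...), then drop the original
# to mimic the space-constrained destination.
split -b 30k orig-file piece.
rm orig-file

# Reassemble piece by piece, deleting each one after it is appended,
# so only one piece of extra space is ever needed.
for f in piece.*; do
  cat "$f" >> rebuilt-file && rm "$f"
done

after=$(md5sum rebuilt-file | cut -d' ' -f1)
[ "$before" = "$after" ] && echo "checksums match"
```

Shell globbing expands piece.* in lexicographic order, which matches split's suffix order, so the pieces are appended in the right sequence.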

Expert Comment

ID: 16612268
> It did transfer about 2GB. So, am I looking at an ftp limitation? Why would ftp care? I was certainly able to create large files directly from the tape, so I
> know Linux must support this.

The kernel might support large files, but if your application wasn't compiled to support this, it will fail.

You could get this ftp client, which has large file support, and try with it: http://lftp.yar.ru/ . Alternatively, try a newer version of curl or wget, which might have LFS enabled (I didn't check), or do the split approach, which could be faster if you have enough space to do the split.
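An lftp session for this transfer might look like the following sketch. The hostname, credentials, and path are placeholders, not anything from the thread; lftp's `get -c` resumes an interrupted download, which matters for a 15GB file:

```shell
# Sketch: fetch the big tar with lftp (built with large file support).
# "oldbox" and /backups/ are hypothetical stand-ins for the real server.
lftp -u user,password oldbox <<'EOF'
set net:timeout 60
set net:max-retries 10
get -c /backups/server12003Q2.tar    # -c continues a partial transfer
quit
EOF
```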

Expert Comment

ID: 16617593
If this were me and I were in a hurry, I would pull the disk from the machine that already has most of the data and pop it into the newer machine that has the remaining data. After the drive was in place, I would mount it and use 'dd' to transfer the files between the two drives. Forget move and copy, as they don't work well across the bus for really large files.
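The dd transfer suggested above might look like this sketch; the mount points are hypothetical, and cmp verifies the copy afterwards:

```shell
# Sketch: after mounting the second drive, copy the tar with an explicit
# large block size. /mnt/olddisk and /mnt/newdisk are placeholder paths.
dd if=/mnt/olddisk/server12003Q2.tar of=/mnt/newdisk/server12003Q2.tar bs=1M
# verify the copy is byte-identical
cmp /mnt/olddisk/server12003Q2.tar /mnt/newdisk/server12003Q2.tar && echo OK
```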

Expert Comment

ID: 16675382
Hi jmarkfoley,

I don't think you have hit an ftp limitation, because I have already transferred files close to the size you reported and I didn't have problems, but as some said above, there are a few things you need to do in order for this to work.

You said that you are facing a timeout error, so first I suggest you execute the following ftp command before getting the file, so your terminal always has output: "hash".

If the above does not work, I would suggest splitting the file into smaller pieces, and since you were able to ftp files of 6GB, you can execute the following split:
split -b 5000m <file>
The command above will split the file into pieces of 5GB.

To put all of them together, you can do the following:
newfile=whole.tar; ls -1 x* | while read file; do cat ${file} >> ${newfile}; done
(set newfile to whatever output name you want)

If you don't want the solution above, you can try to use the "scp" command, check the link below:

I hope it helps you. =0)
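Since the link above did not survive the archive, a minimal scp invocation would look like this sketch. The hostnames and paths are hypothetical, and the OpenSSH builds on both ends must also have large file support:

```shell
# Sketch: pull the tar over ssh instead of ftp. "oldbox" is a placeholder.
scp user@oldbox:/backups/server12003Q2.tar /archive/
# or push it from the machine that holds the file:
scp /backups/server12003Q2.tar user@newbox:/archive/
```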

Expert Comment

ID: 16688211
humm... :-/
