Slow file transfer

I am transferring a 1 GB file using FTP and the link is 1 Gb/s, but it is taking over 5 minutes to transfer.

In Wireshark I opened the Expert Infos composite window.

In the Expert Infos I have no errors, but I have these warnings:

Window is full: 20
Zero Window: 17
ACKed lost segment (common at capture start): 63
Previous segment lost (common at capture start): 78
Out-Of-Order segment: 187
Fast retransmission (suspected): 14

This was on a 1 GB FTP file transfer.

The IO Graph Y axis shows 50,000 when set to auto, and the graph varies around 30,000.

In the Notes I have Duplicate ACK (#41) for an FTP data packet.

I am trying to narrow this down to either a network problem or a server problem.
Asked by Dragon0x40
noci (Software Engineer) commented:
1 GB [bytes] + transfer overhead ≈ 10 Gb [bits].
10 Gb / 300 s ≈ 33 Mb/s (≈ 33,000 Kb/s).

Whether that is slow or not remains to be seen. The equation also includes DISK READ at the source and DISK WRITE at the target.
i.e. IMHO not too bad.
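
For reference, a quick sketch of that arithmetic in Python (the 300 s duration and the 25% overhead factor are assumptions taken from the figures above, not measurements):

    # Rough effective-throughput check using the numbers quoted above
    file_size_bytes = 1 * 1000**3       # 1 GB file
    transfer_time_s = 300               # "over 5 minutes" -> assume ~300 s
    overhead_factor = 1.25              # assumed TCP/IP + Ethernet + FTP overhead

    bits_on_wire = file_size_bytes * 8 * overhead_factor
    throughput_mbps = bits_on_wire / transfer_time_s / 1e6
    print(f"Effective throughput: ~{throughput_mbps:.0f} Mb/s")   # prints ~33 Mb/s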

If you want to test, you need testing tools; those generate traffic without O/S overhead, disk overhead, etc.
Preferably with a range of packet sizes to exercise various parts of the network between the two systems.

NETIO is a nice tool that can handle traffic on Unix, Linux, OpenVMS, Windows...
http://www.ars.de/ars/ars.nsf/docs/netio

Steve Jennings (IT Manager) commented:
Google "iperf" and you can use that little client/server app to tune buffers.

Good luck,
Steve
 
giltjr commented:
Are you 100% sure that the full path is a 1 Gbps path? That is, the client, the server, and all network connections in between.

Are they both on the same subnet or different subnets?

If different subnets, what type of router are you using?

What is the link utilization on all of the links between the client and the server?

Gerald Connolly commented:
You are unlikely to get a single-stream FTP transfer to run at anything like 1 Gigabit, and as @noci pointed out, you seem to be mixing your bits and bytes. It's a de facto standard that "b" is bits and "B" is bytes.

So a 1 Gbit/s link is really a 100 MByte/s pipe, and you will struggle to get even 50% of that with a single stream, although I think you should get better than 33 Mb/s (~4 MBytes/s).
Do you have jumbo frames switched on, and what is it writing to: a single disk or a RAID set?
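
One quick way to check whether jumbo frames actually make it end to end (a sketch; x.x.x.x is the far-end address, and the 8972-byte payload assumes a 9000-byte MTU minus 28 bytes of IP/ICMP header):

     ping -f -l 8972 x.x.x.x      (Windows: don't-fragment flag plus 8972-byte payload)
     ping -M do -s 8972 x.x.x.x   (Linux equivalent)

If the big ping fails while a normal ping works, something in the path is not passing jumbo frames.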
 
noci (Software Engineer) commented:
And if there are 100 Mbps components in the loop, are any auto-negotiate ports connected to non-negotiating ports?
In that case you may end up with a HALF/FULL duplex mismatch...
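
On a Linux host you can check what was actually negotiated with ethtool (a sketch; eth0 is a placeholder interface name):

     ethtool eth0     (look at the Speed, Duplex and Auto-negotiation lines)

On Windows the negotiated link speed shows up in the adapter's Status dialog.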
 
Dragon0x40 (Author) commented:
Sorry about mixing up bytes and bits. Bytes are used for disk capacity and file size, and bits are used for data transfer rates.

I think a 1 Gb per second link would be 125 MB per second? bytes = bits/8?

We have transferred the same file to different endpoints through a layer 2 cloud, and this particular location takes about two to three times as long to transfer the same file as all the other endpoint locations.
 
Gerald Connolly commented:
1 Gb/s = 125 MB/s: yes, you might think that.

But whenever data gets put onto a comms link you have to account for all the extra protocol overhead, which usually works out at about 10 bits per byte.
So it is generally accepted that 1 Gb/s ≈ 100 MB/s.
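
To put numbers on that rule of thumb (a sketch; the 10-bits-per-byte factor is the approximation above, not an exact figure):

    # Line rate -> usable payload rate, using the 10-bits-per-byte rule of thumb
    link_bps = 1_000_000_000                 # 1 Gb/s link
    payload_MBps = link_bps / 10 / 1e6
    print(f"Usable payload rate: ~{payload_MBps:.0f} MB/s")    # ~100 MB/s

    file_size_MB = 1000                      # 1 GB file
    ideal_time_s = file_size_MB / payload_MBps
    print(f"Ideal transfer time: ~{ideal_time_s:.0f} s")       # ~10 s, versus the 5+ minutes observed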


So have you checked (and double-checked) that all the sections of the link are running at 1 Gb/s?
Try other cables.
Use link-test software to measure the end-to-end link speed (this removes the target node's disk write overhead).
 
noci (Software Engineer) commented:
Additionally, the 10-bits-per-byte ratio only holds for relatively large packets.
There are also various delays in drivers, disks, applications ... which slow down your connection because they add up when applied serially.
There isn't a lot of software that copies data in parallel for one big file.
Also, all packets need to be ACKed after a while, so if the receiver side is slow, the sender will slow down too.
 
noci (Software Engineer) commented:
I forgot to mention that this assumes there is NO other network activity on the same path, no other disk activity at BOTH sender and receiver, and sufficient CPU time available on all equipment involved.
 
Gerald Connolly commented:
@noci, I agree; it's just simpler to use the 10 bits per byte.

And good point on network activity!
 
nociSoftware EngineerCommented:
@connollyg,  File transfers mostly are, but telnet traffic has a dramatic overhead (32byte of IP packet + 14 bytes of ethernet  for 1 or 2 bytes of data).


Also the bandwidth of disks can be very low. Depending on USB1.0, USB2.0 or USB3.0 / SATA / SCSI1, SCSI2, SCSI3..
On the writing end there is also the overhead of allocating additional data blocks as the transfer progresses, which can be heavy on fragmented disks...

Any way filetransfers are a not very well adjusted instrument for network performance measurement.
 
Gerald Connolly commented:
@noci, whatever the overhead, he should be getting better than 33 Mb/s down a 1 Gb/s link.

@noci, re disk speeds: that's why I recommended using a network link test to remove the disk write element from the equation.
 
noci (Software Engineer) commented:
@connollyg, see netio in the first answer....
 
giltjr commented:
Assuming you are running Windows, from one of the computers you are testing with, can you provide the output from the command:

     pathping -n x.x.x.x

where x.x.x.x is the IP address of the other computer you are testing with?
 
rochey2009 commented:
Hi,

Wireshark is suggesting that you have a TCP windowing problem. This means that the sender cannot send any more data until it receives a TCP acknowledgement. Are you using TCP window scaling?
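
The Zero Window and Window-is-full warnings fit that picture. As a rough illustration of why the window matters (a sketch; the 1 Gb/s rate comes from the question, but the 5 ms round-trip time and the 64 KB default window are assumptions for the example):

    # Bandwidth-delay product: how much TCP window is needed to keep the link full
    link_bps = 1_000_000_000     # 1 Gb/s
    rtt_s = 0.005                # assumed 5 ms round-trip time
    bdp_bytes = link_bps / 8 * rtt_s
    print(f"Window needed to fill the link: ~{bdp_bytes / 1024:.0f} KB")   # ~610 KB

    default_window = 64 * 1024   # classic 64 KB limit without window scaling
    max_mbps = default_window * 8 / rtt_s / 1e6
    print(f"Max rate with a 64 KB window: ~{max_mbps:.0f} Mb/s")           # ~105 Mb/s

If the receiver keeps advertising small or zero windows (for example because its disk can't keep up), the sender is throttled well below even that figure.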
 
noci (Software Engineer) commented:
@rochey2009, that is also consistent with slowness in the target disks, filesystem overhead, file growth...

So besides the network side you need info about the disks: bandwidth, seek times, fragmentation...
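
A quick sequential-write check on the receiving disk can rule that in or out (a sketch for a Linux target; the path and size are placeholders, and oflag=direct bypasses the page cache so the disk itself is measured):

     dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct
     rm /tmp/ddtest

If dd reports well under ~100 MB/s, the target disk alone could explain the slow FTP transfer.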
 
Dragon0x40 (Author) commented:
I think this may be a routing issue, with extra hops through the cloud. But great answers! I will post when I find out.
 
Dragon0x40 (Author) commented:
The next hop in the routing table was incorrect and was sending traffic across the MPLS cloud twice.