I'm trying to transfer large files (100+ GB database backups in this case) between our Windows servers in the US and the UK. I'm running into terrible transfer speeds regardless of the protocol I use, whether it's a regular Windows copy (SMB) or scp. This is a very high-bandwidth link with about 100 ms of latency. The US servers run Windows Server 2008 and the UK servers run Windows Server 2008 R2.
My best consistent transfer speed so far has been about 512 KB/s, which is terrible. Linux-to-Linux transfers between servers on these same networks (using scp) reach as high as 35 MB/s. Linux-to-Windows transfers suffer just as badly as Windows-to-Windows transfers.
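For what it's worth, the numbers look suspiciously like a window-size ceiling: without window scaling, TCP throughput tops out at window / RTT. A quick back-of-the-envelope check (the 64 KB window here is an assumption, not something I've measured):

```shell
# Sanity check: max TCP throughput = receive window / round-trip time.
window_bytes=65536    # assumed: classic 64 KB window with no window scaling
rtt_ms=100            # round-trip time on this link
throughput=$(( window_bytes * 1000 / rtt_ms ))   # bytes per second
echo "max throughput: ${throughput} bytes/s"     # 655360 B/s, i.e. ~640 KB/s
```

That ~640 KB/s ceiling is the same order of magnitude as the ~512 KB/s I'm actually seeing, which is part of why I suspect a window/tuning issue rather than the link itself.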
I'm starting to exhaust my ideas, so I'm desperately hoping you folks here might have some new ones for me. I've tried:
Lowering the MTU to as low as 1300 (to account for any possible VPN overhead, fragmentation issues, etc.).
netsh interface tcp set global autotuninglevel=disabled
netsh interface tcp set global rss=disabled
HKLM\system\CurrentControlSet\Services\lanmanworkstation\parameters\DisableBandwidthThrottling = 1
HKLM\System\CurrentControlSet\Services\Tcpip\Parameters\EnableWsd = 0
TeraCopy (which, surprisingly, is consistently the fastest option)
scp in Cygwin
pscp (which is consistently faster than Cygwin's scp, but still slow)
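For completeness, here are the netsh and registry tweaks from the list above collected into a batch sketch (run elevated; values are exactly as listed, applied at your own risk):

```shell
:: Apply the TCP and SMB tweaks from the question (run from an elevated prompt).
netsh interface tcp set global autotuninglevel=disabled
netsh interface tcp set global rss=disabled
reg add "HKLM\System\CurrentControlSet\Services\lanmanworkstation\parameters" /v DisableBandwidthThrottling /t REG_DWORD /d 1 /f
reg add "HKLM\System\CurrentControlSet\Services\Tcpip\Parameters" /v EnableWsd /t REG_DWORD /d 0 /f

:: Verify the current TCP global settings afterwards:
netsh interface tcp show global
```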
Does anyone have any further recommendations? We need to transfer hundreds of gigabytes of data between these locations, and right now we have to use a roundabout method:
Copy from US Windows server to US Linux server in the same network.
Copy from US Linux server to UK Linux server.
Copy from UK Linux server to UK Windows server in the same network.
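The three hops above, sketched as commands (hostnames and paths are hypothetical placeholders, not our real machines):

```shell
# Hop 1: US Windows server -> US Linux server (pscp run on the Windows box)
pscp -batch db_backup.bak user@us-linux:/staging/

# Hop 2: US Linux server -> UK Linux server (this is the fast leg, ~35 MB/s)
ssh user@us-linux scp /staging/db_backup.bak user@uk-linux:/staging/

# Hop 3: UK Linux server -> UK Windows server (UK Windows box runs an SSH/SCP server)
ssh user@uk-linux scp /staging/db_backup.bak user@uk-windows:/restore/
```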
Surely there is some sort of tuning in Windows 2008 that causes these issues over high-latency connections.