crp0499 asked:

Not getting 10gig speeds on 10gig NICs and switch

I need some help to know where to start digging.

I've got two hosts (ESX) and they both have 10gig NICs.  They plug into a 10gig switch (Ubiquiti).

I'm using Veeam to migrate one VM from host one to host two and the speed is 48 MB/s.  Of course, I am expecting faster.

Now, my Veeam server is a standalone server with its own 10gig NIC, going into the same 10gig switch, so I thought maybe the traffic is passing through the Veeam server and that's slowing it down.  Maybe it's the storage (since Veeam tells me the source is the bottleneck).  Of course, it dawned on me AFTER I started the move that I could have just moved it from one host to the other using vCenter, since the VM isn't changing storage, just moving hosts.  Stupid me... I don't want to cancel it, so I'm letting it run.

This is also a good exercise for me.  I need to know where I'm weak.

Anyway, where do I start?

No jumbo frames turned on by the way.

Thanks
ASKER CERTIFIED SOLUTION
Dr. Klahn

crp0499 (ASKER):

Thanks for all of that! Great info. I'll check cabling tomorrow.

You can also, of course, connect two host machines together using nothing more than a CAT6, CAT6a, or CAT7 cable. No switch needed, just plug them NIC to NIC. 10Gb WILL work over CAT6, but only up to about 55 m. With a short cable, maybe 2 m, it should be just fine. This could be useful as a diagnostic.
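If you do try the direct NIC-to-NIC link, it is worth confirming what speed the ports actually negotiated before testing throughput. Here is a minimal sketch for a Linux box (the interface name "eth0" is a placeholder; on ESXi you would check the Speed column of esxcli network nic list instead):

```python
# Minimal sketch: read the negotiated link speed of a NIC on a Linux host.
# The interface name "eth0" is a placeholder -- adjust for your system.
# The kernel exposes the speed in Mb/s via sysfs.

from pathlib import Path

def link_speed_mbps(iface: str = "eth0") -> int:
    """Return the negotiated link speed in Mb/s, or -1 if it can't be read."""
    speed_file = Path(f"/sys/class/net/{iface}/speed")
    try:
        return int(speed_file.read_text().strip())
    except (OSError, ValueError):
        return -1

if __name__ == "__main__":
    speed = link_speed_mbps("eth0")
    if speed >= 10000:
        print(f"Link negotiated at {speed} Mb/s -- full 10GbE")
    elif speed > 0:
        print(f"Link negotiated at only {speed} Mb/s -- check cabling/SFP")
    else:
        print("Could not read link speed (interface down or not present)")
```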

To achieve the full throughput, you will need a seriously fast disk subsystem; it is unlikely that even a high-end RAID system with magnetic drives will be able to keep up with a 10Gb LAN connection unless you have something like RAID 10 with a dozen 10K drives at each end.
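To put rough numbers on that (a back-of-the-envelope sketch; the 180 MB/s per-drive figure is an assumed ballpark for sequential reads on a 10K drive, not a measured value):

```python
# Back-of-the-envelope: how many spinning drives does it take to feed a 10Gb link?
# The per-drive throughput is an assumed ballpark for sequential reads on a
# 10K RPM drive; real numbers vary with workload, and RAID 10 mirroring
# roughly doubles the drive count needed for writes.

LINK_GBPS = 10
LINK_MBYTES_PER_S = LINK_GBPS * 1000 / 8    # ~1250 MB/s, ignoring protocol overhead

DRIVE_MBYTES_PER_S = 180                    # assumed per-drive sequential throughput

drives_needed = LINK_MBYTES_PER_S / DRIVE_MBYTES_PER_S
print(f"A 10Gb link can carry roughly {LINK_MBYTES_PER_S:.0f} MB/s")
print(f"At ~{DRIVE_MBYTES_PER_S} MB/s per drive, about {drives_needed:.0f} drives must stream in parallel")
```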
10Gbps cabling, switches, and NICs are often better thought of as elements in a network of many devices, where the high speed helps not so much with a single point-to-point transfer as with sharing the medium. Most of these machines have buffers, so data is buffered up, sent or received, and then pauses in favor of other devices communicating. The full speed may be hit from time to time, but the sustained average won't match it. As others have pointed out, it's the other elements of the system that limit average speed.
Think of shooting bullets. The mass rate of transfer is high for one bullet but not for a magazine full of bullets. Maybe a poor analogy, but it paints a picture. Now consider reloading...
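For a sense of scale, the 48 MB/s reported in the question is only a few percent of what the link itself can carry, which fits with the bottleneck being storage or the Veeam data path rather than the 10Gb network. A quick arithmetic sketch:

```python
# How much of a 10Gb link does 48 MB/s actually use?
observed_mb_per_s = 48                        # throughput Veeam reported
observed_gbps = observed_mb_per_s * 8 / 1000  # megabytes/s -> gigabits/s

link_gbps = 10
utilisation = observed_gbps / link_gbps * 100

print(f"48 MB/s is about {observed_gbps:.2f} Gb/s")
print(f"That is roughly {utilisation:.0f}% of a 10Gb link")   # ~4%
```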
Yeah, and of course even if you don't get 10Gb throughput, you might get 2 or 3Gb. For one site I worked on, that meant the nightly disk-to-disk backup window shrank from a difficult 12-hour maximum to a manageable 5 hours.

Even a modern desktop with an SSD, copying from a fairly pedestrian file server will manage much more throughput than 1Gb can provide.
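To see how that plays out, here is a rough sketch of backup duration versus effective link speed (the 5 TB dataset size is an illustrative assumption, not a figure from this thread); the numbers line up roughly with the 12-to-5-hour improvement mentioned above:

```python
# Rough sketch: how long does a disk-to-disk backup take at different effective speeds?
# The 5 TB dataset size is an illustrative assumption, not a figure from the thread.

DATASET_TB = 5
DATASET_BITS = DATASET_TB * 8e12        # decimal terabytes -> bits

for effective_gbps in (1, 2.5, 5, 10):
    seconds = DATASET_BITS / (effective_gbps * 1e9)
    print(f"{effective_gbps:>4} Gb/s effective -> {seconds / 3600:.1f} hours")
```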
Enable jumbo frames and, if possible, RDMA (a quick end-to-end jumbo-frame check is sketched below).
Use SFP+ connectors (note the plus).
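If you do turn jumbo frames on, the MTU has to match end to end (vmkernel port, vSwitch, and physical switch), and a large do-not-fragment ping is the usual way to confirm it; on ESXi that is vmkping -d -s 8972 <target>. Below is a hedged sketch using the Linux ping flags; the target address is a placeholder:

```python
# Sketch: verify that jumbo frames work end to end by sending a large,
# do-not-fragment ICMP packet. 8972 bytes of payload + 28 bytes of
# ICMP/IP headers = a 9000-byte IP packet, matching an MTU of 9000.
# Uses the Linux flags (-M do); Windows uses "ping -f -l 8972",
# and ESXi uses "vmkping -d -s 8972".

import subprocess

def jumbo_frames_ok(target: str, payload: int = 8972) -> bool:
    """Return True if a do-not-fragment ping of `payload` bytes gets through."""
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(payload), "-c", "3", target],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    target = "192.168.1.20"   # placeholder: the far-end vmkernel/storage IP
    if jumbo_frames_ok(target):
        print("Jumbo frames pass end to end")
    else:
        print("Large DF ping failed -- MTU is not 9000 somewhere on the path")
```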
If you want to know how fast the network can go, run a network test like iperf. It isn't impacted by things like the speed of storage or all of the housekeeping that needs to happen to migrate a VM without corrupting things.

You might be able to run it natively within ESXi, or you can run it from VMs.
https://www.virtuallyghetto.com/2016/03/quick-tip-iperf-now-available-on-esxi.html
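As a minimal sketch of scripting such a test (assumes iperf3 is installed and a server is already listening via iperf3 -s on the far end; the server IP is a placeholder):

```python
# Sketch: run an iperf3 client against a server and report the measured
# TCP throughput. Assumes "iperf3 -s" is already running on the target host.

import json
import subprocess

def measure_throughput_gbps(server_ip: str, seconds: int = 10) -> float:
    """Run iperf3 in JSON mode and return receiver-side throughput in Gb/s."""
    out = subprocess.run(
        ["iperf3", "-c", server_ip, "-t", str(seconds), "-J"],
        capture_output=True,
        text=True,
        check=True,
    )
    result = json.loads(out.stdout)
    bits_per_second = result["end"]["sum_received"]["bits_per_second"]
    return bits_per_second / 1e9

if __name__ == "__main__":
    server = "192.168.1.10"   # placeholder: IP of the host running "iperf3 -s"
    print(f"Measured {measure_throughput_gbps(server):.2f} Gb/s")
```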
I would also check this:

"They plug into a 10gig switch (ubiquiti)."

I know for a fact some vExperts have returned Ubiquiti switches to Ubiquiti because the performance was poor! (and Ubiquiti could not fix it!)
crp0499 (ASKER):

Thank you all.  That was a learning experience for me.  TONS of good info in there.  I'll be on-site today to troubleshoot.