Measure actual throughput on a FE or Gigabit LAN connection

I have been looking into an issue with a customer's LAN, investigating slow throughput.

A couple of things I have learned have confused me about how to determine the actual speed of a connection between a PC and its switch/router...


When I throttle the port a PC is connected to down to 512 Kb/s using QoS, the PC still says the link speed is 1 Gb/s.


When I set the port speed to, say, 10 Mb/s, Windows says it is connected to a 10 Mb/s link.


The difference suggests Windows (or the NIC) asks the switch for the link speed and doesn't actually measure the line speed.
Why am I looking at this?

I have always been curious about the impact poorly deployed cabling has on actual LAN performance.  If, for example, a cable ran past a device that induced some sort of interference that slowed traffic across the cable, Windows might be oblivious to it, and all the testing at switch and device level might not highlight that the cable run from the outlet to the patch panel is the problem.

Your collective thoughts/knowledge and understanding appreciated.

When the switch and PC connect, they negotiate the link speed.  That is what the PC is displaying.  When you throttle the speed down at the switch with QoS, the port still has the full link speed; it's just not allowed to sustain throughput at that rate.  I would assume that even with it throttled, packets travel in the standard 1G mode at full speed.  The port is just limited in how much of the time it can use the link.

When you change the port speed on the switch or PC, the actual link speed changes.

Interference on the cable won't technically "slow" down the traffic.  It can induce errors that cause packets to be retransmitted, which lowers throughput.

If you want to measure the throughput, a free tool at this site was recommended to me (I've not tried it yet):
I would hard-set both sides of the interface (server and switch) to 1 Gb/s full duplex.
Then I would clear the interface counters on the switch and push traffic through it.  While pushing traffic through it, monitor the usage with something like PRTG.

You can also take the amount of traffic passing through the interface and calculate its bandwidth usage from the txload and rxload counters.
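On Cisco-style gear, txload and rxload are reported as a fraction of 255 of the configured interface speed.  As a rough sketch of the conversion mentioned above (assuming that n/255 convention; the function name is mine):

```python
def load_to_mbps(load_numerator: int, interface_speed_mbps: float) -> float:
    """Convert a Cisco-style load fraction (n/255) to Mb/s."""
    return interface_speed_mbps * load_numerator / 255

# e.g. an interface showing "txload 13/255" on a 1 Gb/s port:
mbps = load_to_mbps(13, 1000)
print(f"{mbps:.1f} Mb/s")  # prints "51.0 Mb/s"
```

This is only as accurate as the load interval the switch averages over, so treat it as a ballpark figure rather than a measurement.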

a cable ran past a device that induced some sort of interference that slowed traffic across the cable

Yeah...that doesn't happen.  Everything still runs at the same speed; the link is just filled with so much interference that the effective bandwidth is degraded.  It can't move enough good packets to sustain a certain level of traffic.

Over 100m of copper cable, signal propagation (about 2 x 10^8 m/s) is slower than the speed of light (3 x 10^8 m/s), but that half-microsecond propagation delay should not be increased by electrical interference.  IIRC, inductance will cause a voltage drop, obscuring the signal from receivers expecting signal traffic within a narrow range.
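The half-microsecond figure above falls straight out of the arithmetic; a quick sketch (assuming the ~2 x 10^8 m/s propagation speed quoted for copper):

```python
CABLE_LENGTH_M = 100        # max horizontal run for copper Ethernet
SPEED_IN_COPPER = 2.0e8     # m/s, roughly two-thirds the speed of light
SPEED_OF_LIGHT = 3.0e8      # m/s, for comparison

delay_s = CABLE_LENGTH_M / SPEED_IN_COPPER
print(f"{delay_s * 1e6:.2f} microseconds")  # prints "0.50 microseconds"
```

Interference changes whether a symbol is decoded correctly, not how long it takes to arrive.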

Using policy to limit bandwidth just takes away time slices from the port.  Make it 50% of a 1 Gbps port, and every other timing interval will leave the port inactive.

By setting (or negotiating) the connection at 10 Mbps, Fast Ethernet, or Gigabit Ethernet, you've set a maximum bandwidth and established how signals will be transmitted.  There are countless variables, with the end result that you will _never_ get 100% of that speed.

With some packet tests, you can get close enough to be happy.

995Mbps on a 1Gbps direct internet access circuit was far more than I ever expected.  It actually crashed the NIC on my laptop.

10MB/sec on a 100Mbps WAN link...that's fine too.

But, none of those tests were just to the local switch.  You have switches, routers, plus the end nodes running test software to do the measurements.

If you want to eliminate the switch, set up two computers/laptops at either end of the cable run.  Run iperf as a server on one (configure it to log to a text file).  Most laptops and modern NICs will auto-crossover (Auto MDI-X), so no crossover cable is needed.  You should be able to get 11-12 MB/sec on a 100 Mbps connection, and 100-120 MB/sec on a gigabit connection.  You can also set the logging units to Mbps to simplify conversions.
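Those MB/sec figures line up with line rate once you convert units; a small sketch of the sanity check (the ~5-10% framing and TCP/IP overhead figure is a rule of thumb, not from iperf itself):

```python
def mbytes_to_mbits(mb_per_s: float) -> float:
    """Convert megabytes/sec (as iperf may report) to megabits/sec."""
    return mb_per_s * 8

def efficiency(measured_mbps: float, line_rate_mbps: float) -> float:
    """Fraction of the nominal line rate actually achieved."""
    return measured_mbps / line_rate_mbps

# 11.5 MB/sec measured on a 100 Mb/s link:
m = mbytes_to_mbits(11.5)
print(f"{m:.0f} Mb/s, {efficiency(m, 100):.0%} of line rate")  # prints "92 Mb/s, 92% of line rate"
```

A result far below ~90% of line rate on a clean point-to-point run is what would point the finger at the cable.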

mbkitmgr (Author) commented:
Aleghart, awesome answer; it's exactly the type of information I was after.

In my early IT career I worked in a 2000 MW power station.  My first project back then was to project-manage the deployment of shielded Cat 3, fiber and phone lines around the site.  During testing by the contractors who deployed the various media, I got to see the effect some interference had on the unshielded Cat 3 that was being replaced (the test tool looked like an oscilloscope).

IIRC, inductance will cause a voltage drop, obscuring the signal from receivers expecting signal traffic within a narrow range.

Does this constitute what is termed dropped packets?
Dropped packets usually occur due to network congestion and full queues.  If there is not sufficient space to queue new packets, the newest packets are discarded, since the queue is FIFO (tail drop).  Once the queue has room, it will accept packets again.
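A minimal sketch of that tail-drop behaviour, with a hypothetical 4-packet queue absorbing a burst of 8:

```python
from collections import deque

QUEUE_DEPTH = 4          # hypothetical buffer size
queue = deque()
dropped = []

def enqueue(pkt):
    """FIFO tail drop: when the queue is full, the newest packet is discarded."""
    if len(queue) >= QUEUE_DEPTH:
        dropped.append(pkt)
    else:
        queue.append(pkt)

for p in range(8):       # burst of 8 packets into the 4-deep queue
    enqueue(p)

print(list(queue), dropped)  # prints "[0, 1, 2, 3] [4, 5, 6, 7]"
```

Interference-induced bit errors are a different failure mode: the packet arrives but fails its checksum and is discarded by the receiver, then retransmitted by the sender.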

Induced current can bring the signal voltage out of tolerance.  On a twisted pair, interference should be nearly identical on both wires, since the close twist exposes each wire equally to the same source of interference.

For example, an NRZI-encoded signal expects a swing from +1 VDC to -1 V to indicate a binary '1'.  If the voltage stays at -1 V in the next timing cycle, that indicates binary '0'.  (NRZI means non-return-to-zero, inverted.  The signal should never sit at zero.)

+1 = 1 (initial transition)
+1 = 0 (no change)
-1 = 1 (change)

In MLT-3 encoding, there are three states: +1, 0, and -1.

If the induced current adds the same voltage to both wires, the twisted pair gives a differential that cancels out the added voltage:

EXAMPLE 1 - no induced voltage
D1 = +1.0V
D2 = -1.0V
D1 - D2 = +2.0V

EXAMPLE 2 - induced voltage = +100mV
D1 = +1 +0.10V = +1.1V
D2 = -1V +0.10V = -0.9 V
D1 - D2 = +2.0V

EXAMPLE 3 - induced voltage = +700mV
D1 = +1 +0.70V = +1.7V
D2 = -1V +0.70V = -0.3 V
D1 - D2 = +2.0V

Looks all the same, right?  But in Example 3, D2 arrives at -0.3 V.  That's pretty close to zero.  How does the receiver know that it is actually a signal, especially if the tolerance might be +/- 0.4 V?

These are made-up numbers, but you get the idea.  Induced current is additive over the length of exposed wire.  Wires crossing at 90 degrees pick up less voltage than the same cables run in parallel for 5-10 feet.
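The three examples above can be sketched in a few lines; with the same made-up numbers, the differential stays at +2.0 V while the individual conductor drifts toward zero:

```python
def receive(d1: float, d2: float, noise: float):
    """Model common-mode interference: the same induced voltage on both wires.
    Returns (differential, D2 as seen at the receiver)."""
    d1n, d2n = d1 + noise, d2 + noise
    return d1n - d2n, d2n

# Example 3: +700 mV induced on a +1 V / -1 V pair
diff, d2 = receive(+1.0, -1.0, 0.7)
print(round(diff, 3), round(d2, 3))  # prints "2.0 -0.3"
```

The differential cancels the common-mode noise exactly, which is why the receiver can still decode the pair; the danger is each conductor drifting outside its own expected voltage window.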