Ethernet vs. leased line

Can anyone explain why latency is lower with an Ethernet connection as compared to leased lines? Say we have 10M Ethernet and a 5xE1 connection; both have the same bandwidth, but I was told that Ethernet will have the lower latency. I know it has the higher overhead.
totaramAsked:
Don JohnstonInstructorCommented:
I don't know that you can say that Ethernet has a lower latency than leased lines. Latency is a result of processing delay, queuing and a few other factors. There's nothing inherent in Ethernet which would guarantee a lower latency.
Dave BaldwinFixer of ProblemsCommented:
If both signals are coming from the same source, the 5xE1 connection has to be translated to a different signal format, which could account for some increased latency on both ends.  I would think the data overhead would be the same if the signals are coming from the same source and going to the same destination.  The Ethernet to 5xE1 translation should be invisible to the data.
totaramAuthor Commented:
Yes.. I too was confused when someone told me that. However, Ethernet is a dedicated resource, unlike DSL or cable.
Don JohnstonInstructorCommented:
Ethernet isn't always a dedicated resource.  For example, if you're doing 802.1q trunking, there could be hundreds of unrelated Ethernet frames (VLANs) sharing a single wire.  And in a provider network, that's not uncommon.
Dave BaldwinFixer of ProblemsCommented:
Ethernet is a dedicated resource, unlike DSL or cable.
That makes no sense to me.  I have both DSL and Cable.  Nobody else is on my 'lines'.  Unless you mean you're looking at a point-to-point Ethernet connection.  I would suspect that Ethernet is handled the same way as DSL and Cable.  The dedicated part is run to the nearest common network port.  Since Ethernet is rated for only 100 meters at full speed, I don't think you are ever going to get a real point-to-point connection that is farther than that without interface equipment.  Maybe you could get fiber installed point to point... but if you can, you have a whole lot more money than anyone I know.
Don JohnstonInstructorCommented:
Since Ethernet is rated for only 100 meters at full speed
Huh???
Dave BaldwinFixer of ProblemsCommented:
You know very well that the standard 10/100/1000 networks that you can plug into your desktop network card are limited by spec to 100 meters.  Past that you have to add other equipment for it to work.  Even if you could buy it, a 20,000 foot roll of Cat5e is not going to get you a 20,000 foot Ethernet connection.
Don JohnstonInstructorCommented:
Since the author's question was comparing Ethernet to leased lines, I assumed this discussion was about provider networks, which don't use copper for Ethernet. 100 meters is a limitation for copper, not fiber.
Dave BaldwinFixer of ProblemsCommented:
And that is exactly the point.  No matter what he does, it will require interface equipment.  It is not simply a matter of hooking up Ethernet cables.
Fred MarshallPrincipalCommented:
You didn't say what sort of connections you're referring to, except in terms so general that little sense can be made of them.  Hence the answers you've received.

Ethernet is a *local* connection.
Leased lines are not *local* at all.

Maybe a different question would be appropriate.

For example, Ethernet local connections won't have much latency at all.
Leased lines from one side of the world to the other will have a lot.
So what are you really trying to compare?
robocatCommented:
Some service providers sell WAN lines as "ethernet connections" and I'm pretty sure that's the question being asked.

It means that you get a line with an ethernet connector on both sides and more importantly, it is a transparent layer 2 WAN connection supporting broadcast etc...

Because the underlying transport network is likely to be more modern and have less overhead (e.g. ethernet over MPLS or even pure ethernet), latency can be lower than older SDH/ATM-based networks. But that really depends on the underlying infrastructure of the provider.
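To put rough numbers on that overhead difference, here is an illustrative back-of-the-envelope sketch in Python. The framing figures are textbook values (Ethernet preamble/header/FCS/inter-frame gap versus the ATM AAL5 "cell tax"), not anything provider-specific:

```python
import math

# Rough, illustrative comparison of per-packet transport overhead for a
# 1500-byte IP packet.  Framing figures are textbook values, not
# provider-specific measurements.
PAYLOAD = 1500  # bytes of IP packet carried

# Ethernet: 14 B header + 4 B FCS + 8 B preamble + 12 B inter-frame gap
eth_wire_bytes = PAYLOAD + 14 + 4 + 8 + 12

# ATM (AAL5): payload plus an 8 B trailer, padded into 48-byte cell
# payloads, with each 53-byte cell adding a 5-byte header ("cell tax")
cells = math.ceil((PAYLOAD + 8) / 48)
atm_wire_bytes = cells * 53

eth_efficiency = PAYLOAD / eth_wire_bytes
atm_efficiency = PAYLOAD / atm_wire_bytes
print(f"Ethernet efficiency: {eth_efficiency:.1%}")  # 97.5%
print(f"ATM efficiency:      {atm_efficiency:.1%}")  # 88.4%
```

So a pure-Ethernet or Ethernet-over-MPLS core really does spend fewer bits per packet than an ATM-based one; whether that shows up as noticeably lower latency still depends on the provider's infrastructure, as noted above.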
Fred MarshallPrincipalCommented:
I agree with robocat that service providers do/say what's described.  So, in that context then "Ethernet" means something quite different than what I was referring to.  

Consider this:

Ethernet is good for 100 meters because of the signaling mechanism.  
So, ever wonder how ISPs and others can run their networks over miles of microwave links, etc?
It's because "It's not Ethernet anymore Toto" (it just looks like it at the ends for basic networking purposes - but the characteristics are different).
So, with an MPLS or other WAN line, there are perforce going to be other signaling protocols involved and it's impossible to say with any precision what sort of latency may be involved.  A measurement is likely the best.  Or, perhaps you could find data for various types of such lines.  I've not looked.

When I mentioned "half way around the world", I was serious.  Intercontinental interoffice links suffer from latency due to the distance involved.  There are products intended to help deal with it.
totaramAuthor Commented:
I don't think I clarified my question enough. What I wanted to know is: on a WAN connection, does a low-bandwidth link like E1/T1 have increased RTD as utilization increases, compared to Ethernet? Does the capacity or fullness affect the RTD of the traffic?

Thanks;
Don JohnstonInstructorCommented:
on a WAN connection, does a low-bandwidth link like E1/T1 have increased RTD as utilization increases, compared to Ethernet?
Assuming no oversubscription, there shouldn't be any difference in delay. But provider equipment, processing and queuing delay could affect the outcome.

The bottom line is that there is nothing about Ethernet vs. T1/E1 that would impose a significant difference in delay on one but not the other.
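For what it's worth, the serialization delay (the time to clock a packet onto the wire) is nearly identical for the two links in the original question. A quick sketch, assuming a full-size 1500-byte packet and the nominal line rates:

```python
# Serialization delay: the time to clock one packet onto the wire.
# With comparable bandwidth, 10M Ethernet and 5 bonded E1s are within
# a few percent of each other.
FRAME_BITS = 1500 * 8         # a full-size 1500-byte packet

eth_bps = 10_000_000          # 10M Ethernet
e1_bps = 5 * 2_048_000        # 5 bonded E1s = 10.24 Mbit/s

eth_delay_ms = FRAME_BITS / eth_bps * 1000
e1_delay_ms = FRAME_BITS / e1_bps * 1000
print(f"10M Ethernet: {eth_delay_ms:.2f} ms per packet")  # 1.20 ms
print(f"5xE1:         {e1_delay_ms:.2f} ms per packet")   # 1.17 ms
```

A 0.03 ms difference per hop is noise compared to the processing and queuing delays discussed in this thread.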
robocatCommented:
It all depends on the underlying infrastructure of the provider and how it is dimensioned.

Any WAN connection will show increased latency if you're getting near your maximum bandwidth. And there's no reason why ethernet should do better or worse than a leased line, given the same access bandwidth and well dimensioned networks.

However, if your provider says that he can deliver an ethernet WAN connection that has lower latency than a similar leased line, you'd better believe it. As I said before, the underlying transport network is likely to be more modern and have less overhead (e.g. ethernet over MPLS or even pure ethernet).
Fred MarshallPrincipalCommented:
Once more, one has to be careful what they mean in this context by "Ethernet".  
I don't see a one-to-one correspondence between T1/E1 and actual end-to-end Ethernet.
So, I would conclude, if that remains the basis of comparison (and I realize that it's probably NOT), then I would expect differences in latency from the very beginning.

Now, if the question is, as it appears to be:
"Will the latency (whatever it starts out to be) increase *differently* with increased traffic levels, i.e. on a fractional basis?" then I will assert that the answer is "yes".  But by how much for each?  That's much harder to answer.

And, if the use of the term "Ethernet" is as I described for MPLS earlier, then the comparison just becomes relatively impossible as there are too many varieties of that kind of potential "Ethernet"  to even try to deal with.
totaramAuthor Commented:
Hi Fred;
Why do you think that latency would increase on higher or increased traffic? Any reasons...
Fred MarshallPrincipalCommented:
I think that a general discussion is probably best here:

Consider a communication link with multiple "hops" as is typical of just about *any* communication link (so I don't just mean "hops" as in IP addresses in an internet traceroute).
Here's a very simple model: each device that packets pass through will have a buffer or two and some amount of processing.
- Packets arrive and land in an input buffer.
- Packets are processed in the cpu.
- Packets land in an output buffer.
In the best case, the buffers won't overflow so they won't affect the throughput.  

Consider low-loading operation:
- a packet arrives in the input buffer.
- it is immediately processed in the cpu
- it ends up in the output buffer for retransmission.
The latency introduced is the combined time it takes to do all these things.
And, of course, in a real device, the cpu is likely involved in ALL these operations.
And, if there is a communication protocol change in the device, then the cpu work will be greater and the simple model may not be very descriptive.

Consider a bit higher level of loading:
- packets arrive in the input buffer.
- they are processed in the cpu in some order so have to wait for their turn to be processed.
- packets end up in the output buffer for retransmission.
So, depending on the cpu power, there will be latency introduced due to the throughput load because packets have to wait in the input buffer for some average time which translates into latency.

Consider a yet higher level of loading:
- packets arrive in the input buffer.
- they are processed in the cpu in some order so have to wait for their turn to be processed.
BUT, if the cpu can't keep up with the input rate then some packets will be dropped from the input buffer.
OR, even if the cpu can keep up with the input rate then the output buffer may overflow and packets will be dropped from the output buffer.
When packets are dropped then maybe there will be retransmission from the source and this, you can imagine, will greatly increase the latency if it's prevalent.

So, this is a very general description of the sort of thing that can happen at higher or increased traffic.  Whether it happens or not or is very noticeable or not is going to be system specific.
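The store-and-forward model above can be sketched as a toy simulation. This is a hypothetical single-device model (random Poisson arrivals, fixed per-packet processing time), not a model of any particular router, but it shows the waiting time growing with load:

```python
import random

def mean_wait(arrival_rate, service_time, n_packets=50_000, seed=1):
    """Average time a packet waits in the input buffer of a single
    store-and-forward device before the CPU gets to it."""
    rng = random.Random(seed)
    clock = 0.0        # arrival time of the current packet
    busy_until = 0.0   # when the device finishes the previous packet
    total_wait = 0.0
    for _ in range(n_packets):
        clock += rng.expovariate(arrival_rate)  # random (Poisson) arrivals
        start = max(clock, busy_until)          # queue if the device is busy
        total_wait += start - clock
        busy_until = start + service_time       # fixed processing time
    return total_wait / n_packets

svc = 1.0  # one time unit to process each packet
low = mean_wait(arrival_rate=0.3, service_time=svc)   # ~30% utilization
high = mean_wait(arrival_rate=0.9, service_time=svc)  # ~90% utilization
print(f"mean wait at 30% load: {low:.2f} time units")
print(f"mean wait at 90% load: {high:.2f} time units")
```

At 30% load packets barely wait; at 90% load the average wait is far larger, which is exactly the latency-vs-loading effect described above, even before any packets are dropped.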
robocatCommented:
>Why do you think that latency would increase on higher or increased traffic? Any reasons...

Think of your leased line (or ethernet connection) as a highway. When there's not much traffic, all cars will drive at a certain speed. At each entrance ramp, the cars can join the other traffic smoothly and without delay.

Imagine traffic increases a lot. Now cars won't be able to take the entrance as smoothly and probably have to slow down to join traffic. Imagine even more traffic, and you will get more delays because there are cars in front of you who are also trying to enter the highway.

It is much the same in Network World. Even to the extent that packets get discarded (imagine cars being pushed into the ditch :-) ) when the lines get saturated, causing timeouts and retransmissions. This can make latency go through the roof.
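The highway effect even has a classic closed form: for a single queue with random (Poisson) arrivals and exponential service times, the textbook M/M/1 model, the average time in the system blows up as utilization approaches 100%. The 1.2 ms service time below is just a made-up figure for illustration:

```python
def mm1_latency(utilization, service_time):
    """Average time in system for an M/M/1 queue: S / (1 - rho)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1 - utilization)

S = 1.2  # ms per packet (hypothetical service time)
for rho in (0.1, 0.5, 0.9, 0.99):
    print(f"utilization {rho:4.0%}: average latency {mm1_latency(rho, S):6.1f} ms")
```

Going from 50% to 99% utilization multiplies the average latency by 50, which is why links are dimensioned to run well below saturation.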