Bandwidth and insertion delay: definitions needed

Ah hello.

I have been advised in person by a trustworthy source that the "bandwidth" of a device, when expressed in kb/sec, is the number of bits that the device is capable of writing to the wire in a second.  It was also stated that this is synonymous with "insertion delay".

Now, this makes sense, but since I cannot find anything on the net using this same terminology, I am unsure.

1) Is this the correct definition of "bandwidth"?
2) Is "insertion delay" the same as bandwidth?  Again, there doesn't seem to be anything confirming either way.

TIA
mrwad99 Asked:
David Johnson, CD, MVP (Owner) Commented:
Bandwidth and insertion delay are two different things, of which bandwidth MAY be a factor.
For instance, if you change the input to a flip-flop, there is a delay before the output changes.
Clock insertion delay is the delay from the clock definition point to the clock pin, due to propagation time.
http://bit.ly/17Tm0V7
gheist Commented:
Insertion delay is usually just called latency in networking circles.

Normally, packets to be sent out of an interface are arranged into a queue, then they go out on the interface with the transmit delay applied. If the queue is too big, or there are too many queues, that is called bufferbloat.
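The transmit delay mentioned above can be sketched numerically. The packet size and link rate below are illustrative assumptions, not figures from this thread:

```python
# Sketch: transmission ("serialization") delay of one packet on a link,
# i.e. the time to clock the packet's bits onto the wire.
# The 1500-byte frame and 512 kbit/s rate are assumed example values.

def transmission_delay_s(packet_bytes: int, link_bps: float) -> float:
    """Seconds needed to write one packet's bits to the wire."""
    return packet_bytes * 8 / link_bps

# A 1500-byte Ethernet frame on a 512 kbit/s link:
delay = transmission_delay_s(1500, 512_000)
print(f"{delay * 1000:.2f} ms")  # ~23.44 ms
```

Each packet waiting ahead in the queue adds roughly one more such delay, which is why an oversized queue (bufferbloat) inflates latency.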
mrwad99 (Author) Commented:
Er, ok...

So it seems that bandwidth is not insertion delay, but I'm still unsure on bandwidth:

All of this has come about as a result of http://en.wikipedia.org/wiki/Bandwidth-delay_product:

"The result, an amount of data measured in bits (or bytes), is equivalent to the maximum amount of data on the network circuit at any given time, i.e., data that has been transmitted but not yet acknowledged."


So if I write bits out at a speed of 512 kb/sec, then the maximum amount of data that can possibly be unacknowledged is indeed (the rate at which we can send bits out) × (how long it takes to get back our first ACK).  This makes sense, but it is never worded as such when looking up "bandwidth".
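That reasoning can be written down directly. A minimal sketch, assuming the 512 kb/sec figure above and an invented 100 ms round-trip time:

```python
# Bandwidth-delay product: max data "in flight" (sent but unacknowledged).
# The 100 ms RTT is an assumed example value, not from the thread.

def bdp_bits(bandwidth_bps: float, rtt_s: float) -> float:
    """Bits that can be transmitted before the first ACK returns."""
    return bandwidth_bps * rtt_s

in_flight = bdp_bits(512_000, 0.100)
print(f"{in_flight:.0f} bits = {in_flight / 8:.0f} bytes")  # 51200 bits = 6400 bytes
```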

Can someone please clarify??
gheist Commented:
That changes greatly from your description; see RFC 1122 (October 1989).
mrwad99 (Author) Commented:
?

Are you saying that what I quoted from Wikipedia is wrong?  I'm sorry, but I don't know what you are getting at with your last comment...
gheist Commented:
It depends on what the definition of "network circuit" is. Take a good read of sources other than Wikipedia.
BDP is commonly ping delay × bandwidth, e.g. ~0.3 s across the Atlantic.
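As a worked instance of "ping delay × bandwidth", with an assumed 100 Mbit/s path and the ~0.3 s transatlantic round trip mentioned above:

```python
# Bandwidth-delay product across an assumed transatlantic path.
# 100 Mbit/s and 0.3 s RTT are illustrative figures.
bandwidth_bps = 100e6
rtt_s = 0.3
bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"{bdp_bytes / 1e6:.2f} MB in flight")  # 3.75 MB
```

A number that large is why TCP needs window scaling (RFC 7323) on long fat networks: the classic 64 KB receive window is far smaller than the pipe can hold.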
mrwad99 (Author) Commented:
David, do you have anything else to add to this?
gheist Commented:
e.g. with UDP or raw Ethernet you will not receive any acknowledgements; with Wi-Fi, DSL or TCP you will get them, even retransmitted in the worst cases...
mrwad99 (Author) Commented:
OK, so I have my definition of insertion delay, but I'm still not sure on bandwidth...
David Johnson, CD, MVP (Owner) Commented:
"So if I write bits out at a speed of 512 kb/sec, then the maximum amount of data that can possibly be unacknowledged is indeed (the rate at which we can send bits out) × (how long it takes to get back our first ACK).  This makes sense, but it is never worded as such when looking up 'bandwidth'."

There are many layers to TCP/IP. Your application writes to the application layer, but at the transport layer each receiver has a receive buffer and informs the sender that it can receive X bytes. When the transmitter is told that 0 bytes are available, it sends probe packets until it either gets no response and times out, or is told how many bytes are available to be received.

In this case the insertion delay is the time from when you generate the data to be sent to the time it is acted upon by the receiver. (This is especially a concern to high-frequency traders in the stock exchanges.)

Packets travel close to the speed of light (in fibre optics, roughly 70% of the speed of light), BUT there are several links in the chain from transmitter to receiver whose bandwidth will cause delays; i.e. if any link's receive buffer is full, then the rest of the chain will be on hold. We also have to take into consideration the network access layer, which will also play a part, as it too has a maximum bandwidth and receive and transmit buffers. And there is the actual medium the packets travel on, i.e. the physical cable's maximum speed properties before packets get degraded or corrupted.
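The components described above (clocking bits out at each link's rate, signal travel time, and waiting behind queued packets) can be summed per hop. All link parameters below are invented for illustration:

```python
# Sketch of per-hop delay components; every figure here is an assumption.

def one_way_delay_s(packet_bytes, links):
    """links: list of (rate_bps, length_m, queued_packets) per hop."""
    PROP_SPEED = 2e8  # ~2/3 the speed of light in fibre, per the note above
    total = 0.0
    for rate_bps, length_m, queued in links:
        serialize = packet_bytes * 8 / rate_bps          # clocking bits onto the wire
        propagate = length_m / PROP_SPEED                # signal travel time
        queueing = queued * packet_bytes * 8 / rate_bps  # waiting behind other packets
        total += serialize + propagate + queueing
    return total

# Two hops: a slow 512 kbit/s access link, then a fast 1 Gbit/s long-haul link.
d = one_way_delay_s(1500, [(512_000, 100, 0), (1e9, 5_000_000, 0)])
print(f"{d * 1000:.2f} ms")  # ~48.45 ms
```

Note how the slow access link dominates serialization while the long-haul link dominates propagation, which is the point about the whole chain mattering, not just one link's bandwidth.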

gheist Commented:
There are many congestion notification mechanisms that tell you to slow down.
Still, the most effective and always-implemented one is slowing down when some packets are lost on a TCP path.
(Google "New Reno" for details.)

Another almost omnipresent mechanism is Ethernet flow control, but it is sometimes not to the point.
Say your whole office is wired for gigabit with a 100 Mbps internet router. Once the router starts to send flow-control messages, anybody talking to that port gets slowed down, even if 99.9% of their traffic is going to the file server... that traffic is slowed down too...
mrwad99 (Author) Commented:
OK, I'm not sure that insertion delay is

"the time from when you generate the data to be sent and the time it is acted upon by the receiver"

...surely that is half the RTT, plus the latency in the receiving TCP stack in getting the data up to the application layer?  I'm going to do some more digging/speaking to people, and will post back anything interesting.  Either way, I'll share the points here for effort made.
mrwad99 (Author) Commented:
Thanks both.  It seems that these terms are not easily defined, and I guess the answers may depend on the expertise of those asked; an electronics expert will most likely give a different answer from a higher-level network specialist.