  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 485

SLA Planning Math

I'm trying to plan an SLA, and I want to verify my math is right in terms of just straight Ethernet latency.

Assuming we want to be able to transfer an average of 58,783 bytes (~57k) over TCP, and receive the full response in 200ms:

For a single Ethernet frame to arrive at its destination, it should take around 0.3 ms on an unloaded network (that seems to be the generally accepted value for base Ethernet latency). Assuming a 1500-byte MTU, a TCP packet should be able to carry 1458 bytes of payload. Chopping the response up into TCP packets, that's 41 packets in one direction (58783/1458, rounded up); multiplying by 2 to account for acknowledgements gives 82 total packets exchanged (ignoring the handshake, and assuming persistent connections and fully scaled windows).
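
To sanity-check that packet arithmetic, here's a minimal Python sketch; the 1458-byte payload per 1500-byte MTU frame is simply the figure assumed above.

    import math

    RESPONSE_BYTES = 58783       # average response size from the SLA target
    PAYLOAD_PER_PACKET = 1458    # assumed TCP payload per 1500-byte MTU frame

    data_packets = math.ceil(RESPONSE_BYTES / PAYLOAD_PER_PACKET)  # 41 packets one way
    total_packets = data_packets * 2                               # 82, counting one ACK per data packet
    print(data_packets, total_packets)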

On an unloaded network, the full transfer should take about 24.6 ms (82 * 0.3), well under our 200 ms maximum. So far so good. Now for concurrency and bandwidth:

Assuming this happens on a gigabit Ethernet link, the link should be able to carry 134,217,728 bytes in one direction per second, or 40,265 bytes every 0.3 ms (the Ethernet latency period). Each 0.3 ms timeslice can therefore hold 26 TCP packets (40265/1500), and every multiple of 26 packets beyond that should result in frame queuing, doubling the latency.

I believe that if we're shooting for 200 ms and under, this can be sustained at 211 concurrent requests (200/24.6 * 26) while saturating the gigabit link. At 1,000 concurrent requests the latency should be roughly 946 ms (1000/(26/24.6)).
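
Here's the same thing for the bandwidth and concurrency side, as a rough Python sketch; it just replays the per-slice model above (26 full-size frames per 0.3 ms slice, one packet slot per request per slice), so treat it as a back-of-the-envelope check rather than a real capacity model.

    FRAME_LATENCY_MS = 0.3          # assumed base Ethernet latency per frame
    BYTES_PER_SECOND = 134217728    # gigabit figure used above (2**30 bits/s expressed in bytes)
    TOTAL_PACKETS = 82              # per-request packet count from the earlier math
    SLA_MS = 200.0                  # latency budget

    base_transfer_ms = TOTAL_PACKETS * FRAME_LATENCY_MS                 # ~24.6 ms per request, unloaded
    bytes_per_slice = BYTES_PER_SECOND * FRAME_LATENCY_MS / 1000        # ~40,265 bytes per 0.3 ms slice
    packets_per_slice = int(bytes_per_slice // 1500)                    # 26 full-size frames per slice

    max_concurrent = SLA_MS / base_transfer_ms * packets_per_slice      # ~211 requests inside 200 ms
    latency_at_1000_ms = 1000 / (packets_per_slice / base_transfer_ms)  # ~946 ms at 1,000 concurrent
    print(packets_per_slice, round(max_concurrent), round(latency_at_1000_ms))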

If any of my constants or formulas are crap, let me know. :)
Asked by: hackerbob
1 Solution
 
Norm Dickinson (Guru) commented:
That appears to be correct math for the theoretical latency. However, as Yogi Berra once said, "In theory, there is no difference between theory and practice. In practice, there is."
 
hackerbob (Author) commented:
Has anyone done any latency planning like this? Was the formula similar?
 
Norm Dickinson (Guru) commented:
Here are some interesting links for this kind of planning. I have done this sort of planning myself, with basically the same formulas you used, and it turned out pretty close to the measured results once the final installation was assembled - after we factored in a few things that hadn't been part of the original plan, of course.

http://serverfault.com/questions/137348/how-much-network-latency-is-typical-for-east-west-coast-usa

http://bradhedlund.com/2008/12/19/how-to-calculate-tcp-throughput-for-long-distance-links/

http://www.numion.com/calculators/Distance.html

http://www.networkworld.com/community/node/58789

http://www.itu.int/ITU-D/asp/CMS/Events/2009/PacMinForum/doc/Theme-2_O3b_Latency_White_Paper.pdf
 
hackerbob (Author) commented:
Thank you, I feel much more confident in my numbers now.
 
Norm Dickinson (Guru) commented:
Best of luck!
