Network utilization

Are there any figures for recommended network utilization at various strategic network points?

I've heard a figure bandied around of 7% average utilization at server NICs, but I'd like to know whether this is correct and what that figure should be at LAN switches and gateways.

Is this defined anywhere by any big players like Cisco, etc.?


elf_bin Commented:
Maybe not.  If I'm running at 90% of bandwidth used on a gigabit link, that means I have around 100Mbps free.  I can run some pretty cool applications on 100Mbps (think of the millions of people surfing, chatting, video streaming and downloading with less than 2Mbps).  If you're running on a 100Mbps network, then you have 10Mbps of available bandwidth.  If I have to, I'd then implement QoS or other traffic-shaping technologies to ensure clients still have reasonable access to their applications on the network.  But that begs another question: what is reasonable access?  10 seconds?  1 second? 0.1 seconds?  And is the CEO more important than an accountant running month-end?  Don't forget that the packets running through a 10Mbps link "travel" at the 10Mbps rate; they can't go slower than the transmission technology involved (well, they can if your 1Gb link is auto-neg'd down to 100Mb, but that still runs as fast as the technology will allow).  What slows things down is buffering space and queue lengths on the switches, routers, bridges and so on.
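
As a quick sanity check on that arithmetic, here's a minimal sketch (Python, with a hypothetical helper name):

```python
# Hypothetical sketch: headroom left on a link at a given utilization.
def headroom_mbps(link_mbps: float, utilization: float) -> float:
    """Return unused capacity in Mbps at a fractional utilization (0.0-1.0)."""
    return link_mbps * (1.0 - utilization)

# At 90% utilization, a gigabit link still has ~100 Mbps free,
# while a 100 Mbps link has only ~10 Mbps free.
print(round(headroom_mbps(1000, 0.90)))  # 100
print(round(headroom_mbps(100, 0.90)))   # 10
```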

Personally, I work on a 75%-85% rule of thumb (depends on the network technology involved).  Nobody has ever told me that was some sort of "magic figure" and I completely made it up.  If a device on my network has a queue of packets that is approaching the physical limits of the device (like a router or a switch), then I'd:
1) Investigate why that is happening - is there some idiotic device that's just jabbering?
2) Check the device itself - has a CPU broken so packets can't be passed as fast?
3) Can I bypass this device with some traffic (i.e. route around it)?
4) Can I alleviate the rate of packets coming at this device (traffic shaping)?
5) Can I replace or upgrade the device with something with more bite?
So how do you know how much buffer space is available on your device?  Well, that depends on the device.  Some vendors "publish" this through their stats in SNMP; for others, you have to ask.  Otherwise, add up the total amount of octets coming at your device and subtract that from the available buffer space.
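
If your device does expose octet counters over SNMP (the standard ifInOctets/ifOutOctets, for example), utilization can be derived from two samples taken a known interval apart. A minimal sketch in Python (the function name and figures are illustrative):

```python
# Hypothetical sketch: estimating link utilization from two readings of an
# SNMP octet counter (e.g. ifInOctets + ifOutOctets), taken interval_s apart.
def utilization_pct(octets_t0: int, octets_t1: int, interval_s: float,
                    link_mbps: float, counter_bits: int = 32) -> float:
    """Percent of link capacity used between two counter samples."""
    delta = (octets_t1 - octets_t0) % (2 ** counter_bits)  # handles one counter wrap
    bits_per_s = delta * 8 / interval_s
    return 100.0 * bits_per_s / (link_mbps * 1_000_000)

# 750,000 octets in 60 s on a 100 Mbps link:
print(round(utilization_pct(1_000_000, 1_750_000, 60, 100), 2))  # 0.1 (%)
```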

Hope this helps.
I assume by the phrase "network utilization" you mean something like a measure of the amount of bandwidth being consumed over time, versus the total bandwidth available on the line?

If you're using 7% of all available bandwidth, you've over-engineered your connection.  Why would you want to pay for bandwidth that you're not using?  Nobody's going to tell you how much bandwidth you should or should not use.  Networking companies want you to use 100% of your bandwidth so they can sell you their next product upgrade.  *Some* application vendors quote really silly things like "our application requires 50% of a 10Mb line", which is meaningless without a time element: do they mean 5Mb per second is required for this application at all times, never changing regardless of how many people use it?

You need as much bandwidth as your applications need.  If you measure your bandwidth at set locations for a set period at a certain time, the best you can say is that at this particular time, at this particular location, we needed about x amount of the available bandwidth.  More important questions are: how much do we use during peak usage times?  More importantly (and, in my experience, harder to gather), what are those peak usage times?  How many more applications are we going to add in the next x months?  Do we have critical or choke points on our network?  And things like that.
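
The average alone hides those peaks, which is why it's worth comparing it against a high percentile of your samples (the common "95th percentile" measure). A minimal sketch, assuming you already have per-interval utilization samples:

```python
# Hypothetical sketch: average vs. 95th-percentile utilization over samples.
def average_and_p95(samples_mbps):
    """Return (mean, approximate 95th percentile) of a list of samples."""
    s = sorted(samples_mbps)
    avg = sum(s) / len(s)
    p95 = s[min(len(s) - 1, int(0.95 * len(s)))]
    return avg, p95

# Mostly-idle link with a short busy-hour burst (Mbps, one sample per interval):
samples = [2] * 90 + [80] * 10
avg, p95 = average_and_p95(samples)
print(avg, p95)  # 9.8 80 -- the average alone suggests a lightly loaded link
```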

So no.  No vendor worth their salt would ever tell you how much bandwidth you need.

Hope that helps.
Dave_Angel_Portsmouth (Author) Commented:
I am aware that every network has peaks of usage, but in order for a successful and timely response from any given application, there must be sufficient bandwidth available at any given time.

I know that every network is different, and that the introduction of any new service would require profiling, but I was just after some rule-of-thumb figures.

If you are using upwards of 90% bandwidth all the time, you are heading for a fall, are you not?
Dave_Angel_Portsmouth (Author) Commented:
Microsoft make an observation that 7% is the maximum average for Exchange Server....

"The Bytes Total/sec performance counter shows the rate at which the network adapter is processing data bytes. This counter includes all application and file data, in addition to protocol information, such as packet headers. The Bytes Total/sec performance counter is the sum of the Network Interface\Bytes Received/sec and Network Interface\Bytes Sent/sec.

The Exchange Server Analyzer reports the maximum value for the performance counter during the collection interval. For a 100 megabits per second (Mbps) network adapter, the value of the Bytes Total/sec performance counter should be under 7 megabytes/second. For a 1000 Mbps network adapter, the value of the Bytes Total/sec performance counter should be under 70 megabytes/second. If Bytes Total/sec exceeds 7 percent of the bandwidth of the network, the Exchange Server Analyzer displays an error."
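
The check that quote describes can be sketched as follows; note the thresholds (7 MB/s for a 100 Mbps adapter, 70 MB/s for a 1000 Mbps adapter) are taken directly from the quoted figures, and the function name is hypothetical:

```python
# Hypothetical sketch of the quoted Exchange Server Analyzer check:
# Bytes Total/sec should stay under 7 MB/s on a 100 Mbps adapter and
# under 70 MB/s on a 1000 Mbps adapter (figures as quoted above).
def exceeds_exchange_threshold(bytes_total_per_sec: float, nic_mbps: float) -> bool:
    limit_bytes_per_sec = nic_mbps * 0.07 * 1_000_000  # 100 -> 7 MB/s, 1000 -> 70 MB/s
    return bytes_total_per_sec > limit_bytes_per_sec

print(exceeds_exchange_threshold(8_000_000, 100))    # True: over 7 MB/s
print(exceeds_exchange_threshold(50_000_000, 1000))  # False: under 70 MB/s
```
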
Fred Marshall (Principal) Commented:
Long ago I heard 5% for ethernet.  But, this was before the widespread use of switches in place of hubs, etc.

Much depends on how you define "bandwidth".  
The more averaged the number, the more derating is required:
If you have a source that would run at 100Mbps if it could, but averages 10Mbps of bandwidth, that could mean it uses 100Mbps 10% of the time.
The former number could choke a 100Mbps network while the latter would not.
But the real use is 100Mbps .. just not all the time.
If the 10% of 100Mbps is 5 milliseconds out of every 50 milliseconds, then it's unlikely that you will notice it.
If the 10% of 100Mbps is 5 seconds out of every 50 seconds, then it's likely that you will notice it.
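
The point above can be sketched numerically: two sources with the same 10% average load but very different burst lengths (figures illustrative):

```python
# Hypothetical sketch: average rate of a source that bursts at burst_mbps
# for burst_s seconds out of every period_s seconds.
def average_mbps(burst_mbps: float, burst_s: float, period_s: float) -> float:
    return burst_mbps * burst_s / period_s

# Identical 10 Mbps averages; only the burst duration differs:
print(round(average_mbps(100, 0.005, 0.050), 1))  # 10.0 -- milliseconds: unnoticed
print(round(average_mbps(100, 5, 50), 1))         # 10.0 -- whole seconds: noticed
```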

Often you don't know whether things run for 2 milliseconds or 5 seconds or anywhere in between.  I believe that's why folks suggest a conservative loading factor - so that bursty "long" high loads won't disrupt things.

Another way to say 5% loading is:
"I'm going to want a safety factor of 20 so that bursty high loads (individual or combined) don't disrupt the network."
So then, 7% would be a safety factor of 14.  Different but not so much different, eh?
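
That reciprocal relationship between a target utilization and a safety factor can be written down directly (hypothetical helper name):

```python
# Hypothetical sketch: a target utilization percentage is just the
# reciprocal of a burst "safety factor", as described above.
def safety_factor(target_utilization_pct: float) -> float:
    return 100.0 / target_utilization_pct

print(safety_factor(5))         # 20.0 -- the 5% rule of thumb
print(round(safety_factor(7)))  # 14   -- the 7% figure
```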

Obviously this all depends on what's going over the network.  Doing backups over the network?  That could cause high loads even though the backup application may not actually load up to the limit of the NICs involved - for various reasons having to do with how the hosts and the backup applications behave.
Casual internet browsing or low bandwidth transaction processing is unlikely to cause a problem just because the individual hosts' bandwidths are low.
Concentrated transaction processing at a database server could be a high load.
etc. etc.

In general, look for choke points in the network topology.
The internet gateway is one.
Servers might be amongst them.
Single cables would definitely be one because, no matter what, ALL traffic that must share a single cable will contend for it, queuing up and potentially slowing down.
That might be an argument for a distributed system of file servers, internet gateways, etc. so that no one wire is clogged with traffic.

[It's interesting to ponder how much "better" a switch is compared to a hub.  I mean a switch that will route packets from port A to port B without colliding with packets going between port C and port D.  If all of the packets have to initiate or end up on a single wire at port F instead, then I don't see any benefit of the switch feature.  This would be the case if the switch were serving an internet gateway or a server or ......  All of the packets eventually get on the one wire where they can collide.]

I don't see how 75% - 85% loading can work very well UNLESS:
- in a system with many nodes, the "loading" is determined by the NIC bandwidths or something rather drastic like that and not by adding up all the actual application bandwidths.
- in a system with only two nodes, then sure.
In fact, if you used NIC bandwidths for bandwidth utilization calculations, you could hook up 20 computers and assume 5% utilization each to get 100% "loading" (with 5% actually being a pretty big number for a workstation).
I don't see much difference between 75%-85% as a standard and 100% as a standard.  The difference is just too small to matter much.  ... well, depending on what one really means by these numbers of course!  And, that is a key point.
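
The aggregation above is easy to make concrete; here is a minimal sketch, where the per-host figure is an assumption:

```python
# Hypothetical sketch: summing assumed per-host NIC utilizations against a
# shared uplink of the same nominal speed as each host's NIC.
hosts = 20
per_host_utilization = 0.05   # 5% of each host's NIC -- an assumed figure

combined = hosts * per_host_utilization
print(combined)  # 1.0 -> the shared wire is nominally "100% loaded"
```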

Most of these numbers are pretty fuzzy, so the margins on them need to be big.  5% or 7% are pretty much the same thing.  This applies in a general office network.  But in an "engineered" dedicated control system application or some such thing, you might have rather precise measures of bandwidth utilization.  In the latter you might well want to have a higher safety factor.  Nobody's perfect!

Question has a verified solution.
