Network utilisation of Windows servers

Please excuse my ignorance in this area; I come from a risk management background rather than a technical infrastructure/Windows networking one, but I am researching capacity monitoring of servers. From my reading, four metrics are generally seen as useful to monitor in any IT environment, for each of your production servers:

CPU capacity / utilisation
Memory capacity / utilisation
Network capacity / utilisation
Storage capacity / utilisation

I am familiar with the basic concepts of CPU, memory, and storage capacity monitoring: check CPU and memory utilisation to ensure they are not near 100% of capacity, or you may face performance issues and even system unavailability; similarly with storage, where using 90% of capacity could lead to performance issues, potential data loss, system unavailability, etc.
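
For concreteness, here is a minimal sketch of how those four utilisation figures can be sampled on a single server. It uses Python's psutil library purely as an illustration (any monitoring agent reads the same underlying counters), and the thresholds are the rough ones described above, not fixed standards:

# Illustrative snapshot of the four capacity metrics on one server (psutil).
import psutil

cpu_pct = psutil.cpu_percent(interval=1)    # CPU utilisation sampled over 1 second
mem = psutil.virtual_memory()               # memory capacity / utilisation
disk = psutil.disk_usage("C:\\")            # storage capacity / utilisation (system drive)
net = psutil.net_io_counters()              # cumulative network bytes sent/received since boot

print(f"CPU:     {cpu_pct:.0f}% used")
print(f"Memory:  {mem.percent:.0f}% used of {mem.total / 2**30:.1f} GiB")
print(f"Storage: {disk.percent:.0f}% used of {disk.total / 2**30:.1f} GiB")
print(f"Network: {(net.bytes_sent + net.bytes_recv) / 2**20:.0f} MiB transferred since boot")

# Rough capacity warnings along the lines described above
if cpu_pct > 90 or mem.percent > 90:
    print("WARNING: CPU or memory close to 100% capacity")
if disk.percent > 90:
    print("WARNING: storage above 90% capacity")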

However, from a server's "network utilisation" standpoint I am not entirely sure what this is measuring; the server itself doesn't appear to have any inherent "network capacity limit". So my questions (sorry if they are at a basic level) are:

1) What, in terms of MB per second, is this network utilisation metric actually measuring?
2) What would be defined as a dangerous utilisation percentage?
3) If the server is running at a dangerous threshold, what can be done to improve things? You can add memory, CPU, or storage capacity to a server, but what, if anything, can you do or add to the server for network utilisation capacity problems? And if nothing, which other area of the infrastructure do you need to look at when the server is using excessive network bandwidth in MB/s?
4) What causes a server to use excessive network bandwidth in MB/s? Can specific events trigger it, or is it linked to hardware that is insufficient for the server's requirements?

Please keep your answers tech-friendly and management-friendly if possible.
Asked by pma111.
 
Kimputer commented:
1) Network utilisation can be expressed in Mbit/s or MB/s, but also as a percentage of the maximum speed of the hardware or of the negotiated link speed (see the sketch after this list).
2) There are no dangerous thresholds with networking as such. If it runs at 100%, it just means people have to wait longer for their files. That happens a lot when big files are copied to and from the server.
3) If it is always at 100% and people suffer from the waiting (for example graphic designers or video producers), the only ways to improve it are:
a) upgrade the whole network (bring it to gigabit speeds), which means all network switches and all network cards must be upgraded;
b) improve the speed of the file server, as gigabit speeds sometimes surpass hard disk speeds, so put faster hard disks in the server.
4) Big file transfers.
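
To make point 1 concrete, here is a minimal sketch of measuring a NIC's utilisation as a percentage of its negotiated link speed, again using Python's psutil library purely as an illustration (on Windows, Performance Monitor exposes the same idea via the Network Interface counters "Bytes Total/sec" against "Current Bandwidth"). The adapter name and sampling window are assumptions you would adjust:

# Illustrative: network throughput as a percentage of the NIC's negotiated speed.
import time
import psutil

NIC = "Ethernet"     # hypothetical adapter name; psutil.net_if_stats() lists the real ones
INTERVAL = 5         # sampling window in seconds

link_mbps = psutil.net_if_stats()[NIC].speed    # negotiated link speed in Mbit/s (0 if unknown)

before = psutil.net_io_counters(pernic=True)[NIC]
time.sleep(INTERVAL)
after = psutil.net_io_counters(pernic=True)[NIC]

# bytes moved during the window, converted to megabits per second
delta_bytes = (after.bytes_sent - before.bytes_sent) + (after.bytes_recv - before.bytes_recv)
mbps = delta_bytes * 8 / 1_000_000 / INTERVAL

utilisation = (mbps / link_mbps * 100) if link_mbps else float("nan")
print(f"{NIC}: {mbps:.1f} Mbit/s of a {link_mbps} Mbit/s link = {utilisation:.1f}% utilised")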
 
John (Business Consultant, Owner) commented:
CPU should generally run under 25% unless big jobs are running (and then that should be temporary).

DISK should normally have at least 25% free for updates and temporary files.

MEMORY should have at least 25% free once all processes and jobs are running.

These are very general points I look at on client servers.

NETWORK usage is a function of who is using it, and it often shows up on the Internet link when people download large items like videos. Kimputer gave a good description above.
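
As a sketch only, those rough "25%" rules of thumb could be expressed as checks against sampled values (psutil again, purely illustrative; the numbers are general guidance, not hard limits):

# Illustrative checks for the rough 25% headroom guidance above.
import psutil

cpu_pct = psutil.cpu_percent(interval=1)
mem = psutil.virtual_memory()
disk = psutil.disk_usage("C:\\")

alerts = []
if cpu_pct > 25:
    alerts.append(f"CPU at {cpu_pct:.0f}% - fine for a big job, worth a look if sustained")
if (100 - disk.percent) < 25:
    alerts.append(f"Disk only {100 - disk.percent:.0f}% free - under the ~25% headroom for updates and temp files")
if mem.available / mem.total < 0.25:
    alerts.append(f"Memory only {mem.available / mem.total:.0%} free with the current workload")

print("\n".join(alerts) if alerts else "Within the rough rules of thumb")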
 
Michael Rojek commented:
The thresholds you set will vary based on how and for what the device is being used. If you get the trial of NetCrunch, it has predefined "Monitoring Packs" with pre-set thresholds and alerts for various types of devices, including servers. You can use those thresholds to get started and to give yourself an idea. But perhaps a better option would be to set up some baseline thresholds and then be alerted when there is a 5% or 10% change from that baseline, with spikes removed.
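
A minimal sketch of that baseline-plus-deviation idea, with the baseline taken as the median of recent samples so that one-off spikes are ignored (the sample numbers and the 10% tolerance are made up for illustration):

# Illustrative baseline/deviation alert for a utilisation metric.
from statistics import median

def baseline_alert(history, current, tolerance=0.10):
    """Return an alert string if `current` deviates from the historical
    baseline by more than `tolerance` (a fraction), else None."""
    base = median(history)                  # median damps one-off spikes
    if base == 0:
        return None
    change = abs(current - base) / base
    if change > tolerance:
        return f"utilisation {current:.0f}% is {change:.0%} away from baseline {base:.0f}%"
    return None

# e.g. recent hourly network-utilisation samples for one NIC (made-up numbers)
history = [12, 14, 11, 13, 55, 12, 13, 14, 12, 11]   # the 55 is a spike the median ignores
print(baseline_alert(history, current=20))            # flags the drift
print(baseline_alert(history, current=13))            # None - within the normal range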