ANS Helpdesk (Ireland) asked:

Is a 10Gb network the answer?

This is more a request for advice than a problem to solve. We have a client with a single host server running two VMs via Hyper-V. The server is a good-spec HP with 64GB of RAM and 16 cores. The two VMs are a primary domain controller and an application server. The application server runs a client/server legal document management system. There are 25 users, and some have complained about the speed of the software: it can hang, crash, etc. The client machines are also well specified: 16GB RAM, SSD, i7, Windows 10 Pro, gigabit NIC.

My question is whether it would be worthwhile in this scenario to put in a 10Gb network: purchase a 10Gb card for the server and a new switch capable of running a 10Gb backbone.

The files they open in the legal system are mainly Office and PDF documents.

I would be interested to know if anyone has done this to address this type of issue before.
atlas_shuddered (United States) replied:

Have you monitored resource utilization during the problem times? Check the NIC utilization during the same window. Is it becoming saturated? (Over 70% is bad, over 80% is terrible, over 90%, well, you get the point.) I'd also sniff the port the server is connected to and see if you are seeing errors (retransmits, resets, etc.). If circuit utilization is low but you still see errors, check for microbursting on the interface.

I'd also check memory and processor utilization, disk I/O, etc. The worst place to be is making a recommendation, passing on the bill, and not having a working solution on the back side. A sketch of that kind of polling follows below.
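Not from the original answer, but as one illustration of this kind of polling: a minimal Python sketch using psutil. The adapter name "Ethernet" and the 1Gb link speed are assumptions to adjust for the real environment; Windows Performance Monitor counters would do the same job.

```python
# Minimal monitoring sketch (assumptions: Python 3, psutil installed,
# adapter named "Ethernet", 1Gb link). Run on the host during a problem
# window and watch for sustained NIC/CPU/memory/disk saturation.
import time
import psutil

NIC = "Ethernet"          # hypothetical name; check psutil.net_io_counters(pernic=True).keys()
LINK_BPS = 1_000_000_000  # assumed 1Gb/s link speed
INTERVAL = 5              # seconds between samples

prev_net = psutil.net_io_counters(pernic=True)[NIC]
prev_dsk = psutil.disk_io_counters()
while True:
    time.sleep(INTERVAL)
    net = psutil.net_io_counters(pernic=True)[NIC]
    dsk = psutil.disk_io_counters()
    tx = (net.bytes_sent - prev_net.bytes_sent) * 8 / INTERVAL      # bits/s out
    rx = (net.bytes_recv - prev_net.bytes_recv) * 8 / INTERVAL      # bits/s in
    rd = (dsk.read_bytes - prev_dsk.read_bytes) / INTERVAL / 1e6    # MB/s read
    wr = (dsk.write_bytes - prev_dsk.write_bytes) / INTERVAL / 1e6  # MB/s written
    print(f"cpu {psutil.cpu_percent():5.1f}%  mem {psutil.virtual_memory().percent:5.1f}%  "
          f"nic tx/rx {100*tx/LINK_BPS:5.1f}%/{100*rx/LINK_BPS:5.1f}%  "
          f"disk r/w {rd:6.1f}/{wr:6.1f} MB/s")
    prev_net, prev_dsk = net, dsk
```

If the NIC columns sit well below the 70% threshold mentioned above while users are complaining, the network is unlikely to be the culprit.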
Very roughly, a single "rust spinner" desktop hard drive or "nearline" server drive can use all the bandwidth of a 1Gb network connection. Assuming you have a RAID subsystem of at least three "online" server drives, and a reasonable SSD in the client machines, you should be able to manage 3Gb or so over a 10Gb LAN. These figures are only for a single copy of a large file; copying a heap of small files, or having several users copying at once, will be considerably slower.
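To put rough numbers on that claim (the drive throughput below is an assumed typical figure, not a measurement from this system):

```python
# Back-of-the-envelope sanity check for the figures above.
GBIT = 1_000_000_000 / 8  # bytes/s carried by one gigabit/s
hdd_seq = 150e6           # ~150 MB/s sequential for one spinning drive (assumption)

print(f"1Gb link carries at most ~{GBIT / 1e6:.0f} MB/s, "
      f"so one drive at {hdd_seq / 1e6:.0f} MB/s can saturate it")
print(f"3-drive RAID ~ {3 * hdd_seq * 8 / 1e9:.1f}Gb/s, "
      f"i.e. ~{3 * hdd_seq * 8 / 1e9 / 10:.0%} of a 10Gb link")
```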

If you have a 10Gb connection from the server to a switch, and several client machines with 1Gb NICs simultaneously copying data, it MAY go a little faster than if the server connection were 1Gb, but not dramatically.

Common practice is to have 10Gb connections between servers and switches, and 1Gb to workstations. If you regularly copy large files from server to server, then 10Gb NICs make sense; it is pretty common to do disk-to-disk backups this way, and three times the throughput can make management of backup windows easier. Also, 10Gb requires CAT6 cabling, which is restricted to 55m, or CAT6a or CAT7 to go 100m. Unless your cabling is near new and in excellent condition, it will give all sorts of horrible headaches running 10Gb; existing CAT5 infrastructure might work fine at 1Gb, but not at 10Gb. In a server room it is easy to swap in a few CAT6 fly leads, and lengths there are usually only a few metres anyway.

So yes, upgrading to 10Gb NICs may give a slight boost to throughput, but nothing dramatic. Given the details provided, I would not expect a noticeable difference to the end-user experience. You really need to figure out what is occurring here: unless each lockup is accompanied by a steady throughput of close to 1Gb on the workstation's or server's NIC, upgrading to 10Gb won't fix it. Something like the logging sketch below would confirm or rule that out.
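As a hypothetical way to gather that evidence: a timestamped throughput log on an affected workstation (same assumptions as the earlier sketch: Python 3, psutil, a placeholder adapter name). Match the timestamps against user-reported hangs afterwards.

```python
# Sketch: log workstation NIC throughput to CSV so reported hangs can be
# correlated with network load. Adapter name is a placeholder.
import csv
import datetime
import time
import psutil

NIC = "Ethernet"  # hypothetical; pick the real name from psutil.net_io_counters(pernic=True)

with open("nic_log.csv", "w", newline="") as f:
    out = csv.writer(f)
    out.writerow(["time", "mbit_in", "mbit_out"])
    prev = psutil.net_io_counters(pernic=True)[NIC]
    while True:
        time.sleep(5)
        cur = psutil.net_io_counters(pernic=True)[NIC]
        out.writerow([datetime.datetime.now().isoformat(timespec="seconds"),
                      round((cur.bytes_recv - prev.bytes_recv) * 8 / 5 / 1e6, 1),
                      round((cur.bytes_sent - prev.bytes_sent) * 8 / 5 / 1e6, 1)])
        f.flush()
        prev = cur
```

If the log shows nowhere near 1000Mbit during the hangs, the bottleneck is elsewhere (disk, application, or the AD issues mentioned below).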
ANS Helpdesk (asker):

Thanks all. The server in question is experiencing major AD issues and we need to rebuild it and create a new domain. Normally we'd run a Dell EMC Live Optics analysis to determine the likes of I/O and performance, but the server isn't stable enough. We will rebuild and run the analysis, but I suspect adding SSDs will be the best value for money.

Thanks for the update