sfabs

asked on

How to choose the best switch configuration: 10 gigabit or 1 gigabit?

I'm a little confused in this area; I have been told a few things, but I am struggling to find clear documentation.

Here is my problem.

I am trying to spec up the best switch configuration for some client sites...

The user machines all have 1 gigabit network cards.

I have been given differing advice from people...

1. One person told me that if all my clients are 1 gig, there is no point installing a 10 gig card in my servers feeding (for example) a 24-port 1 gig switch with 4 x 10 gig ports (for the servers), as the clients cannot take advantage of, or subdivide, the 10 gig traffic... is this true?
2. Others have said this is wrong.
3. Someone else suggested I am best off with a high-speed core switch for the servers (10 gig) and a 1 gig client edge switch (with multiple 1 gig connections from the core switch to the edge switch)?

I'm happy to invest in the right configuration. On one site I have about 50 one-gigabit clients and 4 servers, and I'd gladly buy a good core and edge switch as long as I know I'm going to get the best speeds I can to my clients.

Can anyone give me some clear advice on the subject please?
Kash

To get 10G traffic between servers and client workstations at 10G speed, you need 10G-capable hardware at both ends (and on every link in between).

If you have a 10Gb connection from a server into a 1Gb switch, traffic to the clients will run at 1Gb, i.e. the speed of the slowest device in the path.

If you have special requirements, e.g. a backup device that supports a 10G card, you can connect the server to the backup device directly and data will transfer at 10G.

Unless you have the budget to upgrade all workstations to 10G NICs, plus the switches and all other intermediary equipment, staying at 1G is what I would suggest.

However, to optimise performance through configuration, you can invest in managed switches, which will allow you to set up a topology that matches your network.
Hope this helps.
What you really need to be concerned with is multiple clients sending traffic up to 1G to these servers simultaneously.

If you have or will have sufficient traffic to clog the pipes, install 10G.

The only way that you will know for sure is to monitor current port utilization on all switches.
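
If you want a quick first look before pulling proper stats off the switches, here is a minimal Python sketch that samples a Linux server's NIC byte counters and estimates link utilisation; the interface name eth0 and the 1 Gb/s line rate are assumptions, and the switch's own port counters (web UI or SNMP) are still the numbers to trust:

import time

IFACE = "eth0"            # hypothetical interface name
LINK_BPS = 1_000_000_000  # assumed 1 Gb/s line rate

def read_bytes(direction):
    # direction is "rx" or "tx"; counters live under /sys/class/net on Linux
    path = f"/sys/class/net/{IFACE}/statistics/{direction}_bytes"
    with open(path) as f:
        return int(f.read())

def utilisation(interval=5):
    rx1, tx1 = read_bytes("rx"), read_bytes("tx")
    time.sleep(interval)
    rx2, tx2 = read_bytes("rx"), read_bytes("tx")
    rx_bps = (rx2 - rx1) * 8 / interval
    tx_bps = (tx2 - tx1) * 8 / interval
    return rx_bps / LINK_BPS * 100, tx_bps / LINK_BPS * 100

if __name__ == "__main__":
    rx_pct, tx_pct = utilisation()
    print(f"rx {rx_pct:.1f}%  tx {tx_pct:.1f}% of link capacity")

Run it during the busiest part of the day; if a server's transmit side sits near 100% for long stretches, that is the case for a faster uplink.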
Kimputer

Thinking only about gigabit speeds and nothing else isn't very budget-efficient. You can invest thousands and thousands of dollars in the highest-speed switches, but if that company has, say, an Exchange Server 2010 with only 8GB of RAM, Outlook will grind to a standstill no matter what switches you buy.
The speeds the switches are capable of mostly exceed whatever the other hardware (mostly servers or storage devices) can deliver. Some disks max out at 20MB/s; in a networking environment serving a few people copying files concurrently, that already drops to a few MB/s per user. So while it is nice that you have the budget for 10Gbit equipment, have a look at the other hardware, have a look at how people are working (do they even copy big files across the network?), and ask whether there are other bottlenecks (slow servers, slow hard disks).
I'd rather have a 1Gbit network with the extra budget spent on solving bottlenecks than spend all the budget on 10Gbit and leave the bottlenecks as they are.
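
To put rough numbers on that point (the 20MB/s disk figure and the user count are assumptions for illustration, not measurements), a quick Python calculation:

# Back-of-the-envelope: the disk, not the network, is often the limit.
disk_mb_s = 20          # assumed slow-disk throughput, MB/s
users = 5               # assumed number of concurrent file copiers
gigabit_mb_s = 1_000_000_000 / 8 / 1_000_000   # ~125 MB/s on a 1 Gb link

per_user_from_disk = disk_mb_s / users
print(f"1 Gb/s link capacity     : ~{gigabit_mb_s:.0f} MB/s")
print(f"Disk shared by {users} users: ~{per_user_from_disk:.0f} MB/s each")
# The 1 Gb link offers ~125 MB/s, but this disk can only feed ~4 MB/s
# per user, so upgrading the switch to 10 Gb changes nothing for them.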
SOLUTION
Bill Bach

If it were me, I would be looking at 10Gb between the switches and servers, with 1Gb to the workstations. An 8-port 10Gb switch, with 4 connections to the servers and 2 to the other switches, should work well.

If you are doing any sort of transfer between RAID volumes, like a disk-to-disk backup, then a pair of modern servers will easily saturate a 1Gb connection. Also, multiple users pulling down huge files would be noticeably slower if there were a single 1Gb link in there anywhere. On most sites, however, this is a rare thing; unless you have users who routinely copy movies around or something similar, the difference would not be noticeable.
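
To put rough numbers on the backup case (the 2TB backup size, the sustained fraction of line rate and both link speeds are assumed figures), a quick Python sketch of how long the copy takes when the network is the bottleneck:

# Transfer time for a disk-to-disk backup when the link is the limit.
backup_tb = 2.0                     # assumed backup size
backup_bits = backup_tb * 1e12 * 8

for link_gbps in (1, 10):
    effective_bps = link_gbps * 1e9 * 0.7   # assume ~70% of line rate sustained
    hours = backup_bits / effective_bps / 3600
    print(f"{link_gbps:>2} Gb/s link: ~{hours:.1f} hours for {backup_tb} TB")

On those assumed numbers the same backup drops from roughly six hours to well under one, which is exactly where 10Gb server links pay off.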
Good thread, there's a lot of advice here.
I have installed switches and configurations where it was necessary to put 10G kit in, based on an initial analysis of the business, but there are other places where we are fine with 1Gb or even 10/100.

Good luck.
sfabs

ASKER

Hello Everyone

Thank you for your input.

OK... this is part of a site upgrade. The servers and switches have reached end of life, and the clients have already been upgraded. The servers will wait till next year; the switches won't, as I am already short of switches and ports, so it's time to deal with them.

We have a healthy budget, so I am interested in good equipment and configurations, best practice and future-proofing, as long as it doesn't cost the world.

BillBach

This was my original thought. 10 gig ports on the switch connecting to the servers and 1 gig to all clients.

But someone I respect suggested I investigate further, as they didn't believe that with 1 gig clients you could simply sum up to 10 gig. Take the example: 10 x 1 gig clients each using 800Mb/sec of data = 8 gig, therefore a 10 gig connection makes sense... right? They said they thought this was incorrect, but weren't sure, and suggested I investigate... so here I am... true or false? Or does it depend on the switch specification?
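
To write that arithmetic out (the 800Mb/sec per client is just an assumed figure for the example):

# Aggregate client demand vs. a single server uplink.
clients = 10
per_client_mbps = 800          # assumed sustained demand per 1 gig client

aggregate_gbps = clients * per_client_mbps / 1000
for uplink_gbps in (1, 10):
    verdict = "fits" if aggregate_gbps <= uplink_gbps else "oversubscribed"
    print(f"{clients} clients x {per_client_mbps} Mb/s = {aggregate_gbps} Gb/s "
          f"-> {uplink_gbps} Gb/s server uplink: {verdict}")

That is the assumption I want confirmed.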

Malmensa

My investigation led me to something I believe is similar to what you are saying about working to best practice.

For example: an 8-port 10 gig core switch, with 4 x 10 gig connections to the 4 servers; 2 x 10 gig connections to the edge switch in building A (the edge switch has 2 x 10 gig ports to communicate with the core switch, and the remaining ports are 1 gig for clients); and another 2 x 10 gig connections to the edge switch in building B (again, 2 x 10 gig uplink ports, with the remaining ports at 1 gig for clients).

This is where my mind was going... what do you think? What switch specifications should I be looking out for? Managed? Switch backplane limit?
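
As a quick sanity check on that layout, a small Python sketch of the edge-to-core oversubscription ratio (the 24 client ports per edge switch are an assumption following the example above):

# Edge-to-core oversubscription for the proposed design.
client_ports = 24          # assumed 1 gig client ports on one edge switch
client_gbps = 1
uplinks = 2                # 10 gig uplinks from edge to core
uplink_gbps = 10

downstream = client_ports * client_gbps      # worst-case client demand, Gb/s
upstream = uplinks * uplink_gbps             # capacity back to the core, Gb/s
ratio = downstream / upstream
print(f"{downstream} Gb/s of clients over {upstream} Gb/s of uplink "
      f"= {ratio:.1f}:1 oversubscription")
# The closer to 1:1, the less contention on the uplinks when many
# clients hit the servers at once.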
SOLUTION
(Solution text available to members only.)
sfabs

ASKER

Brilliant... just brilliant BillBach

Exactly the information I was looking for... the analogy was perfect as well, cars, motorways... clear as day for me, thank you.

Just quickly: you have mentioned fibre and bonding to get 20 gig to the switches. If the run length (90m) means Cat 6 is possible, is it not better (cheaper) to stick with Cat 6 rather than fibre? And is bonding the only way to make use of 2 connections, or won't a managed switch balance traffic across 2 connections on its own?
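
From what I've read so far, I understand the bonding caveat roughly like this, written as a deliberately simplified Python sketch of LACP-style flow hashing (real switches hash on MAC/IP/port fields; this is not any vendor's actual algorithm): a single transfer stays on one member link, while many flows spread across both.

import hashlib

LINKS = 2   # two bonded member links

def pick_link(src_ip, dst_ip):
    # Hash the flow's endpoints; every packet of that flow takes the same link.
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return digest[0] % LINKS

flows = [("192.168.1.10", "192.168.1.200"),
         ("192.168.1.11", "192.168.1.200"),
         ("192.168.1.12", "192.168.1.201")]
for src, dst in flows:
    print(f"{src} -> {dst}: member link {pick_link(src, dst)}")
# Each flow sticks to one member link, so a single client-to-server copy
# cannot exceed one link's speed; aggregate traffic from many clients can
# use both.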

That's the last link in my chain, then I'm ready to go...

Thanks again
ASKER CERTIFIED SOLUTION
(Solution text available to members only.)
sfabs

ASKER

Outstanding Knowledge from the Expert, absolutely outstanding!