How to choose the best switch configuration: 10 gigabit or 1 gigabit?

I'm a little confused in this area and have been told a few things but am struggling to find clear documentation in this field.

Here is my problem.

I am trying to spec up the best switch configuration for some client sites...

The user machines all have 1 gigabit network cards.

I have been given differing advice from people...

1. One person told me that if all my clients are 1 gig, there is no point installing a 10 gig card in my servers feeding (for example) a 24-port 1 gig switch with 4 x 10 gig ports (for the servers), as the clients cannot take advantage of or subdivide the 10 gig traffic... is this true?
2. Others have said this is wrong.
3. Someone suggested I am best off with a high-speed core switch for the servers (10 gig) and a 1 gig client edge switch (with multiple 1 gig connections from the core switch to the edge switch).

I'm happy to invest in the right configuration. On one site I have about 50 one-gigabit clients and 4 servers, and I'd be happy to invest in a good core and edge switch as long as I know I am going to get the best speeds I can to my clients.

Can anyone give me some clear advice on the subject please?
sfabsAsked:

Kash2nd Line EngineerCommented:
In order to support 10G traffic from servers to client workstations at 10G speed, you need identical hardware on both ends.

If you have a 10G connection from a server to a 1G switch, the link will run at 1G, the speed of the slowest device.

If you have special requirements, e.g. a backup device that supports a 10G card, you can use a direct connection from the server to the backup device and data will transfer at 10G.

Unless you have the budget to upgrade all workstations to 10G NICs and switches, and all other intermediary equipment, then staying at 1G is what I would suggest.

However, for configuration purposes, to optimize performance you can invest in managed switches, which will allow you to set up topologies to match your network.
Hope this helps.
Jan SpringerCommented:
What you really need to be concerned with is multiple clients simultaneously sending traffic to these servers at up to 1G each.

If you have or will have sufficient traffic to clog the pipes, install 10G.

The only way that you will know for sure is to monitor current port utilization on all switches.
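As an illustration of what that monitoring boils down to, here is a minimal sketch (plain Python, with made-up counter values and polling interval) of how two readings of an interface byte counter, such as the values most managed switches expose via SNMP, translate into a utilization figure:

```python
# Minimal sketch: average link utilization from two readings of an interface
# byte counter (e.g. values polled from a managed switch via SNMP).
# The counter values, interval and link speed below are illustrative only.

def utilization_pct(bytes_start: int, bytes_end: int,
                    interval_s: float, link_speed_bps: float) -> float:
    """Average utilization of one link over the polling interval, in percent."""
    bits_moved = (bytes_end - bytes_start) * 8
    return 100.0 * bits_moved / (link_speed_bps * interval_s)

# Example: a 1 Gbps server port that moved ~30 GB of data in a 5-minute window.
print(f"{utilization_pct(0, 30_000_000_000, 300, 1e9):.0f}%")  # ~80% -> getting tight
```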
KimputerCommented:
Thinking only in gigabit speeds and nothing else isn't very budget efficient. You can invest thousands and thousands of dollars in the highest-speed switches, but if the company has an Exchange Server 2010 with 8GB RAM, Outlook will still grind to a standstill no matter what switches you buy.
The speeds the switches are capable of mostly exceed whatever the other hardware (mostly servers or storage devices) is capable of. Some disks max out at 20MB/s. In a networking environment serving a few people, that already drops to a few MB/s per user (if copying files concurrently). So while it is nice that you have budget for 10Gbit equipment, have a look at the other hardware, have a look at how people are working (do they even copy big files across the network?), and ask whether there are other bottlenecks (slow servers, slow hard disks).
I'd rather have a 1Gbit network, with the extra budget spent on solving bottlenecks, than use all the budget for 10Gbit and leave the bottlenecks as they are.
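To put rough numbers on that argument (the 20 MB/s disk figure is from the comment above; the user count is an assumption for the sake of the example):

```python
# Illustrative only: where the bottleneck sits with a slow disk behind the network.
disk_mb_s = 20                # example disk throughput from the comment above
concurrent_users = 5          # assumed number of users copying files at the same time
per_user_mb_s = disk_mb_s / concurrent_users
gigabit_link_mb_s = 1000 / 8  # a 1 Gbps link is roughly 125 MB/s

print(f"{per_user_mb_s:.0f} MB/s per user vs ~{gigabit_link_mb_s:.0f} MB/s of link capacity")
# 4 MB/s per user: the disk, not the 1 Gbps link, is the limiting factor here.
```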

Bill BachPresident and Btrieve GuruCommented:
I disagree with Kash on this one.  You CAN benefit from 10Gbps even if the workstations are at 1Gbps, as JanSpringer indicated.  

Switches create unique collision domains, thus allowing each connection to operate at full speed in each direction (theoretically).  On a typical "file server", however, data flow is often unidirectional.  Imagine a user downloading a large file -- there are a few small requests going to the server, and much data flowing from the server to the workstation.  The reverse will be true when the user is copying a file up to the server.  Of course, when the user is idle, then nothing is going over the wire.  Because of this, Kash's statement is usually true -- you don't need the extra speed.  However, if you have 10 people all downloading very large files at the same time, each user will want to get the file as fast as possible.  If each user is able to get data streamed at 0.8Gbps (there will always be some latency and other delays to prevent full utilization), then the aggregate data that is being sent from the server to the switch is 8Gbps -- and having the faster pipe will make sense here.  
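As a back-of-envelope version of that aggregate-traffic argument (the 0.8 Gbps per client figure is the rough number used in the paragraph above):

```python
# Rough aggregate-demand calculation for simultaneous downloads from one server.
clients = 10
per_client_gbps = 0.8        # each 1 Gbps client achieving ~80% of line rate
aggregate_gbps = clients * per_client_gbps

for uplink_gbps in (1, 10):  # server connected at 1 Gbps vs 10 Gbps
    verdict = "OK" if uplink_gbps >= aggregate_gbps else "bottleneck"
    print(f"{aggregate_gbps:.0f} Gbps of demand on a {uplink_gbps} Gbps uplink: {verdict}")
```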

In short, you do need to look at the load.  If you are a printing/graphics/movie house and moving 40GB files from the workstations to the server all the time, then the faster network connection is definitely worth it.  As Kimputer indicates, though, if you are doing some basic email, then you'll never notice the extra speed.  The same will be true for a client/server database environment, where the applications are running on the client and the database is on the server.  A system like this typically uses lots of very small packets, and the overall performance is more related to the network latency (the time it takes for a packet to traverse the network) than the bandwidth.

If the users are able to offer some advice on what they are doing (or if you can accurately measure the load on the network), then you might be able to pick a solution out in advance.  If they KNOW that they are hammering the server connections, then plan for 10Gbps links to each server.  If they KNOW that they are mostly idle, then save the money.  For my money, though, I would hedge my bet and use a 1Gbps switch which supports one or more 10Gbps uplink ports -- either built-in or as add-ons (such as GBICs).  I'm thinking of something like the HP 1950 switches, though you can likely find something from your favorite vendor.  With a capable solution (and the appropriate management capability to see traffic stats, too), you can start your servers at 1Gbps connections, and if you see the load is much over 50-60%, then you can think about upgrading the server NICs one at a time.
Mal OsborneAlpha GeekCommented:
If it were me, I would be looking at 10Gb between the switches and servers, with 1Gb to the workstations.  An 8-port 10Gb switch with 4 connections to the servers and 2 to the other switches should work well.

If you are doing any sort of transfers between RAID volumes, like a Disk to Disk backup, then a pair of modern servers will easily saturate a 1Gb connection.  Also multiple users pulling down huge files would be noticeably slower if there was a single 1Gb link in there anywhere.  On most sites, however, this is a rare thing; unless you have users who copy movies around routinely or something, the difference would not be noticeable.
Kash2nd Line EngineerCommented:
Good thread, there's a lot of advice here.
I have installed switches and configurations where it was necessary to put 10G kit in based on an initial analysis of the business, but there are other places where we are fine with 1G or even 10/100.

Good luck.
sfabsAuthor Commented:
Hello Everyone

Thank you for your input.

OK... this is part of a site upgrade. The servers and switches have reached end of life, and the clients have been upgraded. The servers will wait till next year. The switches won't, as I am already short of switches and ports, so it's time to deal with them.

We have a positive budget. So I am interested in good equipment and configurations, best practices and future proofing. As long as it doesn't cost the world.

BillBach

This was my original thought. 10 gig ports on the switch connecting to the servers and 1 gig to all clients.

But someone I respect suggested I investigate further, as they didn't believe that with 1 gig clients you could simply sum up to 10 gig. You made the example of 10 x 1 gig clients each using 800Mb/sec of data = 8 gig, therefore a 10 gig connection makes sense... right? They said they thought this was incorrect, but weren't sure and suggested I investigate... so here I am... true or false? Or does it depend on switch specification?

Malmensa

Investigation led me to what I believe is similar to what you are saying when working to best practice.

For example... an 8-port 10 gig core switch: 4 x 10 gig connections to the 4 servers, 2 x 10 gig connections to the edge switch in building A (the edge switch has 2 x 10 gig ports to communicate with the core switch, and the remaining ports are 1 gig for clients), and another 2 x 10 gig connections to the edge switch in building B (same arrangement: 2 x 10 gig uplink ports, remaining ports 1 gig for clients).

This is where my mind was going... what do you think? What switch specifications should I be looking out for? Managed? Switch backplane limit?
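As a quick sanity check on the port count of that design (figures taken straight from the layout above):

```python
# Port-budget check for the proposed 8-port 10 Gb core switch (figures from above).
core_ports = 8
server_links = 4             # 4 x 10 Gb to the servers
edge_uplinks = 2 * 2         # 2 x 10 Gb each to the edge switches in buildings A and B

used = server_links + edge_uplinks
print(f"Used {used} of {core_ports} core ports, {core_ports - used} spare")
# 8 of 8 used: no headroom left on the core for new servers, storage or extra uplinks.
```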
Bill BachPresident and Btrieve GuruCommented:
If you replace the servers next year, you can easily justify 10Gb for them.  Using a 10Gb backbone to the edge switches in each building is good, too.  I DEFINITELY recommend running a minimum of TWO 10Gb fiber connections from the core to each building, but it is up to you whether you want to bond them together into a 20Gb channel, use them for fail-over, or simply keep one of the fibers as a spare connection.  (If you intend on using both, then you might want to run 3 or 4 fiber links so that you have a spare.  Running one cable can be expensive -- but running 4 cables is only slightly more than running the first, since the labor is just about the same.)

As for the core switch, if you plan on using bonded connections, this will leave you with no spare ports on the core switch -- and no room for expansion for new servers, storage, or new drops.  When evaluating core switches, look at possibly getting a 12-port or 16-port.  Again, the price difference is usually minimal when going to the next size up now, but buying a second 8-port 10Gb switch at some time in the future and linking them together will likely cost more (and leave you with only 14 usable ports).

Now, back to the switching theory.  Ultimately, the total throughput possible for a switch is known as the "capacity" of the switching fabric.  It is usually measured in frames per second, not in bits per second.  This is because each frame requires overhead in the switch CPU, as well as other overhead (interframe gap, etc.).  

Imagine a superhighway with 10 lanes of traffic.  There is a certain number of cars per hour that can traverse the highway at speed.  There is always some spacing in between the cars (to avoid collisions), so if you take a snapshot from the top down, you will always see free space on the road -- it'll never be packed 100%.  (Imagine if all the cars on the highway were traveling at full speed with only 6 inches between them -- yikes!)  This is the overhead.  If you have trucks on the road (i.e. larger packets), then you will see less of the road from the overhead view (i.e. better utilization), but the number of vehicles (packets) on the road may be just about the same.

Now, each network segment (from a user PC) can handle only two frames at a time (one inbound and one outbound, also known as full duplex).  More commonly, though, user PCs simply request data and then sit there idle waiting for the reply, so many conversations are inherently half-duplex.  When a packet comes into the switch, it gets processed, and the target port is selected based on the MAC address and the MAC table kept internally by the switch.  The packet then gets retransmitted down the proper link.

This is like an on-ramp, which allows for one stream of cars (packets) to come onto the highway.  As long as there is room on the highway (switching fabric), cars (packets) can be received by the road (switch), and there is plenty of open room (CPU time) to send the car (packet) to the outbound port.

Now, remember that each cable will only accept a single packet at a time.  This is the argument AGAINST using 1Gbps connections to the server.  If you have 10 lanes of traffic, and they all have to exit at the end of the road through a single lane, then a bottleneck will occur.  You want a larger off-ramp to the busiest location because it will support 10 lanes of exit traffic at the same time.  With a single on-ramp, you can accept a full stream of cars, and they can arrive at full speed at the destination.  With 10 on-ramps, each running at full speed, you can have all 10 arriving at the destination via the larger off-ramp at full speed, too.

[Aside: In reality, you don't get 10 packets at EXACTLY the same time, right?  You still get only one packet at a time on the 10Gbps connection.  However, you can transmit a single packet in 1/10th the time it takes to transmit a packet at the slower speed.  As such, you can process that many more packets in the same time frame.  Thus the CAPACITY will be roughly 9x higher (you also lose some efficiency due to the interpacket gap and other overhead).]

Now, back to the frames versus throughput/speed/bandwidth.  When you look at the switch specs, you'll see frame counts per second, but you should ALSO see the frame size used for the calculation by the vendor, as this matters, too.  Remember that a 64-byte frame requires the same switch CPU overhead as a 1500-byte frame.  However, sending 1000 64-byte frames is only 64KB of throughput, whereas sending 1000 1500-byte frames is 1500KB of throughput.  This is yet another reason why many vendors support using Jumbo frames.  Getting to 8KB in each frame would allow you to transfer 8000KB in the same packet count, and thus the same CPU overhead.  
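A quick worked example of that frame-size point (frame counts and sizes as used in the paragraph above):

```python
# Same number of frames (same per-frame switch CPU overhead), very different throughput.
frames = 1000
for frame_bytes in (64, 1500, 8000):     # small, standard and jumbo frame sizes
    kb_moved = frames * frame_bytes / 1000
    print(f"{frames} frames x {frame_bytes} bytes = {kb_moved:.0f} KB of payload")
```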

You can also look at the distribution of network traffic.  Do you have equal loads on all 4 servers?  If so, then you want a switch with a switching fabric capacity that can handle all of them running at full speed.   If your switching fabric will only support enough traffic for a single 10Gb connection, then each connection will be limited by the total throughput capability of the switch.  
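Here is a hedged sketch of that fabric-capacity check; the fabric figure below is a made-up example, not a value from any particular switch datasheet:

```python
# Does the switching fabric cover all server ports running flat out at once?
server_ports = 4
port_speed_gbps = 10
required_gbps = server_ports * port_speed_gbps   # 40 Gbps of simultaneous server traffic

fabric_capacity_gbps = 160                       # assumed figure from a vendor spec sheet
print("non-blocking for the servers" if fabric_capacity_gbps >= required_gbps
      else "the fabric, not the ports, is the limit")
```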

Ultimately, it really comes down to total cost.  Building a 10-lane highway versus a 1-lane highway is a lot more expensive, and 10Gbps cabling is more expensive than 1Gbps cabling as well.  However, they don't build one-lane highways any more.  Why?  Because adding a second (or third or fourth) lane is only marginally more expensive than one lane, and the benefit for future expansion is certainly there.  If you're buying new switches and rewiring anyway, the marginal extra cost is easily justified, as you'll likely avoid the need to upgrade (repave the road) in the future.
sfabsAuthor Commented:
Brilliant... just brilliant BillBach

Exactly the information I was looking for... the analogy was perfect as well, cars, motorways... clear as day for me, thank you.

Just quickly: you have mentioned fibre and bonding to get 20 gig to the switches. If, length-wise (90m), Cat 6 is possible, is it not better (cheaper) to stick with Cat 6 rather than fibre? And is bonding the only way to make use of 2 connections (won't a managed switch balance traffic across 2 connections)?

That's the last link in my chain, then I'm ready to go...

Thanks again
Bill BachPresident and Btrieve GuruCommented:
While you might be within the allowable line length for Cat 6, whenever you mention going between buildings I worry about a condition called a floating ground. The ground potential of a circuit is relative to the earth, but the potential of the earth can change in certain conditions. Two buildings will likely have two different grounding points, and the potential of those points can therefore vary, unless this is otherwise addressed. The net result, if you connect the two buildings with a copper wire, is that voltage can be carried across the link if the grounds vary, especially during storms. This can cause damage to equipment.  By using fibre connections, you eliminate any worry about line length, as well as grounding issues.

Normally, switches will not use two links to communicate with each other, and although some switches will automatically detect this condition through the spanning tree protocol, having such a link is usually an error condition because it can create a switching loop. Check with the vendor of your switch equipment to see how they recommend handling either redundant or bonded links.

sfabsAuthor Commented:
Outstanding Knowledge from the Expert, absolutely outstanding!