
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 525

Hardware Recommendations - Distributed Trunking for dual NIC servers

I am looking for ideas on a set of switches to replace my HP ProCurve 26xx and 28xx units.  Right now we have about six 28xx switches and one 26xx, with one of the 28xx acting as a "warm" standby in case of failure.  By "warm" standby I mean it is powered on and attached to the backplane, but no clients are connected to it; clients will be moved to this switch only if one of the other switches fails.  So it's kind of like a cold standby, except it is already connected at the tail end of the backplane.

I want some recommendations for 24-port or 48-port replacements where I would still keep one as a "warm" standby, but I also want to be able to do "distributed link aggregation" across switches as shown here...

[Diagram: Distributed Link Aggregation]

The servers in the diagram are connected to switches at the top and the bottom of the switch stack, so if there's a failure, clients will have redundant connections to each NIC.  This means the switches need to support distributed link aggregation across different devices in the stack.
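For reference, on HP switches that support distributed trunking (e.g., the ProVision-based 5400zl/8200zl lines), the setup looks roughly like the sketch below. The port numbers, trunk names, and exact keywords here are assumptions from memory of ProVision firmware; verify them against the configuration guide for your model before relying on this.

```
! Sketch only -- ProVision-style distributed trunking; ports/names are assumptions.
! Repeat on each of the two DT peer switches:
trunk 21 trk1 trunk                ! inter-switch connect (ISC) link between the peers
switch-interconnect trk1           ! designate trk1 as the ISC
trunk 5 trk2 dt-lacp               ! server-facing port joins the distributed LACP trunk
```

The server then runs a single LACP bond whose two links land on two different physical switches, which is exactly the cross-switch redundancy described above.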

What do the experts recommend for switch hardware that supports this feature and is still easy on the budget?  PoE is not required for this application.  Or do you have a recommendation for a different approach altogether to make the network robust and resistant to failure?
Asked by: Trenton Knew
3 Solutions
 
gheist commented:
It all depends on client software support. Dual network paths are good, but you need to test them in production-like conditions.

If you allocate more than two ports for switch interconnects, you get better resilience than the ring topology depicted:
https://en.wikipedia.org/wiki/Network_topology#Mesh
 
kevinhsieh commented:
Windows Server 2012 and above have native NIC teaming. If you use switch-independent teaming, you can connect your servers to two or more independent switches with zero special configuration. I also use Broadcom or Intel NIC teaming for older OSes like Windows 2008 R2, and that works fine as well. We have moved network connections on live production servers during the day without issue. My switches are not stacked. I am using various Cisco switches, and the only "special" configuration is that those ports are configured for edge devices, so spanning tree is turned off for them. When pulling the active connection we usually drop zero or one pings from Windows.
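On Windows Server 2012 and later, a switch-independent team like the one described can be created with a single PowerShell cmdlet. This is a minimal sketch; the team and adapter names are assumptions for illustration (check `Get-NetAdapter` for your actual adapter names):

```powershell
# Sketch: switch-independent NIC team -- no switch-side configuration needed,
# so the two members can plug into two completely independent switches.
# "NIC1"/"NIC2" are assumed adapter names.
New-NetLbfoTeam -Name "ServerTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent
```

Because the switches are unaware of the team, failover is handled entirely by the host, which is why no stacking or distributed trunking support is required for this approach.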
 
Trenton Knew (Owner / Computer Whisperer, Author) commented:
These particular servers are on 2003 for the moment, and we are using Broadcom NIC teaming on those devices.  Right now they are pretty stable, but if we remove one of the NIC connections, re-convergence seems rather slow; sometimes we have to reboot the server to get it to function on the network at all.  I'm sure we could do a better configuration, but my CIO wants to replace the switch stack with newer hardware anyway (most of the ports are 10/100, and he wants to go 1GbE on the client nodes/access layer).

Now I'm considering putting two Netgear M5300-28G switches at the top of the stack with Virtual Chassis stacking and just using smart switches for the rest of the access layer.  That still means around $3200 for those two to get the distributed trunk, but at least I can save money on the other access-layer devices and have a 10Gb backplane between them.
 
Aaron Tomosky (Technology Consultant) commented:
I was looking at those Netgears and decided to spring for Brocade ICX 6450s in a stack, and I've never been happier. Before the Brocades I had just Netgear smart switches and would get slow performance or intermittent issues that were fixed by a switch reboot. They just can't handle a larger network with decent traffic. However, smart switches might be fine for your access layer; I think it was the trunks and VLANs that were making my smart switches work too hard.

If you virtualize your servers, even with free ESXi, you can run trunk ports to two NICs from two different switches, and ESXi load-balances across them just fine. Technically one connection to one workstation can only use the bandwidth of one NIC, but workstations are only 1Gb anyway. No setup in the switches is required at all; it just works. You can even unplug a NIC and it keeps working; move it to another switch, and it still works.
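As a sketch of what that looks like on an ESXi standard vSwitch (the vmnic and vSwitch names below are assumptions; check `esxcli network nic list` for yours):

```
# Sketch: add a second uplink to vSwitch0 and make both uplinks active,
# with each vmnic cabled to a different physical switch.
esxcli network vswitch standard uplink add \
    --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
```

With the default "route based on originating port ID" policy, each VM's traffic is pinned to one uplink at a time, which is why no LACP or switch-side configuration is needed for failover to work.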
 
Trenton Knew (Owner / Computer Whisperer, Author) commented:
Thanks for all your input, experts.  

I checked out the Brocade switches.  The little 8-port units intrigued me, but I couldn't find anywhere in the documentation that they support cross-stack link aggregation, which is kinda the main feature I was looking for for my servers.  I just want to eliminate a single point of failure, but the bigger ICX switches cost considerably more.

Allocated extra points to Aaron because he was the only one who actually recommended specific hardware.
 
Trenton Knew (Owner / Computer Whisperer, Author) commented:
On another note...  TP-Link has this one coming out soon.  I hate that it only has a one-year warranty, but I'm intrigued and wonder what the price point and release date will be.

TP-Link T3700G-28TQ
