Cisco 3750 for VMware ESXi 5.5

Hello, I am building a small new VMware cluster with 2x Cisco C220 rack servers and one NetApp FAS2220 over NFS (GbE).

When searching for switches I came across the 3750s.

Are these reliable for virtualization? I know they support VLAN trunking and stacking, of course; I was just wondering if there might be something better in this price range.

Are the 3750s capable of line-rate forwarding?

We plan to use 6 GbE NICs per server: one pair for management, one for vMotion/FT, and one for storage (NFS).

mcsween (Sr. Network Administrator) commented:
You don't need the Layer 3 capabilities that come with that switch. There are many cheaper options that will be just as reliable and perform just as well. There is no sense in paying for Layer 3 if you are doing everything at Layer 2; you only need Layer 3 if you want the switch to do IP routing.

If you are going to use iSCSI, I suggest checking with your SAN vendor for a list of approved switches. I use EqualLogic SANs here and they do not support Cisco, so I use a pair of Dell PowerConnect 6224s.

If you want a Cisco switch, I suggest the Catalyst 2960S-48TS-L or the Catalyst 2960S-24TS-L (the same switch family; one has 24 ports, the other 48). Just make sure you get one licensed for LAN Base and not LAN Lite, as many features are disabled in the LAN Lite version.

Overview - http://www.cisco.com/c/en/us/products/collateral/switches/catalyst-2960-series-switches/product_data_sheet0900aecd80322c0c.html

24 Port Switch - http://www.amazon.com/Catalyst-WS-C2960S-24TS-L-2960-Gigabit-Switch/dp/B003ICXAWW/ref=sr_1_2?s=electronics&ie=UTF8&qid=1394047037&sr=1-2&keywords=2960s-24ps-l

48 Port Switch - http://www.amazon.com/Cisco-WS-C2960S-48TS-L-Catalyst-Series-Switch/dp/B003ICX55Y/ref=pd_bxgy_e_text_y

Stacking Module (if you want to stack them) - http://www.amazon.com/Cisco-Optional-Flexstack-Stacking-C2960S-STACK/dp/B003M4H2ES/ref=sr_1_1?s=electronics&ie=UTF8&qid=1394047346&sr=1-1&keywords=2960s+stack+module

The stack modules come with 0.5-meter cables, so you shouldn't need to buy any cables.
I can think of no reason the 3750s wouldn't work for you. There are the newer 3850s, but if you are talking about the Gigabit 3750, it should work for you.
sk391 (Author) commented:
Yes, I see the 3850 is mostly marketed as an access switch, with functions we don't really need. Only 6 ports will be used on each switch, basically as top-of-rack switches for 2 small racks.

Yes, the Gigabit version is the one we are looking at.

Yes, 3750s would work well for you. I was going to suggest in your previous thread that you purchase a switch to connect these two components. I actually left you a hint by describing how they are connected in a FlexPod.

I think it's a good option to go ahead with the 3750 switches.
sk391 (Author) commented:
Thanks, I was looking at the 2960s too. I agree Layer 3 features are not needed on the VMware switches.

Is there any major advantage to stacking the switches, instead of just connecting both hosts to each? We will be using vSphere 5.5.
mcsween (Sr. Network Administrator) commented:
The stack cable for those switches provides 24 Gbps of throughput, versus the 1 Gbps or 10 Gbps (depending on the port and SFP module you use) available on a standard trunk uplink.

You also manage the stack as one switch, so you only assign one IP address, log in to one place, etc. Port 1 on switch 1 is gi1/0/1; on switch 2 it would be gi2/0/1, and so on.
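As a rough sketch of how that single-switch management is set up (the member numbers and priority values below are assumptions for illustration, not from this thread), the stack members can be numbered and a master elected like this:

```
! global configuration mode on the stack (2960S/3750-style stacking)
switch 1 priority 15   ! highest priority wins the master election
switch 2 priority 10
! once stacked, interfaces are named gi<member>/0/<port>:
! gi1/0/1 = port 1 on member 1, gi2/0/1 = port 1 on member 2
```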

I would use two switches with stacking modules and plug one NIC from each server into each switch. So if you are planning on using 2 ports for NFS, plug one into switch 1 and one into switch 2, and do the same with the NFS server. This allows for full redundancy. Use round robin for your load balancing on the ESXi servers. If you go to iSCSI later, look for a SAN that has a multipathing driver for ESXi, like EqualLogic.
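A minimal port configuration for the NFS-facing links might look like the following sketch; the interface number, VLAN ID, and description are assumptions for illustration:

```
! repeat on each stack member so every host has one NFS NIC per switch
interface GigabitEthernet1/0/3
 description ESXi-1 NFS uplink
 switchport mode access
 switchport access vlan 20    ! assumed storage VLAN
 spanning-tree portfast
```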

If you are on a budget, you can always use etherchannel and bundle a couple of front-panel ports on each switch to link them together. This gives less throughput but is still likely sufficient.
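If you go the etherchannel route instead of stacking, a sketch of an LACP bundle between the two switches would look like this (the port range and channel-group number are assumptions):

```
! on each switch: bundle two front-panel ports into one logical trunk
interface range GigabitEthernet1/0/23 - 24
 switchport mode trunk
 channel-group 1 mode active   ! LACP
!
interface Port-channel1
 switchport mode trunk
```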
sk391 (Author) commented:
Yes, that's a good point. The storage is a NetApp FAS2220, which has two storage controllers, each with four GbE NICs, for the connection to the hosts, so we can probably do two 2-port GbE etherchannels (each of the two hosts has 3 NIC pairs, and one pair will be used for storage traffic).
mcsween (Sr. Network Administrator) commented:
If you are using that as your SAN, I strongly suggest iSCSI rather than NFS. Then install the multipath module on your ESXi hosts. Do not set up etherchannel on any ports connecting to hosts (SAN or ESXi), as the multipathing module will handle load balancing; only set up etherchannel on the trunk uplink connecting the switches.
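On ESXi 5.5 that setup is roughly the following command sequence; the adapter name (vmhba33), the vmk interfaces, and the device ID are placeholders you would replace with your environment's values:

```
# enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# bind one VMkernel port per physical NIC to the adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# set Round Robin as the path selection policy on a LUN
esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR
```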

It looks like this SAN also has VMware-integrated management, which likely means there is a virtual appliance you set up in VMware. I strongly suggest you do this and use the integrated manager to take your SAN-level snapshots, if you plan on doing that.
I agree that the Cisco 29xx series switches are better value for money in this scenario.
sk391 (Author) commented:
Thanks for the thoughts

I also saw the Nexus 3048, but its list price is almost double that of the 2960S; it seems they are aimed at very specific use cases, probably large deployments connecting to bigger networks.
Question has a verified solution.
