VMWare NIC teaming benefits?

OK, I'm more of a software guy and I inherited this VMWare setup, so please be gentle.

I have 2 ESXi 4.1 hosts here at my office, Dell PE R710s. Each has an additional 4-port Broadcom NIC, for a total of 8 ports per host.

The outside firm plugged just 1 cable from each host to the switch. I plugged in a few more and I see them working in the vSphere Client.

I see a vSwitch0 was set up with all 8 NICs. I'm pretty familiar with NIC teaming and how I would do this on the VMware side as well as the switch side.

Question is - what should I do? Connect all 8 ports from each host to the switch, then set up an EtherChannel on the switch? Will this get me faster bandwidth from host to host?

Or should I take 4 ports from the host and create an EtherChannel, then take the other 4 ports and create a 2nd EtherChannel?

Or just plug all 8 into my switch and call it a day?
wjconrad Commented:
I would recommend segregating your traffic across three or four separate EtherChannels on separate VLANs: one for management / high-availability heartbeats, one for vMotion, and one for VM traffic (and possibly one for VMkernel storage traffic, iSCSI or FCoE, if you're using those). You'll need to create a separate vSwitch for each.

Please note that individual VMs won't be able to push more than 1 Gbps of traffic unless you install VMware Tools and use a VMXNET adapter, preferably VMXNET3.

Here's a VMware presentation oriented around vSphere 5 that should walk you through the best practices. http://www.vmware.com/files/pdf/support/landing_pages/Virtual-Support-Day-Best-Practices-Virtual-Networking-June-2012.pdf
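
If it helps, here is a rough sketch of what that segregation could look like from the ESXi 4.1 console. The vmnic numbers, port group names, VLAN IDs and IP address below are placeholders, not taken from your setup, so adjust them to match your environment:

    # keep management on vSwitch0, then build separate vSwitches for the other traffic types
    esxcfg-vswitch -a vSwitch1                                # new vSwitch for vMotion
    esxcfg-vswitch -L vmnic2 vSwitch1                         # two uplinks for redundancy
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A vMotion vSwitch1                        # port group for the vMotion VMkernel port
    esxcfg-vswitch -v 20 -p vMotion vSwitch1                  # tag the port group with VLAN 20
    esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 vMotion   # VMkernel interface (enable vMotion on it in the vSphere Client)

    esxcfg-vswitch -a vSwitch2                                # new vSwitch for VM traffic
    esxcfg-vswitch -L vmnic4 vSwitch2
    esxcfg-vswitch -L vmnic5 vSwitch2
    esxcfg-vswitch -A "VM Network 2" vSwitch2                 # port group the VMs connect to
    esxcfg-vswitch -v 30 -p "VM Network 2" vSwitch2           # VLAN 30

    esxcfg-vswitch -l                                         # verify the resulting layout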
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
You have teaming for resilience/failover and additional bandwidth.

You need at least two NICs per service, e.g.:

two NICs for the Management Network
two NICs for vMotion
two NICs for iSCSI
two NICs for the Virtual Machine Network

All of this can be achieved using VLANs or trunked ports.

For iSCSI, MPIO is the recommended option; see my EE article:

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 4.1

To fully configure your system, you need to set up the teaming policy on the host, which controls outbound traffic, and the matching trunk/EtherChannel on the switch, which controls incoming traffic.
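
As a rough illustration of the switch side (Cisco IOS syntax; the interface numbers and VLAN IDs are placeholders): with the default "Route based on originating virtual port ID" teaming policy the uplinks are just individual trunk ports, while a static EtherChannel ("mode on") is only needed if you change the vSwitch load balancing to "Route based on IP hash" (ESXi 4.1 standard vSwitches don't support LACP).

    ! individual trunk ports, default teaming policy on the vSwitch
    interface range GigabitEthernet0/1 - 2
     switchport trunk encapsulation dot1q   ! only needed on switches that also support ISL
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30
     spanning-tree portfast trunk

    ! only if the vSwitch is set to "Route based on IP hash": static EtherChannel
    interface range GigabitEthernet0/3 - 4
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30
     channel-group 1 mode on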
Aravind Sivaraman, Technical Subject Matter Expert, Commented:
I would suggest segregating the Management, vMotion and VM traffic, and if you use IP-based storage (iSCSI or NFS), creating a separate network for that traffic as well.

Create multiple vSwitches as suggested above and assign the uplinks accordingly.
cb_it (Author) Commented:
OK, sorry for abandoning this question. Auditors have been in for the last few weeks, so I've been swamped with them; no fun.

OK, let's change this a bit. All of my VMs are on local storage, no SAN/NAS here. No vMotion.

I have 2 ESXi hosts, and a separate Win2008 physical server with Veeam 7 doing our backups. The Win2008 server also has 4 NICs. Can I set the Win2008 server up as an EtherChannel on my Cisco switch and get faster Veeam backups??

I would also have to set up 4 NICs from the ESXi host as an EtherChannel on my switch as well? Could I get 4 Gbps of throughput for backups? Is that possible?? Thanks.
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
This is now going off-topic, but quickly.....

Yes, you can create a network bond, e.g. 2 Gbps by combining two 1 Gbps Ethernet interfaces; we do this with our backup servers.

I believe Cisco refers to it as a channel group (EtherChannel).
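
For reference, the switch side of such a bond could look something like this (Cisco IOS; the interface numbers and VLAN are placeholders, and the channel mode has to match what the Broadcom teaming software on the Windows server is configured for, LACP versus a static team):

    interface range GigabitEthernet0/10 - 13
     switchport mode access
     switchport access vlan 10
     channel-group 2 mode active            ! LACP; use "mode on" for a static (non-LACP) team
    !
    port-channel load-balance src-dst-ip    ! global hashing method, applies to all port channels

One thing to keep in mind: the hash pins any single TCP stream to one member link, so an individual backup stream still tops out at 1 Gbps; the aggregate only helps when several streams or jobs run in parallel.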
