  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1582

Configuring VMware networking for a server with 4 network ports (Dell R710)

Hi VMware experts.

Currently I'm running ESXi (the paid-for version) on a couple of Dell R710s.

At the moment each server is configured via vSphere so that the Management Network and all the VMs on the server use just one of the 4 physical network cards.

Obviously this isn't making best use of the hardware. How can I set this up to make best use of the four cards, shared amongst the VMs? I'm using local storage on the server, so I don't need to worry about SAN links.

[vSphere diagram of network config]

I'm looking for a dummies' guide with a little bit of hand-holding. Before suggesting RTFM: I've had a look at the VMware docs and need further guidance. I'm familiar with networking, but not VMware networking...

Thanks!
Asked by: jmsjms
1 Solution
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
If you have no SAN, no vMotion, no iSCSI, then this gets pretty simple...

You need to decide two things. First, do you want your management network ports to share the same network as your Virtual Machines?

Second, do you have a separate management network in your organisation? (e.g. a different IP address range and network from your production servers)

If so, you will need to split your VMs and Management onto two different vSwitches, and I would suggest at least two NICs on each for resilience.

If you do not have a dedicated Management Network, and are happy for management traffic to share the same network as VM traffic, then just add all four NICs to the same vSwitch.

You will then need to select a Teaming Policy, which depends on the physical network hardware you have...

e.g. build network trunks with 4 x ports, or do not use trunks.

(trunk ports are best)
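The "just add all four NICs to the same vSwitch" step can be sketched from the command line as well as in the vSphere Client. This is a minimal sketch assuming ESXi 5.x+ `esxcli`, the default `vSwitch0`, and uplink names `vmnic0`-`vmnic3` (your NIC names and vSwitch name may differ, so check first):

```shell
# List the physical NICs to confirm their names (vmnic0..vmnic3 assumed here)
esxcli network nic list

# Add the remaining three uplinks to the existing vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0

# Verify that all four uplinks are now attached
esxcli network vswitch standard list --vswitch-name=vSwitch0
```

The same result can be achieved in the vSphere Client via vSwitch Properties, Network Adapters, Add. On older ESX/ESXi 4.x hosts the equivalent command would be `esxcfg-vswitch -L vmnic1 vSwitch0`.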
 
jmsjms (Author) commented:
Hi Andrew, thanks for your reply.


>Do you want your management network ports to share the same network as your Virtual Machines?

I get the feeling that separating them onto separate networks would be viewed as best practice? In real life, for a smallish company, is it worthwhile? Currently they are on the same IP subnet and physical network as the VMs.

Could we start by sharing, and when things have settled down I'll look at splitting them up?

>Do you have a separate management network in your organisation? (e.g. a different IP address range and network from your production servers)

By Management Network, do you mean the IPs of the ESXi host? As above.

>If so, you will need to split your VMs and Management onto two different vSwitches, and I would suggest at least two NICs on each for resilience.

So at the moment I don't, but I'll look at it again at a future date. Thanks.

>If you do not have a dedicated Management Network, and are happy for management traffic to share the same network as VM traffic...

>then just add all four NICs to the same vSwitch.

OK. How do I do this? Is it just a matter of going to the vSwitch properties, then adding the network adapters?

Can I add the NICs to the vSwitch without downtime?

>You will then need to select a Teaming Policy, which depends on the physical network hardware you have...

I can set up a LAG group on a switch for teaming. Do I have to do anything on the server end?

Does it matter if I leave it for a while without teaming? If I don't set up a team, what happens?

>e.g. build network trunks with 4 x ports, or do not use trunks.

>(trunk ports are best)

Could I have a NIC per VM to separate traffic, or is it better to run a team and let VMware sort it out?
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Okay, if you are a small-ish installation, or have no management network, keep it simple.

So yes, just add the additional network interfaces to the vSwitch.

Edit the vSwitch Properties, and add the NICs.

We always recommend that any networking work be done outside of production hours. It should not cause massive amounts of downtime (e.g. 1-2 pings lost), but if this is a production server with VMs and you make a human error, ALL will be down!

There are at least two teaming policies:

Route Based on IP Hash (which needs physical switch configuration, e.g. a trunk; non-LACP, static only).

The default policy, Route Based on Originating Virtual Port ID, will work for you with no switch config...

As for allocating a NIC per VM... you'll get more bandwidth per VM with all four on the vSwitch.
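The teaming policy lives on the vSwitch and can be inspected and changed with `esxcli`. A rough sketch, assuming ESXi 5.x+ and the default `vSwitch0` (names are illustrative):

```shell
# Show the current load-balancing (teaming) policy on vSwitch0.
# The default is "portid" (Route Based on Originating Virtual Port ID),
# which needs no physical switch configuration.
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

# To change to IP hash instead (requires a static, non-LACP
# EtherChannel/trunk on the physical switch ports first):
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
```

On older hosts the same setting is on the vSwitch Properties, NIC Teaming tab in the vSphere Client.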
 
jmsjms (Author) commented:
>There are at least two teaming policies:

Do both have to be configured, or is it a choice of one?

>Route Based on IP Hash (which needs physical switch configuration, e.g. a trunk; non-LACP, static only).

I could set up a team on the switch, is that what you mean? What I'm concerned about is whether I need to set up a corresponding team on the server.

>The default policy, Route Based on Originating Virtual Port ID, will work for you with no switch config...

Sorry, I don't understand. Do you mean just plug the cables in and add the NICs to the vSwitch without further config?

>As for allocating a NIC per VM... you'll get more bandwidth per VM with all four on the vSwitch.

Understood, thanks.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
You select the teaming policy on the vSwitch on the ESXi host.

If you do nothing other than allocate and add the 4 NICs and plug them into your physical switch, the default teaming policy will work.

If you want to do anything fancy and change the teaming policy from the default, that requires physical switch config!
 
jmsjms (Author) commented:
>You select the teaming policy on the vSwitch on the ESXi host.

>If you do nothing other than allocate and add the 4 NICs and plug them into your physical switch, the default teaming policy will work.

So, just add the NICs and it will work?

Will they all need a separate IP address, or does the vSwitch sort this out?

>If you want to do anything fancy and change the teaming policy from the default, that requires physical switch config!

Is there much to gain by setting them up as a LAG group on the switch?

I think asking about the teaming policy warrants another question. I'll post one up when I've got it working with the default setup.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
Yes, it will work!

The benefit of creating a correct EtherChannel / static trunk is for data incoming into the ESXi server!

Traffic flows both ways, in and out of the server. The teaming policy on the server affects outgoing traffic only (e.g. from the VMs); incoming traffic is determined by the physical switch configuration.

For production with high bandwidth needs, inbound and outbound:

Create a trunk with 4 x ports, and change the teaming config to IP Hash!

The teaming policy has to be matched to your physical network switch and its config.

We use IP Hash with HP switches.

http://blog.scottlowe.org/2006/12/04/esx-server-nic-teaming-and-vlan-trunking/

(and LACP is not supported)
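To illustrate how the two sides pair up, here is a hedged sketch assuming an HP ProCurve-style switch with the four server-facing ports on 1-4 (the port numbers and trunk name are illustrative, and other switch vendors use different syntax, so check your switch's documentation):

```shell
# On the HP ProCurve switch: create a static (non-LACP) trunk
# across the four ports that face the ESXi host
trunk 1-4 trk1 trunk

# On the ESXi host: match it by setting the vSwitch
# load-balancing policy to IP hash (ESXi 5.x+ esxcli syntax)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
```

If the two sides do not match (e.g. IP hash on the host with no static trunk on the switch, or vice versa), expect MAC flapping and dropped traffic, which is why the default port-ID policy with plain access/trunk ports is the safe starting point.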
 
jmsjms (Author) commented:
Wow, thanks for the info. I'll mark it as answered, as I won't be able to progress until next week.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, commented:
No problems, good luck....
