snyderkv asked:

Standard practice vSwitch design?

We have sites with inconsistent vSwitch configurations, so I wanted to ask for a good vSwitch design document that lays this out based on the most common setups in the field.

One layout we have is 4 vmnics going to two upstream Nexus switches:
vmnic0 and vmnic2 going to Nexus A
vmnic1 and vmnic3 going to Nexus B

vmnic0 and vmnic1 connected to Mgmt, server port groups and vMotion.
vmnic2 and vmnic3 connected to Mgmt, Storage and vMotion.

It's all on vSwitch0. It just doesn't look right, and there's configuration drift among the other hosts as well.
Andrew Hancock (VMware vExpert PRO / EE Fellow / British Beekeeper) replied:

Basically what you should have is at least dual network interfaces per service,

e.g.

Management Network - 2 nics
vMotion - 2 nics
Storage - 2 nics

Virtual Machines - 2 nics

and then the teaming policy for each pair of NICs needs to be configured to match your physical switches.

Then Management, vMotion and Storage should all be on separate individual networks/subnets or VLANs - whatever is in use at your organisation.

You can then also spread vMotion, Storage, Management, Virtual Machines across vSwitches.

It's very flexible, and there is not really a right or wrong way. BUT it should be identical on ALL hosts to make it easier to document and rebuild.
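To make "identical on ALL hosts" something you can actually check, here is a minimal sketch in plain Python (no VMware SDK) of describing that reference layout as data and comparing each host against it for drift. The vmnic numbers and VLAN IDs are illustrative assumptions, not values from your environment.

# Reference layout: two uplinks per service, the same on every host.
# All vmnic numbers and VLAN IDs below are illustrative assumptions.
REFERENCE_LAYOUT = {
    "Management":       {"uplinks": ["vmnic0", "vmnic1"], "vlan": 10},
    "vMotion":          {"uplinks": ["vmnic0", "vmnic1"], "vlan": 20},
    "Storage":          {"uplinks": ["vmnic2", "vmnic3"], "vlan": 30},
    "Virtual Machines": {"uplinks": ["vmnic2", "vmnic3"], "vlan": 40},
}

def drift(host_name, actual_layout):
    """Compare one host's port-group layout against the reference and
    return a list of human-readable differences (empty = compliant)."""
    problems = []
    for pg, want in REFERENCE_LAYOUT.items():
        got = actual_layout.get(pg)
        if got is None:
            problems.append(f"{host_name}: port group '{pg}' is missing")
            continue
        if sorted(got["uplinks"]) != sorted(want["uplinks"]):
            problems.append(f"{host_name}: '{pg}' uses {got['uplinks']}, "
                            f"expected {want['uplinks']}")
        if got["vlan"] != want["vlan"]:
            problems.append(f"{host_name}: '{pg}' is on VLAN {got['vlan']}, "
                            f"expected {want['vlan']}")
    return problems

# Example: a host where vMotion has drifted onto the storage uplinks.
host_a = {
    "Management":       {"uplinks": ["vmnic0", "vmnic1"], "vlan": 10},
    "vMotion":          {"uplinks": ["vmnic2", "vmnic3"], "vlan": 20},
    "Storage":          {"uplinks": ["vmnic2", "vmnic3"], "vlan": 30},
    "Virtual Machines": {"uplinks": ["vmnic2", "vmnic3"], "vlan": 40},
}
for line in drift("esx-a", host_a):
    print(line)

In practice you would pull the actual layout from each host (e.g. via PowerCLI or the vSphere API) rather than typing it in, but the comparison logic is the same.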
snyderkv (Asker) replied:
OK, but what if you only have 4 nics? That's my situation. Management is already dedicated to management traffic via its checkbox, and vMotion as well, so perhaps they can both use the same two nics? Then put storage on its own two nics? Storage traffic is already on different subnets, so that traffic is already separated.

So why not use all 4 nics for all traffic, since they are already tagged for their separate traffic and on separate subnets/VLANs? Rough sketch of what I mean below.
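Here is a sketch (plain Python, all vmnic assignments assumed, and a simplified model of ESXi's explicit failover order) of that idea: one vSwitch with all four uplinks, but a per-port-group active/standby order so each traffic class normally stays on its own NIC pair and only fails over to the other pair if an uplink or its upstream Nexus dies.

# One vSwitch, four uplinks, per-port-group failover order.
# Simplified model: the first healthy uplink in the list carries the traffic.
FAILOVER_ORDER = {
    "Management":       ["vmnic0", "vmnic1", "vmnic2", "vmnic3"],
    "vMotion":          ["vmnic1", "vmnic0", "vmnic3", "vmnic2"],
    "Storage":          ["vmnic2", "vmnic3", "vmnic0", "vmnic1"],
    "Virtual Machines": ["vmnic3", "vmnic2", "vmnic1", "vmnic0"],
}

def active_uplink(port_group, failed_uplinks):
    """Return the uplink a port group lands on given a set of failed vmnics."""
    for nic in FAILOVER_ORDER[port_group]:
        if nic not in failed_uplinks:
            return nic
    return None  # all uplinks down

# Simulate losing Nexus A (which takes vmnic0 and vmnic2 with it):
for pg in FAILOVER_ORDER:
    print(pg, "->", active_uplink(pg, failed_uplinks={"vmnic0", "vmnic2"}))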

Also, why should I add another vSwitch if the upstream switches are the single point of failure? Some sites have a separate vSwitch for NFS, but I don't see how that makes them redundant, as the vSwitches never really fail anyway.

VMware has to have some detailed docs on this.
Andrew Hancock replied:

NIC teams or multipathing provide redundancy, but if there is no redundancy upstream, then there is little point...

Every organisation's network is different, and ESXi provides lots of flexibility to craft your own.

VMware does have documentation, but it's fairly basic:

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/virtual_networking_concepts.pdf

You need to establish whether you have enough bandwidth with 2 or 4 nics for everything.

Separate vSwitches separate traffic, and iSCSI needs MPIO, so a NIC per VMkernel port - sometimes it can be difficult to bung it all on a single vSwitch (because you need to use port overrides and binding).
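To illustrate that binding rule as a sketch (plain Python, port-group names assumed): with iSCSI port binding, each iSCSI VMkernel port group must override the vSwitch teaming policy so it has exactly one active uplink, with every other uplink set to Unused (standby is not allowed for bound ports).

# Teaming overrides for two iSCSI VMkernel port groups.
# Port-group names and vmnic assignments are illustrative assumptions.
ISCSI_PORTGROUPS = {
    "iSCSI-A": {"active": ["vmnic2"], "standby": [], "unused": ["vmnic3"]},
    "iSCSI-B": {"active": ["vmnic3"], "standby": [], "unused": ["vmnic2"]},
}

def binding_errors(portgroups):
    """Check each iSCSI port group against the one-active/no-standby rule."""
    errors = []
    for name, team in portgroups.items():
        if len(team["active"]) != 1:
            errors.append(f"{name}: needs exactly one active uplink, "
                          f"has {len(team['active'])}")
        if team["standby"]:
            errors.append(f"{name}: standby uplinks {team['standby']} "
                          f"must be set to Unused")
    return errors

print(binding_errors(ISCSI_PORTGROUPS) or "OK for port binding")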