snyderkv
asked on
Standard practice vSwitch design?
We have sites with inconsistent vSwitching configurations so I wanted to ask for a good vSwitching document that lays this out based on most setups in the field.
One layout we have is 4 vmnics going to two upstream Nexus switches.
vmnic0 and 2 going to Nexus A
vmnic1 and 3 going to Nexus B
vmnic0 and vmnic1 connected to Mgmt, server port groups and vMotion.
vmnic2 and vmnic3 connected to Mgmt, Storage and vMotion.
It's all on vSwitch0. It just doesn't look right, and there's configuration drift among other hosts as well.
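For reference, this kind of layout (and the drift between hosts) can be dumped per host with esxcli, so the configs can be compared side by side. A minimal sketch, run on each ESXi host:

```shell
# Show each standard vSwitch with its uplinks (vmnic0..vmnic3)
esxcli network vswitch standard list

# Show which port groups (Mgmt, vMotion, Storage, VM networks) sit on which vSwitch
esxcli network vswitch standard portgroup list

# Show the VMkernel interfaces and the services/subnets they carry
esxcli network ip interface ipv4 get
```

Diffing the output of these three commands across hosts makes the inconsistencies obvious.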
ASKER
Ok but what if you only have 4 nics? That's my situation. Management is already dedicated to Management traffic via check box and vMotion as well so perhaps they can both use the same two nics? Then put storage on its own two nics? Storage traffic is already on different subnets so that traffic is already separated.
So why not all 4 nics for all traffic since they are already tagged for their separate traffic and on separate subnets/vlans.
Also, why should I add another vSwitch if the upstream switches are the single point of failure? Some sites have a separate vSwitch for NFS, but I don't see how that makes them redundant, as the vSwitches never really fail anyway.
VMware has to have some detailed docs on this.
NIC teams or multipathing provide redundancy, but if there is no redundancy upstream, then there is little point...
Every network organisation is different, and ESXi provides lots of flexibility to craft your own.
VMware does have documentation, but it's fairly basic:
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/virtual_networking_concepts.pdf
You need to establish whether you have enough bandwidth with 2 or 4 nics for everything.
Separate vSwitches separate traffic, and iSCSI needs MPIO, so a NIC per VMkernel port; sometimes it can be difficult to bung it all on a single vSwitch
(because you need to use port overrides and binding).
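To illustrate the port-override-and-binding point: each iSCSI VMkernel port gets exactly one active uplink (overriding the vSwitch teaming policy), then each is bound to the software iSCSI adapter. A sketch only; the port group names, vmk numbers and the adapter name vmhba33 are assumptions for your environment:

```shell
# Pin each iSCSI port group to a single active uplink (failover override)
esxcli network vswitch standard portgroup policy failover set -p iSCSI-A --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI-B --active-uplinks vmnic3

# Bind the two VMkernel ports to the software iSCSI adapter for MPIO
# (vmhba33 is an example adapter name - check yours with: esxcli iscsi adapter list)
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```

This is why stuffing iSCSI onto the same vSwitch as everything else gets fiddly: every iSCSI port group needs its own override, with all other uplinks unused.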
ASKER
I found some best practices and standard designs that are much better than that outdated vSphere 3.0 doc.
https://wahlnetwork.com/2012/04/27/nfs-on-vsphere-technical-deep-dive-on-multiple-subnet-storage-traffic/
http://www.kendrickcoleman.com/index.php/Tech-Blog/vsphere-and-vcloud-host-10gb-nic-design-with-ucs-a-more.html
https://docs.vmware.com/en/VMware-Validated-Design/5.0.1/com.vmware.vvd.sddc-nsxt-design.doc/GUID-04E18B47-FE70-4200-8EC3-720F38B2016E.html
http://www.kendrickcoleman.com/index.php/Tech-Blog/vmware-vsphere-5-host-nic-network-design-layout-and-vswitch-configuration-major-update.html
ASKER CERTIFIED SOLUTION
e.g.
Management Network - 2 nics
vMotion - 2 nics
Storage - 2 nics
Virtual Machines - 2 nics
and then the teaming policy for each NIC pair needs to be configured to match your physical switches.
Then Management, vMotion and Storage should all be on separate networks/subnets or VLANs, whatever is in use at your organisation.
You can then also spread vMotion, Storage, Management, Virtual Machines across vSwitches.
It's very flexible, and there is not really a right or wrong way. BUT it should be identical on ALL hosts to make it easier to document and rebuild.
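As a worked sketch of the above on a single vSwitch, using per-portgroup active/standby overrides so each function effectively gets its own NIC pair (the names, VLAN IDs and vmnic assignments are examples, not a prescription):

```shell
# Attach the uplinks to the vSwitch
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic0
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic1

# Management: VLAN 10, active on vmnic0, standby on vmnic1
esxcli network vswitch standard portgroup add -v vSwitch0 -p Management
esxcli network vswitch standard portgroup set -p Management --vlan-id 10
esxcli network vswitch standard portgroup policy failover set -p Management \
    --active-uplinks vmnic0 --standby-uplinks vmnic1

# vMotion: VLAN 20, mirrored policy so it normally rides the other NIC
esxcli network vswitch standard portgroup add -v vSwitch0 -p vMotion
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 20
esxcli network vswitch standard portgroup policy failover set -p vMotion \
    --active-uplinks vmnic1 --standby-uplinks vmnic0
```

Scripting it like this (or via PowerCLI/host profiles) is also the easiest way to keep the config identical on all hosts and kill the drift.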