All Trunks or Individual VLAN to each ESX host's NIC port?

I plan to connect 12 LAN patch cords from a pair of stacked Cisco
Access LAN switches to a pair of ESXi hosts.

I have 2 options:

Option 1:
a) 1 port per host to each switch's port that is assigned to the
    Management VLAN
b) 1 port per host to each switch's port that is assigned to the
    vMotion VLAN
c) the remaining 10 ports per host: 5 to each switch, landing on
    switch ports assigned to the various data/prod/user-access VLAN(s)

Option 2:
a) all switch ports are trunked (with the Management, vMotion and
    data/prod VLANs permitted on every port of both switches), so I
    just need any 6 ports on each host going to each switch (since
    all ports on the switches are identical)

I think option 2 has better flexibility.
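
For clarity, on the switch side the two options would look roughly like
this (interface numbers and VLAN IDs below are only placeholders, not
our actual numbering):

    ! Option 1: each port carries a single access VLAN
    interface GigabitEthernet1/0/1
     description ESXi-host1 Management
     switchport mode access
     switchport access vlan 100
    !
    ! Option 2: every port is an identical 802.1Q trunk
    ! (some platforms also need "switchport trunk encapsulation dot1q")
    interface range GigabitEthernet1/0/1 - 6
     description ESXi-host1 uplinks
     switchport mode trunk
     switchport trunk allowed vlan 100,200,10,20,30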

However, which of the above two options is better in terms of:
a) suppose one switch fails completely; I would like minimal downtime
    for the VMs.  With Option 2, I have seen roughly half of the VMs on
    each ESXi host become unpingable for about 5 seconds before they
    become pingable again.  Is there any configuration that avoids even
    this 5-second outage?

b) I think with Option 1 there could be an STP reconvergence delay
     (I can't recall the exact term for Spanning Tree Protocol having
     to recalculate the topology), which could be around 30 seconds?

c) which of the two options is best practice, and why?

d) there was a bug in ESXi 5.1 that caused crippling unicast traffic
     flooding; which of the two options would isolate/contain such an issue better?
sunhux asked:

sunhux (Author) commented:
e) is VLAN tagging essential, and which of the two options can
    support VLAN tagging for ESXi?

f) is any LACP needed at the switch end, and is LACP a good
   feature to have?

I suppose for Option 2 we need to set each switch port to "trunk" with
"portfast" - or does Option 1 need it as well?
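
By "trunk ... portfast" I mean something like the following on the
Option 2 ports (a sketch only; on newer IOS versions the keyword may be
"spanning-tree portfast edge trunk"):

    interface range GigabitEthernet1/0/1 - 6
     spanning-tree portfast trunk
    !
    ! Option 1's access ports would use plain "spanning-tree portfast" instead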
sunhux (Author) commented:
g) Suppose we're using an F5 LTM load balancer in our environment;
    which of the two options is more suitable?
sunhux (Author) commented:
Note that our pair of F5 LTMs will be physically connected to the same
access switches: one LTM to each switch.

Busbar (Solutions Architect) commented:
If the switches are stacked and one fails, you will still be able to reach the VLANs over the other switch, but you will lose some bandwidth.

My suggestion: create a trunk on all the ports, create vSwitches on individual ports (within ESXi), and configure vMotion, management and prod traffic on separate vSwitches.
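
On the ESXi side that could be sketched roughly like this (vSwitch,
vmnic and port-group names are only examples, run per host):

    # Extra vSwitch for prod traffic with two trunked uplinks
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
    # Port group tagged for one of the prod VLANs
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Prod-VLAN10
    esxcli network vswitch standard portgroup set --portgroup-name=Prod-VLAN10 --vlan-id=10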
Andrew Hancock (VMware vExpert / EE MVE^2, VMware and Virtualization Consultant) commented:
Use VLANs and reduce network cabling.

Create trunks and run multiple VLANs over them, but be careful: with multiple VLANs inside a trunk it can be difficult to monitor performance on each individual VLAN.
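
One quick way to at least see which VLANs are riding the trunked
uplinks on a host (standard vSwitch example; output varies by ESXi
version):

    # Lists every port group with its VLAN ID and active uplinks
    esxcli network vswitch standard portgroup list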
KevinSeddon81 commented:
I would create a trunk from each physical NIC on the ESXi hosts to the switch stack, with each trunk landing on a different switch within the stack.
Then, within ESXi, create virtual networks associated with each VLAN, load-balanced across the physical adapters.

That way it is a tidier setup, and you get network redundancy from the load-balanced network adapters and the multiple switches in the stack.

I also use virtualised and physical F5 load balancers in my office, and the virtual ones have their external interface assigned to its own VLAN. That way I can have the F5 move between the hosts of my ESXi cluster without having to dedicate physical adapters on the hosts. Our physical unit doesn't have this setup and has to be directly attached to the firewall, so it is nowhere near as flexible.
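
A minimal sketch of that per-VLAN port group setup with both adapters
load-balanced (uplink and port-group names are assumed, not from this
thread):

    # Both uplinks active, VMs spread by originating virtual port ID
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
        --load-balancing=portid --active-uplinks=vmnic0,vmnic1
    # One port group per VLAN, tagged on the vSwitch
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=Users-VLAN30
    esxcli network vswitch standard portgroup set --portgroup-name=Users-VLAN30 --vlan-id=30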
Mohammed Khawaja (Manager - Infrastructure: Information Technology) commented:
My recommendation would be to create trunks for all ports.  Dedicate port 1 on each switch to management with port 2 as standby for management, and port 2 to vMotion with port 1 as standby for vMotion.  Create a vDS with the other four ports to be used for VM traffic.
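
Assuming the first two uplinks on each host are vmnic0 and vmnic1 and
the default port group names are in use (both assumptions), the
active/standby split could be sketched per host as:

    # Management: vmnic0 active, vmnic1 standby
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1
    # vMotion: vmnic1 active, vmnic0 standby
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="vMotion" --active-uplinks=vmnic1 --standby-uplinks=vmnic0
    # The vDS for the remaining four uplinks would be built in vCenter rather than esxcli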
sunhux (Author) commented:
Post-closure query:

Will LACP help speed up STP convergence (in the event one of the
switches goes down)?