Solved

Management Interface of ESXi Host

Posted on 2016-10-18
63 Views
Last Modified: 2016-10-22
In the scenario where I have 2 physical network adapters on the ESXi host:
I will go to the DCUI and assign IP address 192.168.1.10 to the ESXi host. Then I will team up both physical adapters, so on the vSwitch all VM port groups as well as VMkernel ports will use the two physical NICs as one teamed NIC.
The two physical NICs will then act as one trunk port. I mean, whatever VLAN the VMs are on should be reachable from the other side of the network (assuming the configuration on the physical switches is set up properly).
Now, when accessing the management interface of the ESXi host, will that be just like accessing a VM on the ESXi host? That is, will the connection still go through the trunk port (the teamed NICs), or should I dedicate one physical NIC just for the management interface of the ESXi host?

Thank you
Question by:jskfan
13 Comments
 
LVL 11

Assisted Solution

by:Mr Tortur
Mr Tortur earned 150 total points
ID: 41848272
Hi,
Is there a reason why you did it this way, with a team? An identified bandwidth need?

Because you could choose an easier way:
Put both NICs in the same vSwitch.
Dedicate the ESXi management (VMkernel) port to NIC1, and put NIC2 in standby for that VMkernel port.
Dedicate the VM network (VM port group) to NIC2, and put NIC1 in standby for that port group.
Set VLANs as you need.
Done: you will have redundancy, which is the main requirement in production.

But if you need more bandwidth, maybe you could do it your way; just be careful. For example, I know LACP is not supported on ESXi 5 and 6 with standard vSwitches; only distributed vSwitches support it:
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001938
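The active/standby split described above can be expressed as explicit failover-order overrides on each port group. A minimal sketch using esxcli on the host; the uplink names (vmnic0/vmnic1), the vSwitch name (vSwitch0), and the port group names are assumptions, adjust them to your environment:

```shell
# Ensure both uplinks are attached to the same standard vSwitch
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Management VMkernel port group: vmnic0 active, vmnic1 standby
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# VM port group: vmnic1 active, vmnic0 standby (the reverse order)
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VM Network" \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

With this layout, each traffic type normally has its own NIC, but either port group fails over to the other uplink if its active NIC goes down.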
 
LVL 118

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE)
Andrew Hancock (VMware vExpert / EE MVE) earned 350 total points
ID: 41848340
Now, when accessing the management interface of the ESXi host, will that be just like accessing a VM on the ESXi host? That is, will the connection still go through the trunk port (the teamed NICs), or should I dedicate one physical NIC just for the management interface of the ESXi host?

It makes common IT sense to reduce single points of failure and have at least two physical network interfaces for your Management Interface.

So a trunk port satisfies this requirement.

It's usual to dedicate a vSwitch and 2 NICs to the Management Interface, and a separate VM vSwitch for VMs.

see here

[Attachment: 10NICs-with-annotations.jpg]
This way you isolate all VM traffic from ESXi Management Interfaces, and can have different Load Balancing Profiles for Management Interface and VMs.
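With four NICs, that dedicated-vSwitch layout can also be built from the ESXi shell. A sketch, assuming vmnic0/vmnic1 are reserved for management and vmnic2/vmnic3 for VM traffic; the vSwitch names are illustrative only:

```shell
# Dedicated management vSwitch with two uplinks
esxcli network vswitch standard add --vswitch-name=vSwitchMgmt
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitchMgmt
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchMgmt

# Separate VM vSwitch with its own two uplinks
esxcli network vswitch standard add --vswitch-name=vSwitchVM
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitchVM
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitchVM
```

Each vSwitch can then carry its own teaming and load balancing policy, keeping management and VM traffic fully separated.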
 
LVL 11

Expert Comment

by:loftyworm
ID: 41848935
I agree with Mr. Hancock; this is the way we have done it. I will only add that we did this in large part not for performance but for security, to truly isolate the management side from public access.

But I do like mrtortur's idea of active/passive on one port group and passive/active on the other. I had not thought of that; learn something new every day :)

My 2¢
 

Author Comment

by:jskfan
ID: 41849233
mrtortur

If you team up the physical NICs and one of them fails, then I believe you still have redundancy, correct?
So, if you put the VM port group as well as the VMkernel port group through the teamed NICs, there should still be redundancy; you just do not get separation of traffic...
Assuming there is a budget restriction and the company cannot afford more than 2 physical NICs per ESXi host (:-)...
The teamed-up NICs will still work OK.
 
LVL 118
ID: 41849242
If you team up the physical NICs and one of them fails, then I believe you still have redundancy, correct?

If you have two NICs and one fails, how do you have any redundancy? 2 - 1 = 1: one NIC left!

If that NIC fails, you no longer have any contact with the Management Network, which may be okay, because you could still access the console via iLO/iDRAC/IPMI etc.

If you have budget restrictions, then you have to lose some resilience and redundancy.

But most production servers come with at least two network interfaces at present, and some with four.

You have to make the best use of them for your organisation.
 

Author Comment

by:jskfan
ID: 41849769
Andrew
If you team up the physical NICs and one of them fails, then I believe you still have redundancy, correct?

The NICs are teamed up; if one fails, the other should still work until you replace the failed one...
If so, why would you go with standby?

I guess what mrtortur was referring to in his comment is using one active and one standby NIC for some traffic, and vice versa for the other type of traffic. This helps to separate traffic types, and at the same time, if one NIC fails, the other NIC will still carry all traffic, unseparated of course.

 
LVL 118
ID: 41849782
The NICs are teamed up; if one fails, the other should still work until you replace the failed one...
If so, why would you go with standby?

Correct. The majority of our clients do not see the benefit, or worth the cost, of keeping a port in standby if you've gone to the trouble of cabling it up, connecting it, and configuring a physical switch port.

You may as well have both Active/Active, to give you more bandwidth and resilience at the same time.

Most of our clients, if not all, do this.

Active/Standby was used many years ago (circa 2000), when physical switch configurations were not as advanced, or you could not trunk ports on your physical switches; again, many options are available for different network environments.

And considering the cost of 10GbE NICs, you would never waste one in standby!
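The Active/Active arrangement described above can be sketched at the vSwitch level with esxcli; vSwitch0 and the vmnic names are assumptions, and the default load balancing ("route based on originating port ID") is left in place:

```shell
# Make both uplinks active members of the team (no standby uplinks)
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1
```

With both uplinks active, outbound VM traffic is spread across the two NICs, and either NIC alone carries everything if the other fails.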
 

Author Comment

by:jskfan
ID: 41849825
By the way, the NIC teaming configuration on ESXi: is it something you configure with the server vendor's teaming software as well as within the vSphere Client (making both NICs active)?

OR

Just within the vSphere Client only (making both NICs active)?
 
LVL 118

Accepted Solution

by:Andrew Hancock (VMware vExpert / EE MVE)
Andrew Hancock (VMware vExpert / EE MVE) earned 350 total points
ID: 41849832
All configuration of the Teaming Policy is done using the client which connects to the host: either the vSphere Client (legacy) or the Web Client.

(At present there is no additional vendor-based teaming software for ESXi; teaming is part of the ESXi OS and its configuration.)

Remember, the Teaming Policy only controls traffic leaving the host (outbound).

and then further configuration on the physical switches (inbound traffic).
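For reference, the resulting host-side (outbound) teaming policy can also be inspected from the ESXi shell; a sketch, assuming the default vSwitch0:

```shell
# Show load balancing mode, failure detection, and active/standby uplink order
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```

This is handy for confirming that what you set in the vSphere Client matches what the host is actually running.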
 

Author Comment

by:jskfan
ID: 41850518
and then further configuration on the physical switches (inbound traffic).

I believe port aggregation (EtherChannel) on the physical switch trunk ports should do the job.
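One caveat worth noting: per VMware's EtherChannel guidance, a static EtherChannel on the physical switch must be paired with "Route based on IP hash" load balancing on the vSwitch side, or traffic can black-hole. A sketch, again assuming vSwitch0:

```shell
# Match a static EtherChannel on the physical switch with IP-hash load balancing
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=iphash
```

Port groups that override the vSwitch teaming policy need the same setting, and LACP (dynamic aggregation) remains a distributed-vSwitch-only feature, as noted earlier in the thread.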
 
LVL 118
ID: 41850585
Yes, that is a function that can do the job.
 

Author Closing Comment

by:jskfan
ID: 41855132
Thank you
 
LVL 118
ID: 41855181
no problems
