Solved

ESX Servers' Uplink Ports

Posted on 2014-01-12
650 Views
Last Modified: 2014-01-14
Most ESX servers that I have seen so far have only 2 uplink ports (physical NICs) that connect to a physical switch. Those 2 uplinks carry traffic for the VMkernel, VM port groups, and maybe more…


I know that there are ESX servers with many uplinks; that way admins can dedicate each port to certain traffic (VMkernel, VM port group, …) and can send specific VLANs through specific uplink ports.

However, I have not seen any ESX host with more than 2 uplink ports myself. Can someone give me an example of an ESX server with multiple uplink ports? A link with pictures would be much more helpful.

Thank you
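To make "dedicate specific VLANs to specific uplink ports" concrete: on a standard vSwitch, a port group can override the NIC teaming order so that its traffic prefers one uplink. The pyVmomi sketch below is only an illustration of that idea, not a configuration from this thread; the vCenter/host names, the port group "VM-VLAN20", and vmnic2/vmnic3 are all hypothetical placeholders.

# Sketch: pin one port group's traffic to a specific uplink by overriding
# the vSwitch NIC teaming order. All names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the ESXi host by name (placeholder name).
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
net_sys = host.configManager.networkSystem

# Port group on VLAN 20 whose traffic prefers vmnic2, with vmnic3 as standby.
teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic2"], standbyNic=["vmnic3"]))
pg_spec = vim.host.PortGroup.Specification(
    name="VM-VLAN20", vlanId=20, vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy(nicTeaming=teaming))
net_sys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)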
Question by:jskfan
8 Comments
 
LVL 33

Assisted Solution

by:Busbar
Busbar earned 166 total points
ID: 39774598
Most of my servers have more than 2 uplinks; this is required to separate heavy traffic, such as backups, from other traffic, such as production.

I have also seen it done with mixed 10G/1G NICs, where the 1G uplinks are dedicated to management/monitoring while the 10G uplinks carry production traffic.
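To illustrate that 1G-for-management / 10G-for-production split, here is a small pyVmomi sketch (an added illustration, not Busbar's actual setup) that lists each host's physical NICs grouped by link speed; the vCenter name and credentials are placeholders.

# Sketch: list each host's uplinks (vmnics) grouped by link speed, to see
# which are candidates for management (1 GbE) vs production (10 GbE).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    print(host.name)
    for pnic in host.config.network.pnic:
        # linkSpeed is None when the link is down.
        speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else 0
        role = "production (10G+)" if speed >= 10000 else "management/monitoring (1G)"
        print("  %-8s %6d Mb/s  -> candidate for %s" % (pnic.device, speed, role))

Disconnect(si)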
 
LVL 118

Accepted Solution

by:Andrew Hancock (VMware vExpert / EE MVE)
Andrew Hancock (VMware vExpert / EE MVE) earned 167 total points
ID: 39774606
In the following screenshot, 10 network interface uplink ports provide Management, Virtual Machine Network, VMkernel (for iSCSI) and FT.

see here:-

[attached screenshot: Multiple Ports]
It's recommended to split your services across multiple switches and multiple ports.

Also see this article, which describes the recommended way and best practice to set up iSCSI storage:

HOW TO: Add an iSCSI Software Adaptor and Create an iSCSI Multipath Network in VMware vSphere Hypervisor ESXi 5.0
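Not Andrew's exact configuration, but a minimal pyVmomi sketch of the idea of splitting services across separate vSwitches and uplink pairs: it creates a hypothetical "vSwitch-iSCSI" backed by two dedicated uplinks plus one port group per path. All names and NIC numbers are placeholders; the linked article covers the full, supported iSCSI multipath procedure.

# Sketch: "separate switch + dedicated uplinks per service" - a new standard
# vSwitch for iSCSI bonded to two dedicated vmnics, with one port group per
# path. Names/NIC numbers are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
net_sys = host.configManager.networkSystem

# New vSwitch dedicated to iSCSI, bonded to two dedicated uplinks.
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic4", "vmnic5"]))
net_sys.AddVirtualSwitch(vswitchName="vSwitch-iSCSI", spec=vss_spec)

# One port group per iSCSI path (each would later be bound to a VMkernel NIC).
for pg_name in ("iSCSI-A", "iSCSI-B"):
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=pg_name, vlanId=0, vswitchName="vSwitch-iSCSI",
        policy=vim.host.NetworkPolicy()))

Disconnect(si)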
 
LVL 12

Expert Comment

by:Vaseem Mohammed
ID: 39774626
Perfect screenshot!

@Andrew Hancock
Salute to your expertise.
 
LVL 55

Expert Comment

by:andyalder
ID: 39774639
The only production VMware servers I've seen with just 2 physical NIC ports have been blades using Flex Connect or something similar that splits the physical ports into multiple logical ports, so in effect they really had 8 NICs as far as VMware was concerned. The highest number of NIC ports I've seen used was 17, but that installation had a strange security model that didn't allow VLANs.

Author Comment

by:jskfan
ID: 39774655
Do Dell, HP, or other hardware vendors have ESX server hardware with multiple physical network adapters?

A standard vSwitch is a virtual switch created on each ESX host, and it uses that host's available physical NICs as uplink ports to the network switch…
A distributed vSwitch is a virtual switch created at the data-center level, and it can use the physical adapters of all ESX hosts as uplink ports to the network switch… however, that still does not provide enough physical NICs, seeing that each ESX host has its own VM port groups, VMkernels, etc.


What I was looking for is whether there is an ESX server with multiple physical adapters…
I believe there should be a balance between memory/CPU capacity and network speed…
It will not be helpful to have an ESX host with a huge amount of RAM/CPU but only a couple of NICs at 1 Gb or 10 Gb each…
Also, we may need to dedicate an uplink port to specific VLANs.
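To make that RAM/CPU versus network-bandwidth balance concrete, here is a small pyVmomi sketch (an added illustration, not part of the original comment) that prints each host's memory, core count, and aggregate uplink bandwidth, which helps spot hosts with lots of compute but few or slow NICs; the vCenter name and credentials are placeholders.

# Sketch: report each host's RAM / CPU cores alongside its aggregate uplink
# bandwidth. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    hw = host.hardware
    ram_gb = hw.memorySize / (1024 ** 3)
    cores = hw.cpuInfo.numCpuCores
    nics = host.config.network.pnic
    total_gbps = sum((p.linkSpeed.speedMb if p.linkSpeed else 0) for p in nics) / 1000.0
    print("%s: %.0f GB RAM, %d cores, %d uplinks, %.0f Gb/s total network"
          % (host.name, ram_gb, cores, len(nics), total_gbps))

Disconnect(si)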
 
LVL 118

Expert Comment

by:Andrew Hancock (VMware vExpert / EE MVE)
ID: 39774683
Do Dell, HP, or other hardware vendors have ESX server hardware with multiple physical network adapters?
Yes, we just purchased HP and Dell servers (DL360 and R720) with 12 x 1 GbE NICs. We could have added more if we wished.

We could have ordered 10 GbE NICs if we wanted.


If using 1 GbE NICs, as per VMware recommendations you would need more than 2 (if using vMotion and network storage)!

If using 10 GbE NICs, you could use just 2 and then use VLANs to separate the traffic.

If you are only using two 1 GbE NICs in your ESXi servers, it's a poor design (for all the required traffic, even with VLANs)!
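As a sketch of that "2 x 10 GbE plus VLANs" layout (an added illustration; the host name, VLAN IDs and vmnic numbers are hypothetical, not Andrew's configuration), the following pyVmomi snippet creates one vSwitch backed by two 10 GbE uplinks and adds VLAN-tagged port groups to separate management, vMotion, storage, and VM traffic.

# Sketch of the 2 x 10 GbE design: one vSwitch bonded to two 10 GbE uplinks,
# with traffic types separated by VLAN-tagged port groups.
# Host name, VLAN IDs and vmnic numbers are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
net_sys = host.configManager.networkSystem

# vSwitch bonded to the two 10 GbE uplinks.
net_sys.AddVirtualSwitch(
    vswitchName="vSwitch-10G",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=256,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic6", "vmnic7"])))

# One VLAN per traffic type, all sharing the same pair of uplinks.
vlans = {"Management": 10, "vMotion": 20, "Storage": 30, "VM-Network": 40}
for name, vlan_id in vlans.items():
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=vlan_id, vswitchName="vSwitch-10G",
        policy=vim.host.NetworkPolicy()))

Disconnect(si)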
 
LVL 55

Assisted Solution

by:andyalder
andyalder earned 167 total points
ID: 39774720
You get 4 x 1 Gb NIC ports on the motherboard with most servers nowadays, since the old 2-port ones were never enough. Just look at the network configuration of some of the VMware benchmark machines: 4 x 1 Gb + 2 x 10 Gb + 2 x 16 Gb for SAN on a 2U two-processor box. Unless you're really space-constrained, I'd go for 2U rather than 1U boxes; otherwise you may regret the lack of PCI slots for expansion.
 

Author Closing Comment

by:jskfan
ID: 39781185
Thank you Guys!