Solved

ESX Network architecture: virtual and physical network adapters

Posted on 2009-04-08
1,430 Views
Last Modified: 2012-05-06
Dear community

I'm planning an ESX environment and I have a question about the architecture of virtual and physical NICs.

There are different load-balancing options available (port-based, MAC-based, hash-based).

1.) Port based
Relation: 1 virtual adapter : 1 physical adapter
This relation does not change until a failover occurs.
This option does not balance the amount of traffic, because each virtual machine can access only one physical network adapter at a given time.
Concept: use different physical switches to eliminate a single point of failure.
This vSwitch policy is best used when the number of virtual network adapters is greater than the number of physical adapters.


2.) Source MAC based
Concept: same as port-based, but the physical adapter is chosen based on the source MAC address.


3.) IP hash based
Addresses the limitations of the port- and MAC-based load balancing. A virtual machine with one virtual network adapter can access multiple physical network adapters. This allows a single virtual machine to communicate over different physical adapters when talking to different destinations.
Limitation: the physical NICs must be connected to the same physical switch.
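
For reference, here is roughly how I understand such a vSwitch is built from the ESX service console; the vmnic numbers and port group name are just examples, and the load-balancing policy itself is normally chosen per vSwitch or port group on the NIC Teaming tab in the VI Client:

# create a vSwitch with two teamed uplinks and a VM port group
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -l        # verify uplinks and port groups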


===========

Now my question:
In the backend, I have two redundant physical switches.
What's the best way to achieve the highest performance (e.g. backup with Backup Exec agents inside the virtual machines, vRanger backup through the network, ...) and load balancing for my virtual machines?

Is it possible to create two virtual switches and give each virtual machine two virtual NICs, each connected to one virtual switch?

Are there any recommendations for achieving good performance?
Any suggestions on choosing a load-balancing option?

Thanks for your comments,
best regards
A.F.


Question by:olmu
9 Comments
 
LVL 1

Accepted Solution

by:
jnicpon earned 200 total points
There are several questions being asked here... First, the more NICs per host you can afford to implement, the better. Try to adhere to best practices by reserving dedicated NICs for storage operations like NFS and iSCSI, for vMotion, and for administrative traffic. As far as NIC teaming goes, your third option is the best and is very convenient when you use VLAN tagging for virtual machine placement. You'll benefit from improved redundancy as well as load balancing.
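
As a very rough sketch of that segregation from the service console (the vmnic assignments, port group names, and IP are placeholders, not a recommendation for your exact hardware):

# one vSwitch per traffic type, each with its own uplinks
esxcfg-vswitch -a vSwitch1                   # storage / vMotion traffic
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VMkernel" vSwitch1
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 "VMkernel"
esxcfg-vswitch -a vSwitch2                   # virtual machine traffic
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VM Network" vSwitch2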

As far as your question about backup traffic, it depends largely on what you are trying to back up (bare virtual metal, individual files, etc.) and its relative size. I would recommend configuring vRanger to leverage VMware Consolidated Backup (VCB) and setting up a physical backup proxy if cost permits. This will allow you to spare your NICs and keep the colossal backup traffic within your SAN. You can then target the backup proxy with vRanger or Backup Exec.

Hope this helps...
 
LVL 21

Assisted Solution

by:za_mkh
za_mkh earned 220 total points
What switching environment do you have? Are your switches stacked? (I'd guess they are.)
We use Nortel switches; with MLT links between our ESX servers and our switch stack, we connect 4-6 NICs to different physical switches for redundancy, using IP hash. It works well, but that's down to the Nortel technology. Cisco has a similar feature called EtherChannel, which I guess is the same link-aggregation technology.
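
If the HP side turns out similar, the Cisco equivalent would look something like this (from memory, so treat it as a sketch; ESX IP-hash teaming wants a static channel, not PAgP/LACP, and the port numbers are just examples):

Switch(config)# interface range GigabitEthernet0/1 - 2
Switch(config-if-range)# channel-group 1 mode on
Switch(config-if-range)# exit
Switch(config)# port-channel load-balance src-dst-ip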
 

Author Comment

by:olmu
Dear jnicpon
Dear za_mkh
Thanks for answering.


I'll give you some more information.
For the first phase, we plan the following architecture:
- 2 ESX hosts, each with 12 GHz, 16 GB RAM, 6 physical NICs, and local hard drives for all images (RAID 5 with 8 disks / 15k)
- 1 vCenter server
- Backup solution = vRanger (backup traffic through the network), installed on the vCenter server
- 2 stacked backbone HP switches (connected with 10 Gb)
- We will migrate around 6 physical machines to the virtual environment at first
- We will not use any VLANs; everything is in the same LAN segment

(We will begin with a foundation solution and expand to a SAN environment next year;
there is no budget for a fibre/iSCSI SAN at this time.)


Today we have HP servers in use, load balanced with the HP teaming management component.

NICs:
- I will create at least 2 service console connections:
  one with a dedicated physical NIC, and one on a vSwitch (where the VMs will run) as a fallback if the other physical NIC fails.



I agree with your suggestion to use VCB with a SAN and vRanger... sounds fine... but you know the budget :-)


So what do you think?
Which NIC configuration would you prefer, and why?

Thanks guys, great!

Best regards
A.F.





 
LVL 21

Assisted Solution

by:za_mkh
za_mkh earned 220 total points
In your case, I would configure it roughly as follows:
vmnic0/1: SC port
vmnic2-4: VM guests
That gives you the redundancy you will need.
I hope these links help you with the configuration options for your ProCurve switches:
http://blog.scottlowe.org/2008/09/05/vmware-esx-nic-teaming-and-vlan-trunking-with-hp-procurve/
http://jeremywaldrop.wordpress.com/2008/10/26/vlan-trunking-with-vmware-esx-and-hp-procurve-switches/
http://communities.vmware.com/thread/115663
http://cdn.procurve.com/training/Manuals/3500-5400-6200-8200-MCG-Jan08-12-PortTrunk.pdf
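
From those docs, a static (non-LACP) trunk on the ProCurve CLI - which is what the IP-hash policy expects - should look roughly like this (the port numbers and trunk name are only an example, so please verify against the manual above):

ProCurve(config)# trunk 1-2 trk1 trunk
ProCurve(config)# show trunks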
 
 

Author Comment

by:olmu
Dear za_mkh
Thanks for your answer and your comments.

I agree with your config: 0/1 for SC and 2-4 for the guests.
Do you suggest creating just 2 vSwitches for the two SC ports and one vSwitch with 4 NICs for the guests?
Or should it be split up further?

I've checked your links. In this one:

http://blog.scottlowe.org/2008/09/05/vmware-esx-nic-teaming-and-vlan-trunking-with-hp-procurve/

it says: "configure the VMware ESX vSwitchs load balancing policy to Route based on ip hash."

Hash-based means all the physical NICs must be connected to the same physical switch.
So where does the redundancy across physical switches come from here?
Is it possible to connect two of the guest NICs to one HP switch and two to the other, and trunk them?
Or what is the procedure here?

I don't have much experience with the HP switching concept yet; I hope you can clarify my questions.

Thank you very much,
best regards

 
LVL 7

Assisted Solution

by:kumarnirmal
kumarnirmal earned 80 total points
Another alternative for Service Console redundancy is to configure 2 vSwitches, each with its own physical NIC, instead of teaming 2 physical NICs on a single vSwitch, which can act as a single point of failure if the Service Console subnet has network connectivity issues.
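
A minimal sketch of that layout from the service console, assuming vmnic0/vmnic1 and placeholder names and IP addresses (adjust to your environment):

# two separate vSwitches, one physical NIC and one SC interface each
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.10 -n 255.255.255.0
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "Service Console 2" vSwitch1
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 192.168.1.11 -n 255.255.255.0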
 

Author Comment

by:olmu
Dear kumarnirmal,
Thanks, I agree with your solution so far:
2 vSwitches, each with one physical NIC, each connected to a different physical switch. That way I have full redundancy.
What about integrating one service console into the common vSwitch that the guests use?
That means: 1 vSwitch with 1 dedicated physical NIC for service console 1, and 1 vSwitch with the remaining physical NICs (5 of them) plus service console 2?
That way I don't have to waste a physical NIC just for service console 2 and still have redundancy. What do you think?
PS: in the backend, I have two HP ProCurve 5406z switches.

Thanks, best regards
 

Author Comment

by:olmu
Dear community
I've tried to design a redundant NIC architecture for ESX.
What do you think about it? Would it work in practice?

I want to eliminate single points of failure: a physical NIC, a physical switch, or a service console.

==========
Load balancing:
- IP hash based (one virtual NIC will use multiple physical NICs; source/destination load balancing)
- HP ProCurve switch trunk group, trunked over the 4 ports (across the 2 switches)
  Trunk type = Trunk, LACP = disabled, flow control = disabled, mode = auto

Please take a look at the picture.

Open questions:
- Is it possible to trunk across two switches? In my scenario, I would have to trunk 4 ports.

Thanks for your guidance,
Best regards
 

Pics1.jpg
 

Author Comment

by:olmu
Does anyone have an idea?
