Solved

ESX Network architecture: virtual and physical network adapters

Posted on 2009-04-08
Last Modified: 2012-05-06
Dear community

I'm planning an ESX environment and I have a question about the architecture of the virtual and physical NICs.

There are different load-balancing options available (port-based, MAC-based, hash-based):

1.) Port based
Relation: 1 virtual adapter : 1 physical adapter
This relation will not change until a failover occurs.
This option does not balance traffic by volume, because each virtual machine can access only one physical network adapter at a given time.
Concept: Use different physical switches to eliminate a single point of failure.
This vSwitch policy is best used when the number of virtual network adapters is greater than the number of physical adapters.


2.) Source MAC based
Concept: same as port-based, but keyed on the source MAC address.


3.) IP hash based
Addresses the limitation of the port- and MAC-based load balancing. A virtual machine with one virtual network adapter can access multiple physical network adapters. This allows a single virtual machine to communicate over different physical adapters when communicating with different destinations (see the sketch below).
Limitation: the physical NICs must be on the same physical switch.
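For reference, here is how I understand these policies map to PowerCLI. This is a rough sketch only; the server, host, and vSwitch names are placeholders, and the cmdlets should be verified against your PowerCLI version:

# Rough sketch only; placeholder names throughout. Requires VMware PowerCLI.
Connect-VIServer -Server "vcenter.example.local"
$vs = Get-VirtualSwitch -VMHost (Get-VMHost "esx01") -Name "vSwitch1"
# Show the current load-balancing / teaming policy of the vSwitch:
Get-NicTeamingPolicy -VirtualSwitch $vs
# Switch to "route based on IP hash" (option 3 above):
Get-NicTeamingPolicy -VirtualSwitch $vs |
    Set-NicTeamingPolicy -LoadBalancingPolicy LoadBalanceIP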


===========

Now my question:
In the backend, I have two redundant physical switches.
What's the best way to achieve the highest performance (e.g. backups with Backup Exec agents inside the virtual machines, vRanger backup through the network, ...) and load balancing for my virtual machines?

Is there a way to create two virtual switches and give each virtual machine two virtual NICs, each connected to one virtual switch?

Are there any recommendations for achieving good performance?
Any suggestions on choosing a load-balancing option?

Thanks for your comments,
best regards
A.F.


Question by:olmu

Accepted Solution

by: jnicpon (earned 200 total points)
ID: 24098615
There are several questions being asked here... First, the more NICs per host you can afford to implement, the better. Try to adhere to best practices by reserving dedicated NICs for storage operations like NFS and iSCSI, for vMotion, and for administrative traffic. As far as NIC teaming goes, your third option is the best, and it is very convenient when you use VLAN tagging for virtual machine placement. You'll benefit from improved redundancy as well as load balancing.
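To make that concrete, here is a hedged PowerCLI sketch of what I mean; the host name, vmnic numbers, port group names, and VLAN IDs are placeholders, not prescriptions:

# Hedged sketch; host name, vmnic numbers, port group names and VLAN IDs
# are placeholders.
$esx = Get-VMHost "esx01"
# One vSwitch for VM traffic, with two teamed uplinks:
$vs = New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic vmnic2,vmnic3
# VLAN-tagged port groups for virtual machine placement:
New-VirtualPortGroup -VirtualSwitch $vs -Name "VM-Prod" -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vs -Name "VM-Backup" -VLanId 20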

As far as your question related to backup traffic, it depends largely on what you are trying to back up (bare virtual metal, individual files, etc.) and its relative size. I would recommend looking into configuring vRanger to leverage VMware Consolidated Backup (VCB) and setting up a physical backup proxy if cost permits. This will allow you to spare your NICs and keep the colossal backup traffic within your SAN. Then you can target the backup proxy with vRanger or Backup Exec.

Hope this helps...

Assisted Solution

by: za_mkh (earned 220 total points)
ID: 24099570
What switching environment do you have? Are your switches stacked? (I guess they are.)
We use Nortel switches, and with MLT links between our ESX servers and our switch stack we connect 4-6 NICs to different physical switches for redundancy using IP hash. It works well, but that's thanks to the Nortel technology. Cisco has a similar feature called EtherChannel, which I guess is the same link aggregation technology.

Author Comment

by:olmu
ID: 24100133
Dear jnicpon
Dear za_mkh
Thanks for answering.


I will give you some more information.
In the first phase, we plan the following architecture:
- 2 ESX hosts, each with 12 GHz CPU, 16 GB RAM, 6 physical NICs, and local hard drives for all images (RAID 5 with 8 x 15k disks)
- 1 vCenter server
- Backup solution = vRanger (backup traffic through the network), installed on the vCenter server
- 2 stacked HP backbone switches (connected with 10 Gb)
- We will first migrate around 6 physical machines to the virtual environment
- We will not use any VLANs; everything will be in the same LAN segment

(We will begin with a foundation solution and expand to a SAN environment next year;
there is no budget for a Fibre Channel or iSCSI SAN at this time.)


Today we have HP servers in use, load balanced with the HP teaming management component.

NICs:
- I will create at least 2 service console connections:
  one with a dedicated physical NIC, and one in the vSwitch where the VMs will run, as a backup if the dedicated physical NIC fails (see the sketch below).
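Roughly, I picture it like this (a PowerCLI sketch; all names, vmnic numbers, and IPs are placeholders; on classic ESX the console interfaces can also be created with esxcfg-vswif on the service console):

# Sketch only; all names, vmnic numbers and IPs are placeholders.
$esx = Get-VMHost "esx01"
# Service console 1 on its own vSwitch with a dedicated physical NIC:
$sc = New-VirtualSwitch -VMHost $esx -Name "vSwitch0" -Nic vmnic0
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $sc `
    -PortGroup "Service Console" -ConsoleNic `
    -IP 10.0.0.10 -SubnetMask 255.255.255.0
# Service console 2 as a fallback on the vSwitch where the VMs run:
$vmvs = Get-VirtualSwitch -VMHost $esx -Name "vSwitch1"
New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vmvs `
    -PortGroup "Service Console 2" -ConsoleNic `
    -IP 10.0.0.11 -SubnetMask 255.255.255.0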



I agree with your suggestion to use VCB with a SAN and vRanger... sounds fine... but you know the budget :-)


So what do you think?
What NIC configuration would you prefer, and why?

Thanks guys,

Best regards
A.F.






Assisted Solution

by: za_mkh (earned 220 total points)
ID: 24101866
In your case, I would configure it roughly as follows:
vmnic0/1: SC port
vmnic2-4: VM guests
That gives you the redundancy you will need (see the sketch below).
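In PowerCLI, that split might look roughly like this (a sketch only; "esx01" and the vSwitch names are placeholders):

# Rough sketch of the split above; verify against your environment.
$esx = Get-VMHost "esx01"
New-VirtualSwitch -VMHost $esx -Name "vSwitch0" -Nic vmnic0,vmnic1          # SC port
New-VirtualSwitch -VMHost $esx -Name "vSwitch1" -Nic vmnic2,vmnic3,vmnic4   # VM guests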
I hope these links help you with the configuration options for your ProCurve switches:
http://blog.scottlowe.org/2008/09/05/vmware-esx-nic-teaming-and-vlan-trunking-with-hp-procurve/
http://jeremywaldrop.wordpress.com/2008/10/26/vlan-trunking-with-vmware-esx-and-hp-procurve-switches/
http://communities.vmware.com/thread/115663
http://cdn.procurve.com/training/Manuals/3500-5400-6200-8200-MCG-Jan08-12-PortTrunk.pdf
 
 

Author Comment

by:olmu
ID: 24102126
Dear za_mkh
Thanks for your answer and your comment.

I agree with your config to create 0/1 for the SC and 2-4 for the guests.
Do you suggest creating just 2 vSwitches for the two SC ports and one vSwitch with 4 NICs for the guests?
Or something more split up?

I've checked your links.

In this one:
http://blog.scottlowe.org/2008/09/05/vmware-esx-nic-teaming-and-vlan-trunking-with-hp-procurve/

"configure the VMware ESX vSwitchs load balancing policy to Route based on ip hash."

Hash based means that all physical NICs must be connected to the same physical switch.
So where do you get redundancy across physical switches here?
Is there a way to connect two of the guest NICs to one HP switch and two to the other, and trunk them?
Or what's the procedure here?

I don't have much experience with the HP switching concept yet; I hope you can clarify my questions.

Thank you very much,
best regards


Assisted Solution

by: kumarnirmal (earned 80 total points)
ID: 24103273
Another alternative for Service Console redundancy is to configure 2 vSwitches with one physical NIC connected to each, instead of teaming 2 physical NICs on a single vSwitch, which can act as a single point of failure if the Service Console subnet has network connectivity issues (see the sketch below).
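A hedged PowerCLI sketch of that idea (names, vmnic numbers, and IPs are placeholders):

# Sketch only; names, vmnic numbers and IPs are placeholders.
$esx = Get-VMHost "esx01"
foreach ($i in 0, 1) {
    # One vSwitch per physical NIC, each with its own Service Console port:
    $vs = New-VirtualSwitch -VMHost $esx -Name "vSwitchSC$i" -Nic "vmnic$i"
    New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch $vs `
        -PortGroup "Service Console $i" -ConsoleNic `
        -IP "10.0.0.1$i" -SubnetMask 255.255.255.0
}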

Author Comment

by:olmu
ID: 24104350
Dear kumarnirmal,
Thanks, I agree with your solution so far.
2 vSwitches, each with one physical NIC, each connected to a different physical switch. That gives me full redundancy.
What about integrating one service console into the common vSwitch that the guests use?
That would mean: 1 vSwitch with 1 dedicated physical NIC for service console 1, and 1 vSwitch with the rest of the physical NICs (5 of them) plus service console 2.
That way I wouldn't have to waste a physical NIC just for service console 2 and would still have redundancy. What do you think?
PS: in the backend, I have two HP ProCurve 5406zl switches.
 
thanks, best regards

Author Comment

by:olmu
ID: 24105809
Dear community
I've tried to create a redundant NIC architecture for ESX.
What do you think about it? Is it operational? Would it work?

I want to eliminate the single points of failure: a physical NIC, a physical switch, or a service console.

==========
Load balancing:
- IP hash based (one virtual NIC will use multiple physical NICs; source/destination load balancing)
- HP ProCurve trunk group, trunked over the 4 ports (across the 2 switches);
  trunk type = Trunk, LACP = disabled, flow control = disabled, mode = auto (see the ProCurve sketch below)
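For illustration, my trunk settings above would look something like this on the ProCurve CLI (a sketch based on Scott Lowe's post linked above; the port numbers are placeholders, and whether such a trunk can span two separate switches is exactly my open question below):

# Sketch only: a static (non-LACP) trunk, matching "trunk type = Trunk, LACP = disabled":
ProCurve(config)# trunk 21-24 trk1 trunk
# Verify the trunk and its member ports:
ProCurve(config)# show trunks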

Please take a look at the attached picture.
 
Open questions
- Is it possible to trunk across two switches? In my scenario, I would have to trunk 4 ports.
 
Thanks for your guidance,
Best regards
 

[Attachment: Pics1.jpg]

Author Comment

by:olmu
ID: 24114385
Does anyone have an idea?