Solved

Trunking of NetApp to ESXi hosts via Procurve switch

Posted on 2013-01-17
2,318 Views
Last Modified: 2013-01-21
I would like some help sanity-checking my networking design for a new VMware implementation.  This design incorporates ESXi 5.1, NetApp 3240 filers using NFS, and HP BL460c G7 blades using quad- and dual-port mezzanine cards to give a total of 8 NICs.

One vSwitch on VMware will be configured with 2 NICs for the sole purpose of presenting NFS storage to the hosts.

From what I understand:
*             ESXi 5.1 cannot do LACP (without an Enterprise Plus license and a distributed switch).
*             The NetApp can trunk using LACP, Multimode VIFs or Single mode.

The switch we are using is a single 5406zl chassis with four 24-port modules.  I'm sure you've noticed that we're rather exposed in the event of a chassis failure, but this is a risk we are prepared to bear.  It does have the advantage of making trunking easier, though, as everything is going through one switch.

Now, my question is:
1.            Do we configure the NICs on the VM host side as a standard trunk (using 'Route based on IP hash')?
2.            Should we configure these switch ports as a standard trunk?
3.            Do we configure the vif on the NetApp side as LACP or a multimode trunk (bearing in mind the filer can do LACP but the VMHosts can't)?

Question 3 is probably the important one here.

Also,

Help appreciated.
Richard
Question by:incesupport
12 Comments
 

Assisted Solution

by:IanTh
IanTh earned 50 total points
ID: 38788032
 

Author Comment

by:incesupport
ID: 38788196
Yes, I saw this. It's unfortunate but we are not licensed for VDS.
If I were to summarise my question, it would be to ask whether the accepted solution is to use LACP on the NetApp end and a standard (non-LACP) trunk on the VMware end - and configure the switch accordingly.

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE)
Andrew Hancock (VMware vExpert / EE MVE) earned 50 total points
ID: 38788514
I would recommend standard trunks and non-LACP - which is HP's 'special' flavour of trunking!
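Off the top of my head, on a 5406zl it would look something like the following - completely untested here, and the port numbers, trunk names and VLAN are just placeholders for your own:

      (ESXi-facing ports: HP static trunk, to match 'Route based on IP hash' on the vSwitch)
      trunk A1-A2 trk1 trunk

      (NetApp-facing ports: LACP, to match an 'lacp' ifgrp on the filer)
      trunk A3-A4 trk2 lacp

      (put both trunks in your NFS VLAN)
      vlan 100 tagged Trk1,Trk2

You can then sanity-check it with 'show trunks' and 'show lacp'.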

Expert Comment

by:FilipZahradnik
ID: 38789514
Port-channels exist between the switch and a connected device. Therefore you can have a non-LACP port-channel on the ESXi side and an LACP port-channel on the NetApp side.

Author Comment

by:incesupport
ID: 38792868
Thank you FilipZahradnik, that's exactly what I wanted to hear.

Also, thank you to hanccocka; I take your point about HP switches being a little bit 'special', especially as I've been raised on Cisco.

I'll give this thread a few more days then dish out some points.

Expert Comment

by:IanTh
ID: 38792910
I thought vDS just needed Enterprise Plus.

Author Comment

by:incesupport
ID: 38792982
You are correct, I worded my initial question badly.  We don't have E+ licenses, so this isn't available to us.

Expert Comment

by:FilipZahradnik
ID: 38795790
Just to expand my answer:

We use this setup (non-LACP port-channel on ESX side, LACP on NetApp side) all the time. When using NFS, 2 NICs and IP hash load balancing is all you need on the ESX side (the setup is a bit different for iSCSI). I can't help with HP configuration as I'm not familiar with HP switches, but on Cisco we use 'etherchannel mode on' on the ports connected to ESX and 'etherchannel mode lacp' on the ports connected to NetApp. On the NetApp, use ifgrp type lacp and loadbalancing policy ip in the ifgrp command.
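As a rough sketch (this assumes 7-Mode, and the names, ports and addresses below - ifgrp0, e0a/e0b, vSwitch1 and the IPs - are just examples, not your environment):

      (NetApp: LACP ifgrp with IP load balancing)
      ifgrp create lacp ifgrp0 -b ip e0a e0b
      ifconfig ifgrp0 192.168.100.21 netmask 255.255.255.0 up
      (add the same lines to /etc/rc so they survive a reboot)

      (ESXi: IP hash teaming on the NFS vSwitch - this is the static, non-LACP side)
      esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash

You can do the same thing in the vSphere Client under the vSwitch's NIC Teaming tab by setting load balancing to 'Route based on IP hash'.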

Author Comment

by:incesupport
ID: 38797331
Excellent, the perfect answer.  Your setup sounds remarkably similar to ours (with the exception of the networking side).  Did you have any other challenges or issues?

Depending on what you read, flow control isn't necessary on modern switches, so it can be switched off.

Are you using jumbo frames?

Accepted Solution

by:FilipZahradnik
FilipZahradnik earned 400 total points
ID: 38797581
Re challenges:
One inherent issue is that NFS at this point does not do multipath I/O. So at any given time, only one ESX NIC from each server will be used to connect to each NetApp controller.

Re Flow Control:
We leave flow control disabled in most circumstances.
TR-3749 NetApp Storage Best Practices for VMware vSphere (http://www.netapp.com/us/media/tr-3749.pdf, page 25) says:
"For modern network equipment, especially 10GbE equipment, NetApp recommends turning off flow control and allowing congestion management to be performed higher in the network stack."

Re Jumbo Frames:
We enable jumbo frames as a matter of policy, unless there are special circumstances. Make sure you test jumbo frames end to end by pinging the NFS interface on NetApp from each ESX host using 'vmkping -d -s 8900 <netapp_nfs_ip>'.
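(For what it's worth, with a 9000 byte MTU the largest ping payload that fits unfragmented is 9000 - 20 bytes IP header - 8 bytes ICMP header = 8972, so -s 8900 just leaves a little headroom. If the vmkping fails, something in the path is still at 1500.)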

Author Comment

by:incesupport
ID: 38798004
Challenges:
Would this be the same at the filer end too?  Meaning only one NIC on the filer will be serving NFS?  Or would we expect, as an example, VM hosts 1, 2 and 3 to use one NIC to filer NIC 1, and hosts 4, 5 and 6 to use the other filer NIC?  To clarify, if the filer has 2 NICs serving NFS, will it be serving a total of 2Gb to the VM estate, or 1Gb?

Flow Control:
Perfect, exactly what I thought.  Would you say that this holds true for good 1Gb adapters connected to a decent switch too?

Jumbo Frames:
Excellent.  Got caught out by this in the proof of concept.  After swapping out every cable, trying different switch ports etc., it suddenly hit me that I'd forgotten to set jumbo frames on the VMkernel port - doh.

Expert Comment

by:FilipZahradnik
ID: 38802777
Re challenges:
You are right - with the NetApp controller talking to multiple ESX hosts, IP hash load balancing should distribute those connections across the controller NICs. Also, each ESX host should use a different NIC for each NetApp controller, if you have more than one.
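As I understand it, with IP hash the standard vSwitch picks the uplink roughly as (last octet of source IP) XOR (last octet of destination IP), modulo the number of uplinks. With made-up addresses and two uplinks:

      vmk 192.168.100.11 -> filer 192.168.100.21:  11 XOR 21 = 30, 30 mod 2 = 0  -> first uplink
      vmk 192.168.100.11 -> filer 192.168.100.22:  11 XOR 22 = 29, 29 mod 2 = 1  -> second uplink

which is why giving the filer a second NFS IP/alias can help spread traffic across both ESX NICs.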

Re flow control:
I routinely leave flow control disabled on 1Gb equipment as well. As long as the switches are recent, you should be fine.

Re jumbo frames:
I always follow this KB when enabling jumbo frames on ESX:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1038827
On the NetApp and on the switches, jumbo frames are pretty straightforward.
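For ESXi 5.x the relevant bits usually boil down to something like this (vSwitch1, vmk1 and ifgrp0 are examples again, and this assumes a standard vSwitch):

      (MTU 9000 on the vSwitch carrying NFS)
      esxcli network vswitch standard set -v vSwitch1 -m 9000

      (MTU 9000 on the NFS VMkernel port)
      esxcli network ip interface set -i vmk1 -m 9000

      (NetApp 7-Mode: jumbo frames on the ifgrp)
      ifconfig ifgrp0 mtusize 9000

Don't forget jumbo frames on the switch ports/VLAN in the path as well, and then re-run the vmkping test end to end.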