
Trunking of NetApp to ESXi hosts via Procurve switch

I would like some help sanity-checking my networking design for a new VMware implementation. This design incorporates ESXi 5.1, NetApp 3240 filers using NFS, and HP BL460c G7 blades using quad- and dual-port mezzanine cards to give a total of 8 NICs.

One vSwitch on each VMware host will be configured with 2 NICs dedicated solely to NFS traffic.

From what I understand:
* ESXi 5.1 cannot do LACP (except with an Enterprise Plus license and a distributed switch).
* The NetApp can trunk using LACP, multimode VIFs, or single-mode VIFs.

The switch we are using is a single 5406zl chassis with four 24-port modules. I'm sure you've noticed that we're rather exposed in the event of a chassis failure, but this is a risk we are prepared to bear. It does have the advantage of making trunking easier, though, as everything goes through one switch.

Now, my question is:
1. Do we configure the NICs on the VM host side as a standard trunk (using 'Route based on IP hash')?
2. Should we configure these switch ports as a standard trunk?
3. Do we configure the vif on the NetApp side as LACP or as a multimode trunk (bearing in mind the filer can do LACP but the VM hosts can't)?

Question 3 is probably the important one here.

Help appreciated.
Richard
Asked by incesupport
3 Solutions
 
IanTh Commented:
 
incesupport (Author) Commented:
Yes, I saw this. It's unfortunate, but we are not licensed for vDS.
If I were to summarise my question, it would be to ask whether the accepted solution is to use LACP on the NetApp end and a standard (non-LACP) trunk on the VMware end, and to configure the switch accordingly.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
I would recommend standard trunks and non-LACP, which is HP's own special flavour!
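A minimal ProCurve sketch of that split, assuming ports A1-A2 face an ESXi host, ports A3-A4 face a NetApp controller, and the storage VLAN is 100 (all three values are hypothetical):

    ; static (non-protocol) trunk towards the ESXi host
    trunk A1-A2 trk1 trunk
    ; LACP trunk towards the NetApp controller
    trunk A3-A4 trk2 lacp
    ; carry the storage VLAN on both trunks
    vlan 100 tagged trk1,trk2

The static 'trunk' type is what pairs with 'Route based on IP hash' on a standard vSwitch; only the NetApp-facing ports run the LACP protocol.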
 
FilipZahradnik Commented:
Port-channels exist between the switch and each directly connected device, independently of one another. Therefore you can have a non-LACP port-channel on the ESXi side and an LACP port-channel on the NetApp side.
 
incesupport (Author) Commented:
Thank you FilipZahradnik, that's exactly what I wanted to hear.

Also, thank you to hanccocka; I take your point about HP switches being a little bit 'special', especially as I've been raised on Cisco.

I'll give this thread a few more days then dish out some points.
 
IanTh Commented:
I thought vDS just needed Enterprise Plus.
 
incesupport (Author) Commented:
You are correct; I worded my initial question badly. We don't have E+ licenses, so this isn't available to us.
 
FilipZahradnik Commented:
Just to expand my answer:

We use this setup (non-LACP port-channel on the ESX side, LACP on the NetApp side) all the time. When using NFS, 2 NICs and IP hash load balancing are all you need on the ESX side (the setup is a bit different for iSCSI). I can't help with the HP configuration as I'm not familiar with HP switches, but on Cisco we use 'channel-group ... mode on' on the ports connected to ESX and 'channel-group ... mode active' (LACP) on the ports connected to NetApp. On the NetApp, use ifgrp type lacp and load-balancing policy ip in the ifgrp command.
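A matching sketch for the NetApp and ESXi ends, assuming Data ONTAP 7-Mode, interfaces e0a/e0b, an ifgrp named ifgrp0, a vSwitch named vSwitch1, and an example IP address (all of these names are assumptions):

    # NetApp: LACP ifgrp with IP-based load balancing
    ifgrp create lacp ifgrp0 -b ip e0a e0b
    ifconfig ifgrp0 192.168.100.50 netmask 255.255.255.0 up
    # ESXi: IP hash load balancing on the NFS vSwitch
    esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash

Note that the ifgrp and ifconfig lines also need to go into /etc/rc on the filer to survive a reboot.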
 
incesupport (Author) Commented:
Excellent, the perfect answer. Your setup sounds remarkably similar to ours (with the exception of the networking side). Did you have any other challenges or issues?

Depending on what you read, flow control isn't necessary on modern switches, so it can be switched off.

Are you using jumbo frames?
 
FilipZahradnik Commented:
Re challenges:
One inherent issue is that NFS at this point does not do multipath I/O, so at any given time only one ESX NIC on each server will be used to connect to each NetApp controller.

Re Flow Control:
We leave flow control disabled in most circumstances.
TR-3749 NetApp Storage Best Practices for VMware vSphere (http://www.netapp.com/us/media/tr-3749.pdf, page 25) says:
"For modern network equipment, especially 10GbE equipment, NetApp recommends turning off flow control and allowing congestion management to be performed higher in the network stack."

Re Jumbo Frames:
We enable jumbo frames as a matter of policy, unless there are special circumstances. Make sure you test jumbo frames end to end by pinging the NFS interface on the NetApp from each ESX host using 'vmkping -d -s 8900 <netapp_nfs_ip>'.
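For reference, a minimal sketch of the ESXi-side jumbo frame settings that have to be in place before that test can pass (vSwitch1 and vmk1 are assumed names):

    # raise the MTU on the vSwitch and on the NFS VMkernel port
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000
    # then verify end to end with a don't-fragment ping
    vmkping -d -s 8900 <netapp_nfs_ip>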
 
incesupport (Author) Commented:
Challenges:
Would this be the same at the filer end too? Meaning only one NIC on the filer will be serving NFS? Or would we expect, as an example, VM hosts 1, 2 and 3 to use one NIC to filer NIC 1, and hosts 4, 5 and 6 to use the other filer NIC? To clarify: if the filer has 2 NICs serving NFS, will it be serving a total of 2Gb to the VM estate, or 1Gb?

Flow Control:
Perfect, exactly what I thought.  Would you say that this holds true for good 1Gb adapters connected to a decent switch too?

Jumbo Frames:
Excellent. I got caught out by this in the proof of concept. After swapping out every cable, trying different switch ports, etc., it suddenly hit me that I'd forgotten to set jumbo frames on the VMkernel - doh.
 
FilipZahradnik Commented:
Re challenges:
You are right: with the NetApp controller talking to multiple ESX hosts, IP hash load balancing should distribute those connections across the controller NICs. Also, each ESX host should use a different NIC for each NetApp controller, if you have more than one.
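Schematically (the exact hash differs between ESX and Data ONTAP, so treat this as illustrative only), an IP-based policy picks a link along the lines of:

    link = (source IP XOR destination IP) mod number_of_links

Using last octets for brevity: host 10.0.0.11 talking to filer address 10.0.0.50 lands on link (11 XOR 50) mod 2 = 1, while host 10.0.0.12 talking to the same address lands on (12 XOR 50) mod 2 = 0. With enough hosts, both controller NICs carry traffic, so the estate as a whole can approach 2Gb even though any single host-to-controller conversation is capped at 1Gb.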

Re flow control:
I routinely leave flow control disabled on 1Gb equipment as well. As long as the switches are recent, you should be fine.

Re jumbo frames:
I always follow this KB when enabling jumbo frames on ESX:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1038827
On the NetApp and on the switches, jumbo frames are pretty straightforward.
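For completeness, a sketch of those two ends as well (VLAN 100 and ifgrp0 are the same assumed names as above):

    ; ProCurve: jumbo frames are enabled per VLAN
    vlan 100 jumbo
    # NetApp 7-Mode: raise the MTU on the ifgrp (persist it in /etc/rc)
    ifconfig ifgrp0 mtusize 9000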
