Solved

Trunking of NetApp to ESXi hosts via Procurve switch

Posted on 2013-01-17
12
2,424 Views
Last Modified: 2013-01-21
I would like some help with sanity-checking my networking design for a new implementation of VMware.  This design incorporates ESXi 5.1, NetApp 3240 filers using NFS, and HP BL460c G7 blades using quad- and dual-port mezzanine cards to give a total of 8 NICs.

One vSwitch on VMware will be configured with 2 NICs dedicated solely to presenting NFS to the hosts.

From what I understand:
*             ESXi 5.1 cannot do LACP (unless you have an Enterprise Plus license and a distributed switch, which we don't).
*             The NetApp can trunk using LACP, Multimode VIFs or Single mode.

The switch we are using is a single 5406zl chassis with four 24-port modules.  I'm sure you've noticed that we're rather exposed in the event of a chassis failure, but this is a risk we are prepared to bear.  It does have the advantage of making trunking easier, though, as everything is going through one switch.

Now, my question is:
1.            Do we configure the NICs on the VM host side as a standard trunk (using 'route based on IP hash')?
2.            Should we configure these switch ports as a standard trunk?
3.            Do we configure the vif on the NetApp side as LACP or a multimode trunk (bearing in mind the filer can do LACP but the VM hosts can't)?

Question 3 is probably the important one here.
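
To make question 1 concrete, this is roughly the vSwitch I have in mind for the NFS traffic (the vSwitch and vmnic names are just placeholders):

     esxcli network vswitch standard add --vswitch-name=vSwitch1
     esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
     esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch1
     esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash

In other words, a standard vSwitch with the two NFS NICs as uplinks and 'route based on IP hash' as the teaming policy - assuming that is the right policy to pair with a switch-side trunk.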

Also,

Help appreciated.
Richard
Question by:incesupport
12 Comments
 
LVL 30

Assisted Solution

by:IanTh
IanTh earned 50 total points
ID: 38788032
 

Author Comment

by:incesupport
ID: 38788196
Yes, I saw this. It's unfortunate but we are not licensed for VDS.
If I were to summarise my question, it would be to ask whether the accepted solution is to use LACP on the NetApp end and a standard (non-LACP) trunk on the VMware end - and to configure the switch accordingly.
 
LVL 121

Assisted Solution

by:Andrew Hancock (VMware vExpert / EE MVE^2)
Andrew Hancock (VMware vExpert / EE MVE^2) earned 50 total points
ID: 38788514
I would recommend standard trunks and non-LACP, which is HP's own 'special' way of doing things!
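
From memory, on the ProCurve it's something along these lines - the port numbers are just examples and I may have the exact syntax slightly off, so check it against the 5406zl documentation:

     trunk A1-A2 trk1 trunk
     trunk A3-A4 trk2 lacp
     vlan 100 untagged trk1,trk2

That is, a static ('trunk') trunk group for the ESXi-facing ports, an LACP trunk group for the NetApp-facing ports, and both trunk groups untagged in your NFS VLAN.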
 
LVL 9

Expert Comment

by:FilipZahradnik
ID: 38789514
Port-channels exist between the switch and a connected device. Therefore you can have a non-LACP port-channel on the ESXi side and an LACP port-channel on the NetApp side.
 

Author Comment

by:incesupport
ID: 38792868
Thank you FilipZahradnik, that's exactly what I wanted to hear.

Also, thank you to hanccocka; I take your point about HP switches being a little bit 'special', especially as I've been raised on Cisco.

I'll give this thread a few more days then dish out some points.
 
LVL 30

Expert Comment

by:IanTh
ID: 38792910
I thought vDS just needed Enterprise Plus.
 

Author Comment

by:incesupport
ID: 38792982
You are correct; I worded my initial question badly.  We don't have E+ licenses, so this isn't available to us.
 
LVL 9

Expert Comment

by:FilipZahradnik
ID: 38795790
Just to expand on my answer:

We use this setup (non-LACP port-channel on the ESX side, LACP on the NetApp side) all the time. When using NFS, 2 NICs and IP hash load balancing are all you need on the ESX side (the setup is a bit different for iSCSI). I can't help with the HP configuration as I'm not familiar with HP switches, but on Cisco we set the EtherChannel mode to 'on' for the ports connected to ESX and to LACP ('active') for the ports connected to NetApp. On the NetApp, use an ifgrp of type lacp with load-balancing policy ip in the ifgrp create command.
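
In rough terms the commands look like this - interface names and channel-group numbers are just examples, so adjust for your environment. On the Cisco side:

     ! ports to the ESX host - static port-channel
     interface range Gi1/0/1 - 2
      channel-group 1 mode on
     ! ports to the NetApp controller - LACP
     interface range Gi1/0/3 - 4
      channel-group 2 mode active
     ! hash on source and destination IP
     port-channel load-balance src-dst-ip

And on the NetApp (7-Mode syntax; also add the line to /etc/rc so it persists across reboots):

     ifgrp create lacp ifgrp0 -b ip e0a e0b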
 

Author Comment

by:incesupport
ID: 38797331
Excellent, the perfect answer.  Your setup sounds remarkably similar to ours (with the exception of the networking side).  Did you have any other challenges or issues?

Depending on what you read, flow control isn't necessary on modern switches, so it can be switched off.

Are you using jumbo frames?
 
LVL 9

Accepted Solution

by:FilipZahradnik
FilipZahradnik earned 400 total points
ID: 38797581
Re challenges:
One inherent issue is that NFS at this point does not do multipath I/O. So at any given time, only one ESX NIC from each server will be used to connect to each NetApp controller.

Re Flow Control:
We leave flow control disabled in most circumstances.
TR-3749 NetApp Storage Best Practices for VMware vSphere (http://www.netapp.com/us/media/tr-3749.pdf, page 25) says:
"For modern network equipment, especially 10GbE equipment, NetApp recommends turning off flow control and allowing congestion management to be performed higher in the network stack."

Re Jumbo Frames:
We enable jumbo frames as a matter of policy, unless there are special circumstances. Make sure you test jumbo frames end to end by pinging the NFS interface on NetApp from each ESX host using 'vmkping -d -s 8900 <netapp_nfs_ip>'.
 

Author Comment

by:incesupport
ID: 38798004
Challenges:
Would this be the same at the filer end too?  Meaning, will only one NIC on the filer be serving NFS?  Or would we expect, as an example, VM hosts 1, 2 and 3 to use one NIC to filer NIC 1 and hosts 4, 5 and 6 to use the other filer NIC?  To clarify: if the filer has 2 NICs serving NFS, will it be serving a total of 2Gb to the VM estate, or 1Gb?

Flow Control:
Perfect, exactly what I thought.  Would you say that this holds true for good 1Gb adapters connected to a decent switch too?

Jumbo Frames:
Excellent.  Got caught out by this in the proof of concept.  After swapping out every cable, trying different switch ports etc., it suddenly hit me that I'd forgotten to set jumbo frames on the VMkernel port - doh.
 
LVL 9

Expert Comment

by:FilipZahradnik
ID: 38802777
Re challenges:
You are right: with the NetApp controller talking to multiple ESX hosts, IP hash load balancing should distribute those connections across the controller NICs. Also, each ESX host should use a different NIC for each NetApp controller, if you have more than one.
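
As a rough illustration: with 'route based on IP hash' the uplink is chosen from something like an XOR of the last octet of the source and destination IPs, modulo the number of uplinks. So, using made-up addresses, host 10.0.0.11 talking to a filer interface at 10.0.0.21 hashes to one link (11 XOR 21 = 30, 30 mod 2 = 0), while host 10.0.0.12 talking to the same filer hashes to the other (12 XOR 21 = 25, 25 mod 2 = 1). The NetApp ifgrp's 'ip' load-balancing policy works on a similar hash of the IP pair, so with several hosts mounting the export the filer should be able to transmit on both of its NICs at once - but any single host-to-controller conversation is still capped at 1Gb.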

Re flow control:
I routinely leave flow control disabled on 1Gb equipment as well. As long as the switches are recent, you should be fine.

Re jumbo frames:
I always follow this KB when enabling jumbo frames on ESX:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1038827
On NetApp and on the switches, jumbo frames are pretty straightforward.
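
For what it's worth, on ESXi 5.x with a standard vSwitch there are two places to set the MTU - the vSwitch itself and the VMkernel port - and the commands are roughly as follows (the vSwitch and vmk names are examples):

     esxcli network vswitch standard set --mtu=9000 --vswitch-name=vSwitch1
     esxcli network ip interface set --mtu=9000 --interface-name=vmk1

Then run the vmkping test above to prove it end to end.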