• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1486

NFS with MTU 9000 not working with EMC Celerra, vSphere 5, and a Cisco SAN switch

I have set up a vSphere 5 environment with HP DL580 G7 servers and an EMC Celerra SAN (NFS). It all works fine except that when I change the MTU size on my NFS port group I lose the connection to my NFS volumes; with the default of 1500 it works fine. When I run show system mtu on the Cisco switch I can see that MTU 9000 is supported, and I have a direct connection to this SAN switch.

When I try to ping from any of my new ESXi 5 hosts I can only ping with a payload of 1500-28=1472 bytes, but not 9000-28=8972. This is holding up my project and I can't migrate VMs from my current vSphere 4.1 environment. I have installed the HP OEM version of ESXi 5, so I don't think I have driver issues. Please advise.
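For reference, the don't-fragment ping test described above can be run from the ESXi shell; a minimal sketch, assuming 192.168.10.50 is the Celerra Data Mover's NFS IP (substitute your own address):

    # 1472-byte payload + 28 bytes of IP/ICMP headers = a full 1500-byte packet; should succeed
    vmkping -d -s 1472 192.168.10.50
    # 8972-byte payload + 28 bytes = a full 9000-byte jumbo packet; fails if any hop is not jumbo-enabled
    vmkping -d -s 8972 192.168.10.50

The -d flag sets the don't-fragment bit, so the large ping succeeds only if every device in the path accepts the full size.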
Asked by: sysprof
3 Solutions
 
sysprof (Author) Commented:
MTU is globally set on the switch. Are you saying we need to set it on individual ports? My live ESX 4.1 hosts also connect to the same switch, and I didn't find any port-level settings for NFS.
 
Busbar Commented:
OK, I thought it was a different switch.
Have you configured the MTU on the vSwitch and the port groups?
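For reference, the current values can be verified from the ESXi 5 shell before changing anything; a minimal sketch (vSwitch and interface names vary per host):

    # list standard vSwitches, including their configured MTU
    esxcli network vswitch standard list
    # list vmkernel interfaces (vmk0, vmk1, ...) with their MTU
    esxcli network ip interface list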

 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
 
sysprof (Author) Commented:
I have changed the MTU on my NFS vmkernel port to 9000. Please find the attached screenshots.
networking.JPG
NFS.JPG
 
sysprof (Author) Commented:
hanccocka, I have raised this question again with more networking-specific zones and included the Cisco group. I understand it is good practice at EE to ask more specific questions; please do let me know if this is not preferred.
 
Busbar Commented:
And on the switch?!
 
sysprof (Author) Commented:
On the Cisco switch we have MTU 9000 enabled. My current 4.1 hosts connect to the same SAN switch and they support 9000. There is, however, another NAS switch in the EMC environment that I don't know much about.
 
Busbar Commented:
Sorry, I meant the vSwitch, but the port group will override the vSwitch.
And you are connecting over the SAN switch, not the NAS one?! If so, is the SAN switch configured to carry IP (NAS) traffic? A SAN switch normally connects to HBAs, which don't carry IP.
 
sysprof (Author) Commented:
I have not made any changes, like override or failover settings, on the NICs that are part of the vSwitch that in turn has NFS as a vmkernel port. In fact I have not made changes on any of the NICs that are part of any vSwitch. Does that make any difference?
 
sysprof (Author) Commented:
I'm connecting to the SAN switch, not the NAS one. I don't understand what you mean by "the PG will override the vSwitch". There is no port group for NFS; we can only configure a vmkernel port?
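A terminology note that may help here: on a standard vSwitch, MTU is a property of the vSwitch itself and of each vmkernel interface; standard port groups have no MTU setting of their own. The legacy commands show the same picture; a sketch:

    # list vSwitches and port groups, including the vSwitch MTU
    esxcfg-vswitch -l
    # list vmkernel NICs with their individual MTU
    esxcfg-vmknic -l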
 
Busbar Commented:
I am confused.
The SAN switch connects the ESXi hosts' HBAs to the SAN, and an HBA can't be configured for IP, so you can't connect the ESXi host to the NAS through the SAN switch unless you have also connected your NICs to the SAN switch.

My bet is that your NICs are not connected to the SAN switch, and if they are connected to it, your NAS is not, as it is connected to your NAS switch. Am I right?!
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
The preferred method is to request assistance on the question at hand, and the mods will add new zones, rather than cluttering up EE with duplicate questions and wasting Experts' time.
 
sysprof (Author) Commented:
Busbar, this is really getting confused. To simplify: my current ESX 4.1 hosts and my new ESXi 5 host all connect to the same SAN switch, and that SAN switch is connected to the NAS switch. If my current 4.1 hosts support MTU 9000, then my new 5.0 host should also support MTU 9000. If it isn't working, the problem could be at the NIC level on the 5.0 host, in the networking settings on the 5.0 host, or perhaps something needs to be configured on the NAS switch? Given all these details, what would you say is the trouble area? Does that make sense?
 
Busbar Commented:
OK. I don't see why the NAS switch is not connected to the ESXi hosts directly. However, you are right: if jumbo frames work in this setup from ESXi 4.1, then something is wrong with the servers, and I would say it is 90% the NICs, but it's hard to tell from my side.

Can you try connecting the new ESXi servers to the NAS switch directly?!
 
sysprof (Author) Commented:
I don't think it will be possible to connect directly; we don't have an in-house expert on EMC. Since I'm using the HP OEM version of ESXi 5, I would rule out any driver-related issues. I have shown you the settings for NFS as well. I can also ping the SAN switch with the MTU 1500 option but not 9000. I can't think of anything else?
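Rather than ruling out drivers on the strength of the OEM image alone, the actual driver and firmware versions can be read from the host and checked against the VMware HCL; a minimal sketch (vmnic0 is just an example name):

    # list all physical NICs with driver, speed and link state
    esxcli network nic list
    # show the driver name, driver version and firmware version for one NIC
    esxcli network nic get -n vmnic0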
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
Updated the firmware on the servers yet?
 
sysprof (Author) Commented:
Just tested again after the firmware upgrade on the DL580 G7; still no joy, I'm afraid.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
Time to escalate to VMware Support.

If you enable the entire vSwitch for MTU 9000, does it work?

Did you use vmkping to confirm the packet size from the host to the NAS?
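For what it's worth, the vSwitch-wide change suggested above can be made from the shell; a minimal sketch, assuming the NFS vmkernel port is vmk1 on vSwitch1 (substitute your own names):

    # raise the MTU of the entire standard vSwitch to 9000
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    # raise the MTU of the NFS vmkernel interface to match
    esxcli network ip interface set -i vmk1 -m 9000

Both need to match: a vmkernel interface at 9000 behind a vSwitch still at 1500 will not pass jumbo frames.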
 
sysprof (Author) Commented:
The MTU is enabled globally on the SAN switch; unless some Cisco guru suggests differently, I believe there is no switch-port-level config. I have run vmkping with MTU 9000 between the 5.0 host and the SAN switch, but not to the NAS switch. I will try that later.
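On the Cisco side, assuming Catalyst-class switches (suggested by the show system mtu output mentioned in the question), the jumbo setting is indeed global rather than per-port, and it only takes effect after a reload; a minimal sketch:

    ! verify the current system MTU settings
    show system mtu
    ! enable jumbo frames globally (Catalyst 2960/3750 style); requires a reload
    configure terminal
    system mtu jumbo 9000
    end
    reload

On other platforms (e.g. Nexus, or routed interfaces) the MTU is set per interface instead, so it is worth confirming exactly which models the SAN and NAS switches are.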
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
So ALL your VMs run with jumbo frames as well?

We normally have a storage network which is enabled for jumbo frames only and is completely isolated from the public LAN.
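For illustration, such an isolated storage network can be built as a dedicated vSwitch; a minimal sketch under assumed names (vSwitch2, a StorageNFS port group, vmk2, and an example IP):

    # create a dedicated vSwitch for storage and enable jumbo frames on it
    esxcli network vswitch standard add -v vSwitch2
    esxcli network vswitch standard set -v vSwitch2 -m 9000
    # add a port group and a vmkernel interface for NFS traffic
    esxcli network vswitch standard portgroup add -p StorageNFS -v vSwitch2
    esxcli network ip interface add -i vmk2 -p StorageNFS
    esxcli network ip interface set -i vmk2 -m 9000
    esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.10.11 -N 255.255.255.0

Dedicated storage uplinks would then be attached to vSwitch2, keeping jumbo-frame traffic off the public LAN.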
 
sysprof (Author) Commented:
Yes, all live vSphere 4.1 hosts have 9000 configured for NFS, so all VMs connect to NFS storage with jumbo frames on.
 
sysprof (Author) Commented:
My problem is that I want to match my live 4.1 environment on the new 5.0 one, so I don't run into any performance issues once I have moved the VMs across.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
BUT does your VM networking use jumbo frames? I understand the storage uses jumbo frames, but the VMs do not have to use jumbo frames as well.

E.g., do your client PCs also have jumbo frames enabled?
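For context, jumbo frames inside a VM are a guest OS setting, separate from the vmkernel storage path; a sketch of checking and raising it on a Linux guest (eth0 is an example interface name):

    # show the current MTU of the guest NIC
    ip link show eth0
    # raise it to 9000 (only useful if the whole path is jumbo-enabled end to end)
    ip link set dev eth0 mtu 9000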
 
sysprof (Author) Commented:
I don't know of any VM-level setting for jumbo frames. I want my NFS port group to support jumbo frames.
 
Andrew Hancock (VMware vExpert / EE MVE^2), VMware and Virtualization Consultant, Commented:
Just check that your VM NIC traffic is not enabled for jumbo frames on the vSwitch, i.e. the vSwitch's vmkernel port group is the only thing at MTU 9000.

What figure are you using, 9000 or 9216?
 
sysprof (Author) Commented:
I'm using 9000-28=8972. The VM NIC is not enabled for jumbo frames. As per my earlier screenshots, I have only set the NFS port group value to 9000.
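A quick sanity check on those numbers: the 28 bytes subtracted before pinging are the IPv4 header (20 bytes) plus the ICMP header (8 bytes), so the don't-fragment payloads are:

    1500 - 20 - 8 = 1472   (standard frames)
    9000 - 20 - 8 = 8972   (jumbo frames)

Switch-side figures such as 9216 simply leave headroom above the 9000-byte IP MTU for Ethernet and encapsulation overhead, so 9216 on the switch alongside 9000 on the host is consistent.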
