lhrslsshahi asked:
Linux NFS share not mounting from vSphere 4.1 host
When trying to mount an NFS share from a Linux server on the vSphere 4.1 host, I get the errors below.
NFS Errors 149, 946, 160
The same NFS shares are accessible from another host.
I can ping the NFS server from the host. Netcat reaches the NFS server from the host.
The NFS server can also ping the VMware host and resolve DNS.
Dec 22 10:07:19 vmkernel: 1143:21:50:57.036 cpu2:4310573)NFS: 149: Command: (mount) Server: (192.168.20.10) IP: (192.168.30.10) Path: (/finkdr) Label: (backup) Options: (None)
Dec 22 10:07:49 vmkernel: 1143:21:51:27.089 cpu3:4310573)WARNING: NFS: 946: MOUNT RPC failed with RPC status 13 (RPC was aborted due to timeout) trying to mount Server (192.168.20.10) Path (/finkdr)
Dec 22 10:07:49 vobd: Dec 22 10:07:49.956: 98833888868410us: [esx.problem.vmfs.nfs.mount.connect.failed] Failed to mount to server 192.168.20.10 mount point /finkdr. Error: Unable to connect to NFS server.
Dec 22 10:07:49 vmkernel: 1143:21:51:27.089 cpu3:4310573)NFS: 160: NFS mount 192.168.30.10:/finkdr failed: Unable to connect to NFS server
can you vmkping the Linux NFS server/workstation?
do you also have a VMKernel defined, which is used to carry the traffic?
also see these tips
VMware KB: Unable to mount NFS datastore
VMware KB: Adding an NFS datastore to an ESX/ESXi host fails with error
VMware KB: Cannot connect to NFS network share
ASKER
I have taken the entries from /etc/resolv.conf on the host and the NFS server
and placed them in /etc/hosts; still no joy.
I can vmkping the NFS server by IP and by name successfully.
The traffic passes over the management interface (vmk0), the same as on the other, working host.
It was working last week, until I had to rebuild the VM. No changes have been made on the host, and now the share is only accessible from one host!
As a test I created a Windows NFS share; same problem, it can only be accessed from the other host.
SOLUTION
ASKER
Andy,
Thanks for these videos. I went through the second video, where you created a separate
VMkernel interface; I tried that and it didn't make any difference. We have about 15 hosts that have no dedicated VMkernel interface, and we have had no issues with Windows or Linux NFS shares before. It's always been under the Management VMkernel interface.
About to hit the wall! :-)
That's fine, as long as your VMkernel interface is on the same network as your NFS NAS.
If you can vmkping the NAS, that indicates traffic can reach the NFS share.
Are Jumbo Frames not enabled somewhere they should be (or vice versa)?
You stated it worked until you re-built the VM - this VM is the NFS server?
So what is the difference between this single non-working host and all the others?
Is IP address authentication (if you are using it) enabled?
Is the same share name being used?
Can you upload screenshots of networking on working and non-working hosts?
Just looking at these IP addresses:
192.168.20.10 - this is the server?
192.168.30.10 - this is the NAS (NFS)?
Those are different networks: is the routing correct, and are the subnet masks correct?
ASKER
The VM 192.168.30.10 is the NFS server (192.168.20.10 is a typo)
The problem host is 192.168.31.2 and the working host 192.168.30.5
The MTU is set to 1500 for switches on both hosts (Jumbo frames not enabled)
Same share name as before.
I have attached screenshots.
problem-host.PNG
workinghost.PNG
Keep in mind that no_root_squash needs to be enabled on the export, so that no root-squashing occurs during the connection.
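For reference, an export line with root squashing disabled might look like the following sketch of /etc/exports on the Linux NFS server. The client subnet (192.168.30.0/24) is an assumption, not taken from the thread; only the path /finkdr comes from the logs above.

```shell
# Sketch of /etc/exports for the share in this thread.
# no_root_squash lets the ESX host's root user access the export;
# the 192.168.30.0/24 client range is an assumed value.
/finkdr 192.168.30.0/24(rw,sync,no_root_squash)

# After editing /etc/exports, re-read the exports table:
exportfs -ra
```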
working host is:-
192.168.30.5
non-working is
192.168.31.2
Linux NFS is 192.168.30.10
so if you put your host as 192.168.30.11 does it work?
why the IP Address change to 192.168.31.2 ?
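Andy's point about the addresses can be checked mechanically. A minimal shell sketch, assuming both hosts use a 255.255.255.0 (/24) mask, which the thread implies but never states:

```shell
# Succeeds when two IPv4 addresses share the same /24 network
# (assumes a 255.255.255.0 mask on both sides).
same_subnet24() {
  [ "${1%.*}" = "${2%.*}" ]   # compare the first three octets
}

same_subnet24 192.168.30.5 192.168.30.10 && echo "working host: same subnet as NFS"
same_subnet24 192.168.31.2 192.168.30.10 || echo "problem host: different subnet - needs routing"
```

If there is no working route between the two /24s (or no VMkernel interface on the NFS server's subnet), mount traffic from 192.168.31.2 never reaches 192.168.30.10, which would match the RPC timeout in the logs.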
ASKER
Never had to use no_root_squash before; I have tried it and it didn't work.
Unfortunately I can't change the host's IP, as it's in production.
The VMs that reside on 192.168.31.2 are on a different domain. It's been working for 4 years; I'm not entirely sure why it stopped working, other than me rebuilding the NFS server!
ASKER CERTIFIED SOLUTION
ASKER
OMG!! That worked: adding a VMkernel interface as 192.168.30.11.
Thanks Andy, life saver; not sure how it was working before!?
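For anyone hitting the same wall, the accepted fix can be sketched with the classic ESX/ESXi 4.x CLI. The vSwitch and port-group names below are assumptions (they are not given in the thread); only the IP 192.168.30.11 and the /24 mask come from the discussion above.

```shell
# Create a port group for NFS traffic (the names "vSwitch0" and "NFS-vmk"
# are assumed, not from the thread), then add a VMkernel interface on the
# NFS server's subnet.
esxcfg-vswitch -A "NFS-vmk" vSwitch0
esxcfg-vmknic -a -i 192.168.30.11 -n 255.255.255.0 "NFS-vmk"

# Verify the new vmk interface is listed:
esxcfg-vmknic -l
```

These commands must be run on the ESX host itself (or via the remote vSphere CLI equivalents).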
Maybe you had another VMKernel Interface set?
Traffic was probably not being routed correctly from 192.168.30.x to 192.168.31.x - an incorrect subnet mask or something similar.
ASKER
I honestly don't remember having a VMKernel Interface. Oh well.. Just want to thank you for your help. Have a good Christmas and new year.
ASKER
Andy is always on the money
no problems, Merry Chrimbo.
Have a quick read here for possible solutions.