NFS mount timeout
Mark asked:
I'm having trouble mounting a remote NFS mount point. The mount point is exported:
>showmount -e localhost
Export list for localhost:
/tmp/trash 24.96.yyy.xxx
and on the client host at 24.96.yyy.xxx I can telnet to ports 111 and 2049, so the firewall is open. The client fstab has:
lincoln:/tmp/trash /tmp/trash nfs rw 0 0
When I try to mount:
1 14:52:21 root@server:~
> mount /tmp/trash
mount.nfs: Connection timed out
1 14:54:33 root@server:~
As you can see, the timeout takes about 70 seconds. Any idea?
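Before waiting out the full mount timeout, it can help to confirm the NFS ports answer quickly. A sketch, using a small helper function (`port_open` is a hypothetical name, not a standard command; `lincoln` is the server from this thread) built on bash's `/dev/tcp` device:

```shell
# Quick reachability check that fails fast instead of waiting out the
# ~70-second NFS mount timeout. Uses bash's built-in /dev/tcp device.
port_open() {
  local host=$1 port=$2
  timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Confirm the portmapper and nfsd ports answer before trying the mount:
port_open lincoln 111  && echo "portmapper reachable" || echo "portmapper NOT reachable"
port_open lincoln 2049 && echo "nfsd reachable"       || echo "nfsd NOT reachable"
```

Note that a successful TCP connect only shows the port is open; it does not prove the NFS service behind it will accept the mount request.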
SOLUTION
Telnet and NFS behave differently. With telnet, you are the slowest link: you hit a key and wait. NFS, when the timeout for a response is reached, either starts retransmitting or, if the mount is set to hard, interprets the timeout as a loss of .....
Try the following. Presumably you do not have your local system set up as an NFS server.
ssh -L 111:localhost:111 -L 2049:localhost:2049 user@remote
You would need to make sure the ssh connection does not time out.....
Your exports file needs to include NFS access from localhost, since that is what the source of the connection will be:
mount -t nfs localhost:/temp/trash /mnt
See if that works better....
Rsync is a transfer that will retry. ...
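Pulled together, the tunnel approach sketched above might look like the following. This is a command fragment, not a tested script: it assumes the remote host is lincoln as in this thread, that lincoln's /etc/exports allows localhost, and it must run as root locally because ports 111 and 2049 are privileged.

```shell
# Forward the portmapper (111) and NFS (2049) ports to lincoln over ssh;
# -f -N backgrounds the tunnel without running a remote command.
ssh -f -N -L 111:localhost:111 -L 2049:localhost:2049 mfoley@lincoln

# Mount through the tunnel. Forcing NFSv4 keeps everything on port 2049
# (NFSv3 would also need the mountd port, which is not forwarded here).
mount -t nfs -o vers=4 localhost:/tmp/trash /mnt
```

Whether vers=4 is usable depends on the server supporting NFSv4; with v3 you would additionally need to pin and forward mountd's port.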
ASKER
showmount -e targethost doesn't work either. Also times out.
ASKER
Just to be clear: the local host is "server", the one trying to mount the NFS directory. The remote host is "lincoln", the one hosting the NFS mount point.
Presumably you do not have your local system set up as an NFS server.
ssh -L 111:localhost:111 -L 2049:localhost:2049 user@remote
You would need to make sure the ssh connection does not time out.....
So, from "server", as root I tried:
> ssh -L 111:localhost:111 -L 2049:localhost:2049 mfoley@lincoln
mfoley@lincoln's password:
bind: Address already in use
bind: Address already in use
Last login: Sat Oct 1 01:14:55 2016 from 24.96.xxx.yyy
Linux 3.10.17-smp.
It did ask me for a password. It also logged me into the mfoley account on lincoln. Does this tell you anything? Not sure what you mean. Can you give an example?
Your exports file needs to include NFS access from localhost, since that is what the source of the connection will be:
I assume you want me to do this from the hosting server, "lincoln". Interesting result:
mount -t nfs localhost:/temp/trash /mnt
1 01:41:35 root@lincoln:~
>mount -t nfs localhost:/temp/trash /mnt
mount.nfs: access denied by server while mounting localhost:/temp/trash
Perhaps because the export specifies the IP? I'll remove that and try again ...... Nope, even with /etc/exports having "/tmp/trash *(rw,no_root_squash)" I still get the access denied. Perhaps this is a clue!?
RATS! My bad. I copy/pasted your mount string and just noticed that you wrote "temp" instead of "tmp". When I corrected the spelling, it mounted just fine. :(
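For the record, the exports change discussed above could look like this. A sketch: the wildcard line mirrors what the asker tried, the localhost entry is only needed for the tunnel-based test, and `exportfs -ra` re-reads /etc/exports without restarting the NFS server.

```shell
# /etc/exports on lincoln -- the client IP plus localhost for tunnel tests:
#   /tmp/trash 24.96.yyy.xxx(rw,no_root_squash) localhost(rw,no_root_squash)

# Apply the change without restarting nfsd, then verify:
exportfs -ra
showmount -e localhost
```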
Rather, the errors you see are because ports 111 and 2049 on your local system are already in use:
lsof -i:111
lsof -i:2049
The mount needs to run on the system from where you usually access it.
SystemA is the remote host, with Lincoln the one running as the NFS server.
So you ssh from SystemA to Lincoln with the local tunnels.
On SystemA, you would try to mount the NFS share via the ssh tunnel.
Presumably, your rsync runs over an ssh connection/tunnel.
Do you have an option to set up a VPN connection: IPSec, PPTP, OpenVPN?
With a VPN, you would be defining access based on the private IPs.
mount -t nfs Lincoln:/tmp/trash /mnt
where Lincoln is the private IP of the system based on the VPN definition.
Your exports file would reference the LAN IP of SystemA......
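The "Address already in use" check suggested above can be run like this (a diagnostic fragment; output depends on the system, and `ss` from iproute2 is an alternative if lsof is not installed):

```shell
# Which local processes already hold the portmapper and NFS ports?
# A local rpcbind/nfsd would explain "bind: Address already in use".
lsof -i :111
lsof -i :2049

# Equivalent check with ss (iproute2), listing listeners on either port:
ss -tlnp | grep -E ':(111|2049)[[:space:]]'
```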
ASKER CERTIFIED SOLUTION
ASKER
I found the answer. Points to Arnold for playing!
Check /etc/exports to see your export rules and make sure your IP matches the rule.
Why not set up a VPN? Or use ssh tunnels over which you can mount the NFS share.
What is your ping/latency to the destination? Within fstab, or if it is mounted on demand, you can adjust (increase) the timeout value.
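The timeout tuning mentioned above goes in the client's fstab options. A sketch with illustrative values (timeo is in tenths of a second, see nfs(5); whether soft or hard is appropriate depends on how the mount is used):

```shell
# /etc/fstab on the client: retry longer before mount.nfs gives up.
# timeo=150 -> 15-second per-request timeout; retrans=5 retransmissions.
#   lincoln:/tmp/trash /tmp/trash nfs rw,soft,timeo=150,retrans=5 0 0
```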