Mark asked:

NFS mount timeout

I'm having trouble mounting a remote NFS mountpoint. The mountpoint is exported:

>showmount -e localhost
Export list for localhost:
/tmp/trash 24.96.yyy.xxx

and on the client host at 24.96.yyy.xxx I can telnet to ports 111 and 2049, so the firewall is open. The client fstab has:

lincoln:/tmp/trash /tmp/trash  nfs rw      0   0

When I try to mount:
1 14:52:21 root@server:~
> mount /tmp/trash
mount.nfs: Connection timed out
1 14:54:33 root@server:~


As you can see, this timeout takes about 70 seconds.

Any idea?
arnold:

NFS over Internet


Check /etc/exports to see your export rules and make sure your IP matches the rule.
Why not set up a VPN? Or use ssh tunnels over which you can mount the NFS share.

What is your ping/latency to the destination? Within fstab, or if it is mounted on demand, you can adjust (increase) the timeout value.
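On a high-latency link, the timeout tuning mentioned above can be done per mount in fstab. A sketch with illustrative values (not from the thread): timeo is in tenths of a second per retry, retrans is the retry count, and soft makes the mount return an error instead of hanging once retries are exhausted.

```
# client /etc/fstab — illustrative timeout tuning for a WAN NFS mount
lincoln:/tmp/trash  /tmp/trash  nfs  rw,soft,timeo=100,retrans=3,bg  0  0
```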

SOLUTION

arnold:
Different behavior: with telnet, you are the slowest link; if you hit a key, it just waits. NFS, when its timeout for a response is reached, either starts retransmitting or, if the mount is hard, interprets the timeout as a loss of .....

Try the following. Presumably you do not have your local system set up as an NFS server.
ssh -L 111:localhost:111 -L 2049:localhost:2049 user@remote
You would need to make sure the ssh connection does not time out.....
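End to end, the tunnel approach above would look roughly like this, run on the client (the -fN flags and the mount options are assumptions for illustration; note that NFSv3 also needs the separate mountd port forwarded, while NFSv4 needs only 2049):

```
# background the tunnels to the NFS host (no remote command, just forwarding)
ssh -fN -L 111:localhost:111 -L 2049:localhost:2049 user@remote

# mount "localhost", which now reaches the remote NFS server through the tunnel
mount -t nfs -o nfsvers=4,port=2049 localhost:/tmp/trash /tmp/trash
```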

Your exports file needs to include NFS access from localhost since that is what the source of the connection will be
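As a sketch of what such an exports line could look like on the NFS host (options here are illustrative; the existing 24.96 entry from the question is kept alongside, with rw assumed):

```
# /etc/exports — tunneled mounts arrive from 127.0.0.1, so allow it explicitly
/tmp/trash  127.0.0.1(rw,no_root_squash)  24.96.yyy.xxx(rw)
```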

mount -t NFS localhost:/temp/trash  /mnt

See if that functions better....

Rsync is a transfer that will retry. ...
Mark (ASKER):

showmount -e targethost doesn't work either. Also times out.
Mark (ASKER):

arnold:
Try the following. Presumably you do not have your local system set up as an NFS server.
ssh -L 111:localhost:111 -L 2049:localhost:2049 user@remote
You would need to make sure the ssh connection does not time out.....
Just to be clear, the local host name is "server"; that is the one trying to mount the NFS directory. The remote host is "lincoln"; that is the one hosting the NFS mountpoint. So, from "server", as root, I tried:
> ssh -L 111:localhost:111 -L 2049:localhost:2049 mfoley@lincoln
mfoley@lincoln's password:
bind: Address already in use
bind: Address already in use
Last login: Sat Oct  1 01:14:55 2016 from 24.96.xxx.yyy
Linux 3.10.17-smp.


It did ask me for a password. It also logged me into the mfoley account on lincoln. Does this tell you anything?

Your exports file needs to include NFS access from localhost since that is what the source of the connection will be
Not sure what you mean. Can you give example?

mount -t NFS localhost:/temp/trash  /mnt
I assume you want me to do this from the hosting server, "lincoln". Interesting result:
1 01:41:35 root@lincoln:~
>mount -t nfs localhost:/temp/trash  /mnt
mount.nfs: access denied by server while mounting localhost:/temp/trash


Perhaps because the export specifies the IP? I'll remove that and try again ...

... Nope, even with /etc/exports having "/tmp/trash      *(rw,no_root_squash)" I still get the access denied. Perhaps this is a clue!?

RATS! My bad. I copy/pasted your mount string and just noticed that you wrote "temp" instead of "tmp". When I corrected the spelling, it mounted just fine. :(
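One piece of standard NFS housekeeping worth noting when editing /etc/exports, as was done above: the export table must be re-read before changes take effect (commands below are stock nfs-utils, not something stated in the thread):

```
exportfs -ra            # re-read /etc/exports and apply the changes
showmount -e localhost  # confirm what is actually being exported
```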
arnold:

Rather, the errors you see are because ports 111 and 2049 on your local system are already in use:
lsof -i:111
lsof -i:2049
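Where lsof is not installed, the same two ports can be probed with a plain bash sketch (an illustration, not from the thread; /dev/tcp is a bash feature, and a successful open means something is already listening on that port):

```shell
# Probe local ports 111 and 2049; bash's /dev/tcp pseudo-path connects
# only if a listener is present on that port.
for p in 111 2049; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
    echo "port $p in use"
  else
    echo "port $p free"
  fi
done
```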

The mount needs to run on the system from where you usually access it.

SystemA is the remote, with Lincoln the one running as the NFS server.
So you ssh from SystemA to Lincoln with the local tunnels.
On SystemA, you would then try to mount the NFS share via the ssh tunnel.

Presumably, your rsync runs over an ssh connection/tunnel.
Do you have an option to set up a VPN connection (IPSec, PPTP, OpenVPN)?
With a VPN, you would be defining access based on the private IPs.

mount -t nfs Lincoln:/tmp/trash /mnt
Where Lincoln is the private IP of the system based on the VPN definition.
Your exports file would reference the LAN IP of SystemA......
ASKER CERTIFIED SOLUTION

Mark (ASKER):

I found the answer. Points to Arnold for playing!