coanda asked:
RPC/portmap problems with NFS client
srv1: NFS server (working) and client (not working)
srv2: NFS server (not working) and client (working, connected to srv1)
srv1 is sharing out my home directories to various Linux machines via NFS; I have Debian, Ubuntu, SUSE, and Fedora machines all connected to it with no problem. I want srv2 to share a set of directories with the same machines, and I've set up NFS on it in much the same way, apart from the exports. The hosts.allow, hosts.deny, and hosts files are all very similar on the two servers.
If I "rpcinfo -p localhost" on srv2 I get:
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100021 1 udp 32769 nlockmgr
100021 3 udp 32769 nlockmgr
100021 4 udp 32769 nlockmgr
100021 1 tcp 48227 nlockmgr
100021 3 tcp 48227 nlockmgr
100021 4 tcp 48227 nlockmgr
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100005 1 udp 618 mountd
100005 1 tcp 621 mountd
100005 2 udp 618 mountd
100005 2 tcp 621 mountd
100005 3 udp 618 mountd
100005 3 tcp 621 mountd
100024 1 udp 32784 status
100024 1 tcp 35334 status
But if I "rpcinfo -p srv2" from srv1 I get:
No remote programs registered.
"showmount -e srv2" from any client gives a similar error.
I've disabled IPv6 and flushed all of my firewall rules with:
iptables -F
iptables-save
One minor difference, though I don't know whether it matters: if I run "ps aux | grep portmap" on the working server, /sbin/portmap is running as user rpc, while on the one that doesn't work it runs as user daemon. Should that make a difference?
I've quadruple checked my files and I'm pretty much lost on what else to do.
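A side note on the firewall-flushing step above: "iptables -F" only flushes the rules; if a chain's default policy is DROP, traffic is still blocked, and "iptables-save" merely prints the current ruleset to stdout rather than applying or persisting anything. A fuller reset might look like this sketch (it needs root on the server, so it is guarded to degrade gracefully):

```shell
# Open the default policies first, then flush - iptables -F alone
# leaves a DROP policy in place. Guarded so the sketch degrades
# gracefully when run without root or without iptables installed.
if command -v iptables >/dev/null 2>&1 && [ "$(id -u)" = "0" ]; then
    for chain in INPUT FORWARD OUTPUT; do
        iptables -P "$chain" ACCEPT 2>/dev/null  # default policy: accept
    done
    iptables -F 2>/dev/null          # flush rules in the filter table
    iptables -t nat -F 2>/dev/null   # and in the nat table
    result="firewall reset to open"
else
    result="needs root and iptables; skipping"
fi
echo "$result"
```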
What error do you get when, on srv1, you try to mount an srv2 disk?
ASKER
No remote programs registered.
ASKER
ignore that, I misunderstood.
mount: mount to NFS server 'srv2' failed: RPC Error: Program not registered.
Can you telnet from srv1 to srv2 and verify you really land on srv2 and not some other machine? After logging in, try rpcinfo -p localhost.
ASKER
I don't have a telnet service enabled on srv2, but I can ssh into it from srv1, and rpcinfo -p localhost gives the same output as it does on the srv2 console.
ASKER
This could possibly be a result of the kernel that I'm using. I've gone into menuconfig to reconfigure, and I have NFS support compiled as a module. Does anyone know if this will affect network access to NFS shares?
Unlikely to be a problem. You can do an lsmod on srv2 to verify that the nfs modules are loaded; mountd or exportfs should have pulled them in. Worth checking though - as root you can modprobe them if by some chance they're not loaded.
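A quick way to run that check - a sketch only, since the exact module names (nfs, nfsd, lockd, sunrpc are the common ones) can vary by kernel:

```shell
# List any NFS-related kernel modules currently loaded. If lsmod is
# unavailable or nothing matches, print a hint instead.
mods=$(lsmod 2>/dev/null | awk '$1 ~ /^(nfs|nfsd|lockd|sunrpc)$/ {print $1}' | tr '\n' ' ')
if [ -n "$mods" ]; then
    result="loaded: $mods"
else
    result="no NFS modules found; as root, try: modprobe nfsd"
fi
echo "$result"
```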
I just tried an experiment which might interest you: my router/firewall can nfs-mount file systems but doesn't usually (the facility is only there for maintenance). rpcinfo works fine to this system. The nfs module did not show in lsmod. I then did an nfs mount and the nfs module was loaded.
On my server system, the nfsd module is also loaded and there are nfsd daemons running.
ASKER
You're right - nfs was loaded by exportfs, and lockd and sunrpc are loaded as well, along with some others. Hopefully that means compiling a new kernel isn't necessary, since I've been running into trouble doing that as well.
Is there a possibility that Debian uses NFSv4 by default? I've only been trying to access from clients using v3.
I really don't know what they would have configured. You could check with make xconfig or make menuconfig - assuming you can find the .config your system was built with. It may be available as /proc/config.gz.
I think the real problem is whatever causes the failure of rpcinfo -p srv2 to show anything when issued from srv1. I did think that possibly srv1's /etc/hosts had srv2's IP address wrong, which is why I asked you to try a telnet. If you're using static addresses, it might still be worth checking that file for consistency on both systems. Otherwise, I'm starting to run out of ideas.
Your portmap is not running or answering correctly.
1) Disable the firewall completely!
2) Disable selinux
3) Telnet from srv1 (the working one) to srv2 (the not-working one) on port 111: telnet srv2 111. If you get a connection - even for just a short while - that proves *something* is listening on port 111; if you don't, nothing is listening on 111. To prove that the something is portmap, stop portmap (/etc/init.d/portmap stop) and repeat the test. If you still get a connection, something other than portmap is listening there, and portmap can't listen where something else already is. If you no longer get a connection, it was portmap listening there.
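The probe in step 3 can also be scripted without telnet, using bash's /dev/tcp pseudo-device. A sketch - "srv2" is the hostname from this thread, so substitute your own server:

```shell
host=${1:-srv2}   # server to test; srv2 is the hostname from this thread
port=111          # the portmapper port
# bash's /dev/tcp opens a plain TCP connection; timeout keeps the
# attempt from hanging on an unreachable host.
if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    result="something is listening on $host:$port"
else
    result="nothing reachable on $host:$port"
fi
echo "$result"
```

As in the telnet test, a successful connection only proves that *something* is on port 111; stopping portmap and re-running distinguishes portmap from an interloper.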
ASKER
1) How can I disable the firewall in Debian completely? I've flushed iptables, but I can't shut the service down because there is no /etc/init.d/iptables to stop.
2) I changed /etc/selinux/config so that the policy is disabled, then rebooted
3) I can telnet into port 111 when portmap is running and I'm not able to when it isn't
iptables is not a service; it's part of the IP stack in Linux. If you have flushed all the rules, it should have no effect.
As regards your point 3 - it seems you can definitely get a network connection to svr2's portmap. Does rpcinfo now report anything? How does rpcinfo -p svr2 behave when portmap is not running on svr2?
ASKER
rpcinfo -p srv2 (from srv1) still returns the error "No remote programs registered" when portmap is running, and when it's not I get "rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused".
Think I found it - I can duplicate the scenario here anyway.
For remote requests, the portmapper consults /etc/hosts.allow and /etc/hosts.deny before answering.
A typical hosts.deny file contains the line:
ALL:ALL
meaning that only hosts mentioned in hosts.allow get any service.
You can learn about the format of these files by typing "man -s 5 hosts_access"
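For example, a default-deny setup that still serves the portmapper to one trusted host might look like this (a sketch using this thread's hostnames; the format is "daemon: client list" per hosts_access(5)):

```
# /etc/hosts.deny -- deny everything not explicitly allowed
ALL: ALL

# /etc/hosts.allow -- but let srv1 talk to the portmapper
portmap: srv1
```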
09:13:49$ rpcinfo -p dullstar # ALL:ALL in dullstar's hosts.deny
No remote programs registered.
09:24:08$ rpcinfo -p dullstar # commented-out ALL:ALL
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100001 5 udp 911 rstatd
100001 3 udp 911 rstatd
100001 2 udp 911 rstatd
100001 1 udp 911 rstatd
ASKER
These are my hosts.allow and hosts.deny files on srv2:
/etc/hosts.deny
portmap: ALL
lockd: ALL
mountd: ALL
#rpcbind: ALL
rquotad: ALL
statd: ALL
/etc/hosts.allow
portmap: srv1, ubufs1, ubufs2, vmhost
lockd: srv1, ubufs1, ubufs2, vmhost
mountd: srv1, ubufs1, ubufs2, vmhost
#rpcbind: srv1, ubufs1, ubufs2, vmhost
rquotad: srv1, ubufs1, ubufs2, vmhost
statd: srv1, ubufs1, ubufs2, vmhost
Maybe this is a source of some of my problems, but these files are almost identical to what I have working on a Fedora installation.
ASKER CERTIFIED SOLUTION
ASKER
Ok, now I get the correct response from rpcinfo on srv1, but this doesn't really feel like the proper way of setting it up. I don't know why it wasn't working before, when I was just using the hosts.allow file above - the host names were all correct. Oh well; I'm not sure I care enough to do it by host name, so I may try using a subnet later, but for now this is good enough. Thanks.
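For the record, the subnet form mentioned above is supported by hosts_access(5) in net/mask notation (classic tcp_wrappers does not accept CIDR suffixes like /24). A hosts.allow along these lines - a sketch with a made-up 192.168.1.0 network - would avoid listing hosts individually:

```
# /etc/hosts.allow -- allow a whole subnet instead of individual hosts
# (net/mask form; hosts_access(5) also accepts a trailing-dot prefix
# such as "192.168.1.")
portmap: 192.168.1.0/255.255.255.0
lockd:   192.168.1.0/255.255.255.0
mountd:  192.168.1.0/255.255.255.0
rquotad: 192.168.1.0/255.255.255.0
statd:   192.168.1.0/255.255.255.0
```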