NFS Red Hat 7.3 problem

dryzone
Asked:
I ran into some problems when I upgraded my NFS server to RH7.3.

My exports, hosts, and hosts.allow files are the same as those used under the RH7.2 install that was previously running.
As an example, my exports file contains the following export rule:
" /public                     *(rw,no_root_squash) "

What I now find is that if a client wants to mount a volume on the NFS server, permission is suddenly denied.
"mount -t nfs   192.168.1.253:/public /public "   
now returns
mount: 192.168.1.253:/public failed, reason given by server: Permission denied.
Obviously some stricter access controls have been added in RH7.3, which I cannot seem to work out from the man pages.

Does anyone have an idea?
NFS works on RH7.2, 7.1, and back down to 6.2, but not on 7.3 with the above configuration.
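
In case it helps narrow things down, these are the obvious client-side checks (192.168.1.253 is the server, as above; both commands come with the stock portmap/nfs-utils packages):

# Is the portmapper answering, and are nfs and mountd registered on the server?
rpcinfo -p 192.168.1.253
# What does the server claim to be exporting, and to whom?
showmount -e 192.168.1.253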
Top Expert 2005

Commented:
That sounds familiar... I changed my exports file from the form you used to:

/nfs0   *.entrophy-free.net(rw,no_root_squash)

where I had a local DNS with all the local clients in it. Where I couldn't rely on a DNS reverse lookup, I use something like:

/nfs0   10.0.0.0/24(rw,no_root_squash) \
        10.0.1.0/24(rw,no_root_squash)
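
After editing /etc/exports, something along these lines should apply the change without a full restart and let you confirm what's actually being exported (standard nfs-utils commands; run them on the server):

# Re-read /etc/exports and apply any changes to the running server
exportfs -ra
# Show what the server is currently exporting, and to whom
showmount -e localhost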

Author

Commented:
I still find it peculiar.
I interchange a hard disk with RH7.0 and one with RH7.3, where the latter is just the original 7.0 disk upgraded to 7.3.
On 7.0 the NFS volumes can be mounted by clients.
With 7.3, forget it, UNLESS I reformat the drives back to ext2! Then at least one of them mounts, although all the settings in exports and fstab are identical. That is inconsistent and crazy.

The culprit seems to be NFS with respect to ext3.
It is a real mess. The man pages for ext3 and nfs do not give any detail about migrating to ext3.

Given that the reason ext3 was developed was BG's criticism of ext2 not being journaling and Reiser eventually corrupting your data, and the short time it took to cook up ext3, the problem lies at ext3's door, as it was definitely a backyard quick-fix job.

What is not clear from your answer is whether you actually had the problem and solved it as above. If so, then obviously ext3+NFS demands a local DNS and ext2+NFS does not.

If this is the case, I just see death for Linux home users who do not intend to set up DNS.

In general, RH7.3 is probably the worst Linux distro to date. I have had them all, and even RH7.0 shines in comparison.

Author

Commented:
The mount error message:
[root@gateway root]# mount -t nfs io:/public /public
mount: io:/public failed, reason given by server: Permission denied
[root@gateway root]#

Author

Commented:
I changed it as you prescribed by wildcarding my intranet:
/public   192.168.1.*/24(rw,no_root_squash)

Still the same problem.
NOTE: I did do an nfs stop, unmounted and remounted /public, and did an nfs restart again to make sure all the changes settled in after the edits to exports.
Top Expert 2005

Commented:
Hmm, I suspect something else is going on here. And yes, I did encounter the "permission denied" problem when using "*(rw,no_root_squash)"; it was fixed by using a wildcarded hostname or a network/netmask.

Have you checked to make sure that you don't have a firewall running on the server? That would certainly prevent clients from mounting exported volumes. Can you mount the volume on the server (mount localhost:/public /mnt)?
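
If you're not sure, something like this on the server would show whether packet filtering is in the way and whether the export works locally (use whichever of ipchains/iptables is actually in service; /mnt is just a convenient empty mount point):

# Any DENY/REJECT rules listed here could be blocking portmap or mountd
ipchains -L -n
iptables -L -n
# Test the export from the server itself
mount -t nfs localhost:/public /mnt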

It's not clear to me how you upgraded the system. Was it an in-place upgrade, or an install of 7.3? Two of my 7.3 systems were brought up to 7.3 by doing a normal install to the system disk and the data disks were converted from ext2 to ext3. NFS works normally on those. Another system was brought up to 7.3 by an in-place upgrade of 7.0 and its filesystems converted. It doesn't exhibit any NFS problems either.

I haven't seen anything that would suggest that there are any problems with ext3 vs ext2. And that's not overly surprising, since an ext3 filesystem is exactly an ext2 filesystem plus a journal. You can even take an ext3 disk to an older system and mount and use it in the normal manner; the journal is just ignored.

Author

Commented:
1) No firewall, as I don't need one on the intranet.
A firewall cannot be the problem, as one of the volumes DOES mount (ext2) while the ext3 volumes refuse; otherwise we have a contradiction.
I also stopped and removed both iptables and ipchains.

2) I tried both: an upgrade to RH7.3 and a new install. Both fail to export ext3 but do export all the original ext2 volumes.

3) YES, the NFS volume does mount via NFS on the NFS server, but it denies any client except itself.
In detail:
mount -t nfs io:/public /2
mounted the troublesome ext3 volume /public on /2.
Clients fail with permission denied.

The only common denominator to the problem I see is ext3... and it rhymes.
Top Expert 2005

Commented:
Okay, if the volume can be mounted on the server that would tend to imply that it isn't an ext2 vs ext3 problem. The export is happening or you wouldn't be able to mount the volume on the server.

So, something else is wrong. What are the permissions on /public and below? Are the clients doing a hard mount as the root user, or is the mount occurring via an automount?
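
For instance (using the names from your earlier messages, so adjust as needed), something like this would show the directory permissions on the server and give a more verbose view of the mount attempt from a client:

# On the server: ownership and mode of the exported directory
ls -ld /public
# On a client: attempt the mount verbosely as root
mount -v -t nfs io:/public /public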

Is this a stock 7.3 box or has it been updated (up2date or manually)?

Author

Commented:
I resolved it in a crude way: I dropped RH7.3 on that server and installed Mandrake, and it worked, even with ext3. There seems to be a bug in the RH7.3 distro.

I will give you the points as I changed direction completely, but will it be OK if I just ask a couple of minor sysadmin questions to justify the points? You only need to answer once or twice. That's it.

Top Expert 2005
Commented:
I guess there could be a bug with ext3 & NFS, but I haven't seen any evidence of it on any of my 7.3 servers. It may be significant that all of them are kept up to date with respect to the RH errata.

Sure, ask away with your questions and I'll try to answer them.

Commented:
I'm not sure of this, but I don't think you should use a wildcard with a <network>/<netbits> spec.

Try "/public   192.168.1.0/24(rw,no_root_squash)"



Commented:
Also, do you have any access control on your portmapper?
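
If tcp_wrappers is restricting it, entries along these lines in /etc/hosts.allow would be needed (the 192.168.1.0/255.255.255.0 network is just the intranet from the earlier posts):

# /etc/hosts.allow -- let the intranet reach the portmapper
# (and mountd/statd, if nfs-utils was built with tcp_wrappers support)
portmap: 192.168.1.0/255.255.255.0
mountd:  192.168.1.0/255.255.255.0
statd:   192.168.1.0/255.255.255.0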

Author

Commented:
Jlevie.
As discussed, a few questions.
Stale NFS handles are a major pain.
Sometimes I need to remount volumes and then everything is stale.
Even if I remove the mount info from /etc/mtab and erase /etc/mtab~ to satisfy locking, some versions of RH still present stale handles and I need to reboot to get rid of them... painful if your machine has been up for 200+ days.
umount -a still reports them as stale.
/etc/rc.d/init.d/nfs restart (or stop & start) does not get rid of them either, but I don't think that will have much influence, as a client definitely does not need the nfs service to mount remote NFS server volumes. I guess it would be rpc or something instead.

Anyway, if you know, let me know.
Top Expert 2005

Commented:
Yes, stale NFS file handles are a major pain. Most of the time the only way to clear them is to reboot the client. But there are ways to take evasive action and reduce the likelihood of them occurring in some cases.
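
Before falling back to a reboot, it can be worth trying a forced unmount, or a lazy unmount on 2.4.11+ kernels with a recent util-linux; no guarantees, but roughly:

# Force the unmount even though the server-side handle is stale
umount -f /public
# Lazy unmount: detach the mount point now, clean up once it's no longer busy
umount -l /public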

One technique for reducing the likelihood of a stale NFS file handle is to use an automounter for NFS resources that aren't constantly in use. For example, a user's home dir is a good candidate. With an automounter the NFS mount is done "on demand" and the mount will time out after a period of inactivity (usually 15 minutes or so). Obviously, if there's no NFS mount you can't get a stale file handle.

I'm somewhat partial to the amd automounter because it has significantly more capability than autofs. As an example of its use, I've got a web server that occasionally needs access to data stored on another system. So I created an amd map like:

[ /net ]
/defaults fs:=${autodir}/${rhost}/root/${rfs};opts:=nosuid,nodev
*       rhost:=${key};type:=host;rfs:=/

On the web server I made a symlink like:

ln -s /net/chaos/nfs0/databank /opt/databank

When the web server needs the data it simply accesses it as /opt/databank/.... Amd sees the access request and mounts the resource. Fifteen minutes after the last access to /opt/databank, the resource is unmounted.
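
For comparison, a roughly equivalent setup under autofs (the automounter that ships with Red Hat) would look something like this; the map name, mount point, and 900-second timeout are only illustrative:

# /etc/auto.master
/misc    /etc/auto.misc    --timeout=900

# /etc/auto.misc -- one entry per NFS resource
databank    -rw,nosuid,nodev    chaos:/nfs0/databank

Accessing /misc/databank then triggers the mount, and the resource is unmounted after the timeout expires.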

Obviously this doesn't help if continuous access to a resource is required.
