• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 187

The file test of "if (-e $filename)" takes a billion years to come back even when the file is there!

Ok, I'll expand a bit more...

I have a Linux server with an NFS mounted directory on a neighbouring DG/UX machine.

I have a script that, part-way through, checks for the existence of a file on the remote machine, e.g.

$filename = "/dir1/dir2/filename.txt";

if (-e $filename)
{
# Set some variables here
}

This file will be put here by a BASIC program running on the other machine, and therefore I need to keep checking until I see it there (I know it will be there categorically), so I put the above in a while loop, setting a "found" variable when I see it, and sleeping for a second before retrying.
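The loop described above can be sketched like this. This is a minimal illustration, not the poster's exact code; the path and the 60-second timeout are assumptions, and the timeout guard is an addition so the loop can't spin forever:

```perl
use strict;
use warnings;

# Poll until $filename exists, sleeping one second between checks.
# Returns 1 if the file appears within $timeout seconds, 0 otherwise.
sub wait_for_file {
    my ($filename, $timeout) = @_;
    for (1 .. $timeout) {
        return 1 if -e $filename;
        sleep 1;
    }
    return 0;
}

# Hypothetical usage -- substitute the real remote path:
# if (wait_for_file("/dir1/dir2/filename.txt", 60)) {
#     # Set some variables here
# }
```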

The problem is that this takes about 20-30 seconds to see the file, even though I can see it there within about 2 seconds.  I can prove this by changing the "if (-e..." statement above to a simple "sleep(4);" before setting my variables; then it works fine.

Now you're thinking "why not just do that then?" - because, depending on what information is being processed on the other machine, it might take longer than (x) seconds to write the file, and if I check too soon, my script (and resulting web page) will fail :-(

Two questions, then: does anybody know why it takes so long - is it because the file is remotely mounted via NFS?  And how can I make it faster - is there a better method of testing for the existence of a file, or perhaps a better loop altogether that would check quickly and respond as soon as it finds the file?

Hope you can help - much hair being removed :-)

Neil
Asked by: NTIVER
1 Solution
 
kanduraCommented:
I suspect it _is_ due to the nfs mount. Can you try the same thing locally, just to rule out it's the -e test?

What are you going to do with the file once it's available?

Maybe simply trying to open it might be quicker:

    if (open(my $fh, '<', $file)) {
        # file exists and is readable
        close $fh;
    }
 
TintinCommented:
A better way would be for the BASIC prog to write the file with a temporary name, and then rename it at the end.  That way you don't need to worry about if the file is incomplete.
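The write-then-rename idea translates to Perl like this (the writer here is really a BASIC program, so this is only a sketch of the pattern; the function name and paths are illustrative). Because rename is atomic on the same filesystem, the reader never sees a half-written file:

```perl
use strict;
use warnings;

# Write data to a temporary name, then rename to the final name.
# The rename is atomic within one filesystem, so any process polling
# for the final name only ever sees a complete file.
sub write_atomically {
    my ($final, $data) = @_;
    my $tmp = "$final.tmp.$$";          # temp name unique to this process
    open(my $fh, '>', $tmp) or die "open $tmp: $!";
    print $fh $data;
    close $fh or die "close $tmp: $!";
    rename($tmp, $final) or die "rename $tmp -> $final: $!";
}
```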
 
ahoffmannCommented:
sounds like a NFS caching problem, probably your nfsd on the remote site

 
NTIVERAuthor Commented:
Hi all - thank you for your comments.

OK, I'll try the if (open F, $file) syntax today to see if that works.

My process is:

User fills in HTML form with criteria, this gets submitted to a perl script - using the entered fields, this then writes a file to a remotely mounted directory on my other box.  Here it is picked up by my BASIC program (running from a polling routine), processed, and put back in the same directory.  My perl script then grabs it (once it exists!), and builds a web page of results in a new HTML page.  We're basically enabling people to enquire on our database from the web.

Tintin: With regard to your comment - the BASIC program takes less than a second to process the data and write the file, and writes it with a unique name (e.g. 13579.txt) anyway, so the same file will never exist twice.  The reason for this is so that two people can use the web page at the same time, each with their own unique identifier.

I'll let you know how I get on with the open...

Thanks for all your comments, and if you think of anything else in the mean time - let me know :-)

Neil
 
NTIVERAuthor Commented:
Hi all.

OK, I've changed it to be...

if(open...)

That HAS made it faster, however it still isn't seeing the file as soon as it is placed there; it still takes a good few seconds.

One thing I have noticed though: if, while the script is waiting for the file to appear, I do an "ls -l" against the remote directory to refresh my list of files, the script comes back immediately afterwards.  I'm thinking, is there a polling interval or similar at which remote NFS directories are refreshed?  If so, is it parameterised, or can it be made faster?

Many thanks all for your help - we're getting there :-)

Of course, I could cheat and set a script running every second to do the "ls -l" command and pipe it through to /dev/null, but that's a bit rubbish.
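A less rubbish version of that cheat is to do the directory read from inside the Perl script itself: re-reading the directory before each existence test nudges the NFS client into refreshing its cached attributes, much as "ls -l" does. This is a sketch under that assumption; the function name is made up:

```perl
use strict;
use warnings;
use File::Basename qw(dirname);

# Existence test that first re-reads the parent directory, which
# tends to refresh the NFS client's attribute cache (the same effect
# the poster saw from running "ls -l" by hand).
sub nfs_exists {
    my ($filename) = @_;
    my $dir = dirname($filename);
    opendir(my $dh, $dir) or return 0;
    my @entries = readdir($dh);   # force a fresh directory read
    closedir($dh);
    return (-e $filename) ? 1 : 0;
}
```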

Neil
 
kanduraCommented:
Looks like you do indeed need to tune your nfs setup. I'm not much of a sysadmin though. Try one of the OS topic areas here, I'm sure the folks there can help you with that.
 
NTIVERAuthor Commented:
Thank you for all who contributed - I'll post a further question RE the NFS problem in a more appropriate area, as although my situation is better than before, it's not as fast as I think it should be.

Neil
