
  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 306

unlink succeeds and the file has no content, but the file is still there

I use Perl to watch a directory. When a new file arrives, I copy it to a dest directory and then unlink it from the src directory. Most of the time this works, but in some rare cases the file is still left in the src directory, even though it was copied to the dest directory successfully. The leftover files are zero bytes in size. This rare case usually happens right on the hour. Has anyone seen this problem before?
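For reference, a minimal sketch of the kind of copy-then-unlink loop described above. The real script was not posted, so the paths and names here are assumptions:

#!/usr/bin/perl
use strict;
use warnings;
use File::Copy qw(copy);

# Hypothetical directories for illustration only.
my $src  = '/data/incoming';
my $dest = '/data/archive';

opendir my $dh, $src or die "opendir $src: $!";
for my $name ( grep { -f "$src/$_" } readdir $dh ) {
    copy( "$src/$name", "$dest/$name" )
        or do { warn "copy $name failed: $!"; next };
    # The unlink below is the step that sometimes leaves a zero-byte file behind.
    unlink "$src/$name" or warn "unlink $src/$name failed: $!";
}
closedir $dh;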
c11v11 Asked:
1 Solution
 
SurranoCommented:
There's probably some process holding the file open.
What filesystem(s) are we talking about?
Is the allocated space released on such occasions, or is it still occupied by a "phantom" version of the file?

You may try using "lsof" or "lsof -X" to find what process keeps the file open, but if it's a momentary thing then maybe you'll never catch it. Even if you catch it, you have to understand how to alter the behaviour of either the process in question or your script.

As a workaround, though, if it happens only between HH:00:00.000000 and HH:00:00.999999 then check the time before the unlink and if it falls in this interval then sleep one second. If you post a snippet of the script that does the unlink, I'll try to come up with a syntactically correct solution.
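A rough sketch of that sleep-past-the-boundary workaround in Perl; since the actual unlink snippet was never posted, the subroutine name and path are assumptions:

use strict;
use warnings;

# If the clock reads HH:00:00, wait out that second before unlinking.
sub unlink_with_top_of_hour_guard {
    my ($path) = @_;
    my ($sec, $min) = (localtime)[0, 1];
    sleep 1 if $min == 0 && $sec == 0;
    unlink $path or warn "unlink $path failed: $!";
}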
 
wilcoxonCommented:
Is your directory NFS mounted (or mounted via some other remote filesystem)? I've seen NFS do weird things (most often, seemingly, when it thinks a file is in use).
 
TintinCommented:
How are you checking that the file is complete, i.e. not still being written to, before copying it?
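One common heuristic for that check (not from this thread, just a sketch): stat the file twice a short interval apart and treat it as complete only when its size and mtime have stopped changing:

use strict;
use warnings;

# Heuristic only: a file is "probably complete" if its size and mtime are
# unchanged across a short delay. Not airtight, but useful when the
# writing software cannot be modified.
sub looks_complete {
    my ($path, $delay) = @_;
    $delay = 2 unless defined $delay;
    my @before = stat $path or return 0;
    sleep $delay;
    my @after = stat $path or return 0;
    return $before[7] == $after[7] && $before[9] == $after[9];   # size, mtime
}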
 
SurranoCommented:
Tintin's right; if that isn't well defined, unpleasant surprises can occur. If you can influence the software creating the files, have it do something like create the file as "somefile.someuniqueid.tmp" and, once it is complete, i.e. right after closing the file, rename it to "somefile.someuniqueid". Moving those renamed files around should then be a piece of cake.

If you can't influence the software and you don't know its exact behaviour, I think the best you can do is check whether the file is still open. Also, consider moving instead of copy+unlink if that makes sense in your use case.
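A sketch of that write-to-temp-then-rename pattern, plus the move-instead-of-copy+unlink idea. Paths and names are assumptions, not the poster's actual code:

use strict;
use warnings;
use File::Copy qw(move);

# Writer side: create the file under a .tmp name and rename it after close,
# so readers never see a half-written file. rename() is atomic within a filesystem.
sub publish_file {
    my ($data, $final) = @_;
    my $tmp = "$final.tmp";
    open my $fh, '>', $tmp or die "open $tmp: $!";
    print {$fh} $data;
    close $fh or die "close $tmp: $!";
    rename $tmp, $final or die "rename $tmp -> $final: $!";
}

# Reader side: move() replaces the separate copy-then-unlink step
# (it falls back to copy+delete when crossing filesystems):
# move("/src/dir/somefile.someuniqueid", "/dest/dir/somefile.someuniqueid")
#     or warn "move failed: $!";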
 
c11v11Author Commented:
We are using NFS. There are 160 clients writing to this NFS share at the same time. I asked the developer whether a tmp file is created when the clients write to the share, and he said yes, but I have not noticed one (the files are very small, so maybe it is just hard to catch). This only happens on the hour, which is very strange. Why would it only happen on the hour?
 
SurranoCommented:
Probably because a cron job runs at the same time, affecting either the files in question or the NFS share itself.
 
c11v11Author Commented:
This is the problem. Thanks.
