Any problem with unlinking a file without closing it

         Will there be any problem if a file is not closed before unlinking it?
For example, if we have a file called myFile:

myfilePointer = fopen(myFile, ...);

/* do some read/write processing */

unlink(myFile);

Actually, this is how it is in some code, and when it was run, a message like "Too many open files" appeared in the log. So I suspect it's because we are not closing the file, even though we are unlinking it. Am I right? Or is there some other reason?
jkr Commented:
That explains the problem. Just make it read

myfilePointer = fopen(myFile, ...);

/* do some read/write processing */

fclose(myfilePointer);
unlink(myFile);

Otherwise the number of file descriptors will be exhausted at some point; unlinking a file will not close it automatically. Closing file descriptors is not only good practice, it is a sheer *must*.
The worst thing that could happen is that 'unlink()' will fail.
dkamdarAuthor Commented:
Additional info: I was getting an error while opening a file, and the error message is, as mentioned earlier, "Cannot open file. Too many open files". The above pattern (i.e. opening the file and not closing it) is inside a while loop that runs for a long time.

This may be operating system dependent.  

In general, you MUST close files, otherwise they stay open and use up system resources.

It's just possible, though, that in the particular OS the program was written on, unlinking a file just happens to do an effective fclose().

That is NOT true in Windows NT or Unix: an unlinked file still exists until the last user has closed it.

I would do as jkr suggests and explicitly close the file, unless there's some overwhelming reason to do otherwise.

The practice of unlinking opened files is common on Unix systems.
Typically it is used for temporary [scratch] files.  The files disappear
as soon as they are closed or the process exits (even premature exits).

However, your condition seems more like a programming error, leaking
a system resource (in this case a file handle).


From the man pages for unlink:
       unlink  deletes  a  name from the filesystem. If that name
       was the last link to a file and no processes have the file
       open  the  file  is  deleted and the space it was using is
       made available for reuse.

       If the name was the last link to a file but any  processes
       still have the file open the file will remain in existence
       until the last file descriptor referring to it is  closed.

       If  the  name  referred  to  a  symbolic  link the link is
       removed.

Refer to your compiler's documentation for unlink().
For example, my DOS compiler's (Turbo C) help says to make sure that the file is closed before calling unlink() on it.

In your case at least, the unlink() call does not release the file handle, so close the file before unlinking it.

As grg said, this IS OS dependent.

It's always safer to close a file before unlinking it; otherwise you'd be relying on unlink() to release the file handle, which is not guaranteed.
I guess *nix and Windows semantics differ here. On Win*, you cannot delete (or even rename) a file that is open in a process. On *nix, you can, and the file will be removed from the filesystem (i.e. unlinked), but it remains a valid "storage area" for whichever processes hold a handle to it. (Note that, similarly, on *nix you can delete a process's main executable after it has been read and launched, whereas on Windows you cannot even hope to rename the file until the process has exited.)

Although it is valid behaviour to unlink a file without releasing its handle (i.e. closing it) in *nix, I suspect it is not the behaviour you want in this case. On top of that, you are in a loop, which means lots of files are being deleted but left open. Too much of anything is a problem; just think what would happen if the OS allocated 4 KB of cache for every file! And this is in fact what is producing your error message: you are exceeding the limit on the maximum number of file descriptors [files, pipes, sockets, etc.] a process may have open at any given time, which is imposed by the OS/kernel (and sometimes the C library). Raising the limit is one solution, but cleaning up the code so it closes its files is a much cleaner one.