Any problem with unlinking a file without closing it

Posted on 2004-09-17
Last Modified: 2010-04-15
         Will there be any problem if a file is not closed before unlinking it?
For example, if we have a file called myFile:

myfilePointer = fopen(myFile, ...);

/* do some read/write processing */

unlink(myFile);

Actually, this is how the code reads in some places, and when it runs we see the message "Too many open files" in the log. I suspect this is because the file is never closed even though we unlink it. Am I right, or is there another reason?
Question by:dkamdar
LVL 86

Expert Comment

ID: 12087727
The worst thing that could happen is that 'unlink()' will fail.

Author Comment

ID: 12088074
Additional Info: I was getting an error while opening a file; the message, as mentioned earlier, is "Cannot open file. Too many open files". The pattern above (opening the file and not closing it) is inside a while loop that runs for a long time.
LVL 86

Accepted Solution

jkr earned 500 total points
ID: 12088312
That explains the problem. Just make it read

myfilePointer = fopen(myFile, ...);

/* do some read/write processing */

fclose(myfilePointer);
unlink(myFile);

The number of file descriptors will be exhausted at some point; unlinking a file will not close it automatically. Closing file descriptors is not only good practice, it is a sheer *must*.

LVL 22

Expert Comment

ID: 12089161
This may be operating-system dependent.

In general, you MUST close files, otherwise they stay open and use up system resources.

It's just possible, though, that on the particular OS the program was written for, unlinking a file happens to do an effective fclose().

That is NOT true on Windows NT or Unix: an unlinked file still exists until the last user has closed it.

I would do as jkr suggests and explicitly close the file, unless there's some overwhelming reason to do otherwise.

LVL 23

Expert Comment

ID: 12089436
The practice of unlinking opened files is common on Unix systems. Typically it is used for temporary [scratch] files: the files disappear as soon as they are closed or the process exits (even on premature exits).

However, your condition seems more like a programming error, leaking
a system resource (in this case a file handle).


Expert Comment

ID: 12089977

From the man pages for unlink:
       unlink  deletes  a  name from the filesystem. If that name
       was the last link to a file and no processes have the file
       open  the  file  is  deleted and the space it was using is
       made available for reuse.

       If the name was the last link to a file but any  processes
       still have the file open the file will remain in existence
       until the last file descriptor referring to it is  closed.

       If  the  name  referred  to  a  symbolic  link the link is
       removed.

Refer to your compiler's documentation for unlink(). For example, my DOS compiler's (Turbo C) help says to make sure that the file is closed before calling unlink() on it.

In your case at least, the unlink() call does not release the file handle, so close the file before unlinking it.

As grg said, this IS OS dependent.

It's always safer to close a file before unlinking it; otherwise you'd be relying on unlink() to release the file handle, which is not guaranteed.

Expert Comment

ID: 12091067
I guess *nix and Windows semantics differ here. On Windows, you cannot delete (or even rename) a file that is open by a process. On *nix, you can, and the file will be removed from the filesystem (i.e. unlinked), but will remain a valid "storage area" for whichever processes hold a handle to it. (Note that, similarly, on *nix you can delete a process's main executable after it has been read and launched, whereas on Windows you cannot even hope to rename the file until the process has exited.)

Although it is valid behaviour on *nix to unlink a file without releasing its handle (i.e. closing it), I suspect it is not the behaviour you want in this case. On top of that, you are in a loop, which means that lots of files are being deleted but left open. Too much of anything is a problem; just think what would happen if the OS allocated 4 KB of cache for every file! And that is exactly what is giving you the error message: you are exceeding the limit on the number of file descriptors (files, pipes, sockets, etc.) a process may have open at any given time, a limit imposed by both the C library (FOPEN_MAX in <stdio.h>) and the OS/kernel itself. Raising the limit is one solution, but cleaning up the code so it closes its files is a much cleaner one.
