
Consequences of writing to the file handle of a closed file

I had a bug (since fixed), as follows.

   out_fp = fopen("RXCOVER.OUT", "wt");

   for (file_count = 0; file_count < 4; ++file_count)
   {
       fp = open_file(fn[file_count], "rb+");

       /* for each record in the file fp:
            read a record from fp;
            if the record meets a certain condition,
                write a message to out_fp */

       fclose(out_fp);  // WRONG!! this fclose() belongs
                        //  outside the loop!!
   } // for file_count


So, after the first iteration of the loop, out_fp was CLOSED,
yet I was still trying to write data to it.

Is it possible that, since out_fp was closed, (bad) data could
have been written to fp instead?

The reason I ask is that a few records in fp became trashed.  When I looked to
see what program I might have run at the time the datafile was last modified
(17:06, 11/6/04), this program was indeed run at that time.

Stephen Kairys, Technical Writer - Consultant, asked.

Jaime Olivares, Software Architect, commented:
Why not simply put the fclose after the second for end braces?
Stephen Kairys, Technical Writer - Consultant (Author), commented:

Thanks for the quick response.

Perhaps, though, I did not state my question clearly.
I already fixed the bug just as you correctly indicated.

What I was wondering: while the bug was still in the program
and I was running it on my dataset, could it have caused
data in FP to become trashed?

I'm particularly suspicious because of the following.

1. The program writes to OUT_FP if AND ONLY IF (for the sake
of example) a status flag in the file == 100.

2. When I look at the trashed records in the data file,
at least two of them are ONE RECORD AFTER a record
with a status flag of 100.

3. So, I'm thinking that when the program attempts to write
to the closed file, it somehow writes to FP instead.

Does that info help you answer my question?
Thanks again!
After you closed the file, write attempts to the file will fail.
However, if your program opened another file subsequently,
it likely acquired the same file handle.  [In fact, it must do
so for stdio to work correctly.]  In that case writing through
the stale file handle could corrupt a separate file.  This is
certainly true for the level 1 I/O APIs (open, close, read, write).

With the buffered, level 2 I/O APIs (fopen, fclose, fread, fwrite),
the situation is even more complicated.  The call to fclose()
deallocates the FILE structure that was allocated via fopen().
Using the stale FILE structure adds using freed memory to
the list of problems.


Stephen Kairys, Technical Writer - Consultant (Author), commented:
Would fprintf() be considered level 1 or level 2 i/o? That's
what I used to write to out_fp.

Now, I did use fopen() and fclose(), so I know I have
some level 2 I/O there. Can I assume your comments
about level 1 (acquiring the same file handle, etc.) still apply?
Finally, I "reintroduced" my bug and dumped out the
value of the file handle. (I used %ld; I assume that
was correct?)  Sure enough, when I opened
fp for the second time, its value was the same as
the one out_fp "used" to have.

One way to avoid this problem is to zap your copy of the file handle or file pointer after a close.  For example:

close( out_handle );  out_handle = -1;


fclose( out_file );   out_file = NULL;


That way your program will go BOOM if it ever tries to use that file handle or pointer, instead of possibly scribbling over some other file.

Stephen Kairys, Technical Writer - Consultant (Author), commented:
Well, I tried it (for both file handles) and it did NOT go boom! :)  

In addition, the debugging code I referred to above still shows
the following (I've substituted smaller values to make
it more readable :) )

Opening out_fp: value of file handle is 606

Opening fp:  value of file handle is 699

close fp and set to NULL

close out_fp and set to NULL

opening fp: the value of the file handle is 606, the
same as the prev. value of out_fp

Anyhow, I'm reasonably convinced that the above
bug could have corrupted my file, but I will
try to duplicate it tomorrow.


Sorry, I can not remember where I read "level 1" and "level 2" file I/O APIs
(maybe Microsoft doc from the '80s).

Level 1 i/o uses open(), creat(), read(), write(), lseek(), close() for block access
to a file.  open() and creat() return an integer file descriptor that is passed
to the others.  Opening a file returns the lowest-numbered unused file
descriptor.  For instance, suppose file descriptors 0-4 are open and you open
a new file: open() will return 5.  If you close file descriptor 2 and then
open a new file, open() will return 2 as the new file descriptor.  This is
how command shells implement I/O redirection.

Level 2 i/o uses fopen(), popen(), fread(), fwrite(), fprintf(), fgets() etc, for buffered
stream access to files.  fopen() and popen() return a pointer to a FILE structure
which is used in the remaining calls.  The FILE structure contains the
buffering state, making it easier to read lines and individual characters
(getc(), ungetc(), etc.).
The level 2 suite uses the level 1 calls to provide the underlying access to the file,
and the file descriptor is stored in a member of the FILE structure.  If the system
fclose() does not set the embedded file descriptor to -1 after closing the file, the
structure will contain a stale file descriptor.  I would not expect fclose() to take
this precaution since it will immediately deallocate the FILE structure itself.

If you closed the file with fclose(), AND opened another file which would reuse
the previous integer file descriptor, AND you wrote through the deallocated FILE
structure, you could corrupt the newly opened file thinking you were writing to
the previously opened file.  Note that fopen() is unlikely to reallocate the same
memory for the FILE structure, however when it calls open(), that call might
return a previously closed integer file descriptor.
I'd suggest one of two possibilities:

1. The fclose is executing asynchronously. Your next attempt at writing to the closed file got in before the file was actually closed.
2. The fclose actually leaves the file open, just marking the file as closed. This might be a form of cacheing to speed up the case where you open the file again soon after.

It is quite possible to open a file, close it and open it again and discover that the file handle is the same. Especially on a windoze system where there are usually only about 10 file handles available to you by default anyway.

Generally speaking, whenever you close a file, set the handle to NULL. If only the old ANSI people had specified that fclose returned 'NULL' there would now be loads of much safer code out there that says:

FILE * fp = fopen ( ... );


fp = fclose ( fp ); // This is illegal, but I wish it wasn't.

Stephen Kairys, Technical Writer - Consultant (Author), commented:
Well, I just tried to re-create/duplicate the bug, without success.

Even though the file handle of the data file that was corrupted (FP) DOES have the same value
as the file handle of the output file (out_fp) that was closed, I'm not seeing the trashed
data I expected, nor is the timestamp of the data file being updated.

Is it possible  this behavior is random?

Stephen Kairys, Technical Writer - Consultant (Author), commented:
Further update:

Now, I duplicated it.

I removed setting the file handles to NULL and now I can recreate the bug. Saw it happen
right before my eyes.  The timestamps on the 3 files being processed after the mistaken
close got updated, and the data indeed got trashed.

I'm actually relieved. This bug was in a stand-alone utility I had written for internal testing.
I'd much rather the bug be there, as opposed to in my application itself :)

Yep, this handle-aliasing error could be fixed if the C library file handles were allocated sequentially and not reused, but there's probably too much code that depends on them being small integers.

So it still could be either of my suggestions.

I can't think of a way of determining which one it is, if it's either of them. Perhaps, if you are on a Windoze machine, you could use FileMon.

Stephen Kairys, Technical Writer - Consultant (Author), commented:
So, I guess when I was dumping out my file handles,
I should have used %d instead of %ld?
Stephen Kairys, Technical Writer - Consultant (Author), commented:
OK. I'm satisfied that the bug in this utility program caused the other file to be trashed.
Closing issue, and thanks to everyone for their help!