• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 2078

need fast binary file write using c++

A program I am responsible for writes a large amount of data to a binary file. The users can stop the process and exit, and later restart it. In that case, the 'old' file is opened and JUST the data portions of the previous file (excluding the header info, time stamps, etc.) are copied to the next file; then the process continues to collect data and write it to the file (basically appending the new data to the old). This copying of the data is the slowdown for me. For example, 4000 pieces of previous data take about 30-40 seconds to write to the new file. During this time, the user just sits there and waits. ANYWAY, I am using Rad Studio 2007 C++ as the IDE. The command I use for file output is _rtl_write. Does anyone have a better way of doing this? Would the old standard 'fwrite' be faster? Memory-mapped files? Suggestions are appreciated.
BrianDumas
Asked:
1 Solution
 
Infinity08Commented:
Can't you simply append the new data to the old file ?
#include <fstream>

std::ofstream outfile("out.bin", std::ios_base::out | std::ios_base::app);  // open the file for appending

// write the new data ... it will be appended at the end of the file

outfile.close();


 
BrianDumasAuthor Commented:
No. The files have a header section that is specific to the time the file was created. Then the bulk of the file is data, and finally the last section(s) of the file are a summary of the data. So, just appending won't work. Maybe something where a large block from the 'old' file can be written to the 'new' file?
 
Infinity08Commented:
>> The files have a header section that is specific to the time the file was created.

Can you modify the header to fit the new time? I.e., overwrite parts of the header with the new information?
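A minimal sketch of that idea: open the existing file for update (not truncation) and rewrite only the bytes at the front. The `Header` struct and its fields here are hypothetical stand-ins for whatever record the application actually writes at the start of the file.

```cpp
#include <cstring>
#include <fstream>
#include <string>

// Hypothetical fixed-size header record; the real layout is whatever
// the application writes at the start of its files.
struct Header {
    char magic[4];
    long timestamp;
};

// Overwrite just the header at the start of an existing binary file,
// leaving the data portion untouched.
bool patchHeader(const std::string& path, const Header& h)
{
    // in|out keeps the existing contents; do NOT use ios::trunc here,
    // or the data after the header would be discarded.
    std::fstream f(path.c_str(),
                   std::ios::in | std::ios::out | std::ios::binary);
    if (!f) return false;
    f.seekp(0, std::ios::beg);   // position at the header
    f.write(reinterpret_cast<const char*>(&h), sizeof(h));
    return f.good();
}
```

This avoids copying the data section entirely: only `sizeof(Header)` bytes are rewritten, no matter how large the file is.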
 
itsmeandnobodyelseCommented:
>>>> 4000 pieces of previous data is taking about 30-40 seconds to write to the new file.

Some ideas (some building on what Infinity08 suggested).

You could use CopyFile to copy the old file to a new one, thus using the full power of the OS, and then update only the header portion.
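On Windows the copy step would be the Win32 `CopyFile()` call; a portable sketch of the same idea — copy the whole old file in one streamed operation, then patch the header in place — might look like the following (the function name and header-size parameter are illustrative):

```cpp
#include <cstddef>
#include <fstream>
#include <string>

// Copy oldPath to newPath in one streamed operation, then overwrite
// the first headerSize bytes with newHeader. (On Windows, the copy
// step could instead be a single Win32 CopyFile() call.)
bool copyAndPatch(const std::string& oldPath,
                  const std::string& newPath,
                  const char* newHeader, std::size_t headerSize)
{
    {
        std::ifstream in(oldPath.c_str(), std::ios::binary);
        std::ofstream out(newPath.c_str(), std::ios::binary);
        if (!in || !out) return false;
        out << in.rdbuf();          // whole-file copy in one call
        if (!out) return false;
    }                               // streams closed here
    // reopen the copy for update and patch the header in place
    std::fstream f(newPath.c_str(),
                   std::ios::in | std::ios::out | std::ios::binary);
    if (!f) return false;
    f.write(newHeader, headerSize); // stream opens positioned at byte 0
    return f.good();
}
```

The whole-file copy lets the OS and runtime pick the buffering, instead of issuing thousands of small record-sized writes.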

You could write all 4000 "pieces" using one single write, so the current file does not need to be enlarged 4000 times. (It is actually fewer than 4000, because the filesystem does some optimizations as well, but I would assume most of the 30-40 seconds can be saved by doing a single write only - provided the disk isn't badly fragmented.)

#include <fstream>
#include <sys/stat.h>

   ifstream ifs(oldfile.c_str(), ios::in | ios::binary);
   ofstream ofs(newfile.c_str(), ios::out | ios::binary);
   struct stat fs = { 0 };
   if (stat(oldfile.c_str(), &fs) != 0) throw ("error: stat");
   char* buffer = new char[fs.st_size];   // one buffer for the whole old file
   if (!ifs.read(buffer, fs.st_size)) throw ("error: read");
   if (!ofs.write(buffer, fs.st_size)) throw ("error: write");
   ofs.seekp(0, ios_base::beg);           // seekp (not seekg) on an output stream
   if (!ofs.write(newheadrec, sizeof(newheadrec))) throw ("error: write header");
   delete [] buffer;
   ifs.close();
   ofs.close();


You could always reuse the same file as the target file. That way the file does not have to be enlarged while writing.
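A related trick is to pre-size the target file once before the data writes start, so later writes never have to grow it. A minimal sketch (assumed approach: seek to the last byte and write it; note that on many filesystems this creates a sparse file rather than truly reserving the blocks):

```cpp
#include <fstream>
#include <string>

// Pre-size the target file so subsequent writes do not repeatedly
// enlarge it. On many filesystems this produces a sparse file, so it
// reserves the logical size, not necessarily the physical blocks.
bool preallocate(const std::string& path, std::streamoff size)
{
    if (size <= 0) return false;
    std::ofstream f(path.c_str(), std::ios::binary);
    if (!f) return false;
    f.seekp(size - 1, std::ios::beg);  // jump to the final byte...
    f.put('\0');                       // ...and write it, fixing the length
    return f.good();
}
```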

 
itsmeandnobodyelseCommented:
It should be

#include <fstream>
using namespace std;
#include <sys/stat.h>
