Solved

Strange slowdown for fread.

Posted on 2007-11-28
561 Views
Last Modified: 2013-11-25
Hi All,

I'm not sure if this is a bug or not but it is intriguing.

I have a tool (my code) which searches for duplicate records within a file and between one file and an archive of up to 10 files.

Each file is between 80,000 and 150,000 records long and between 25 MB and 50 MB in size.

Duplicate checking is only required for the first 90 bytes of the record.

To avoid sorting the files I generate an index file for each data file containing the absolute offset into the data file for each record in sorted order.

The internal duplicate (look for duplicates within the file) check consistently takes just a few seconds. The external check (against archived files) takes around the same time when there are less than 5 archives to process but when 6 or more files are to be processed performance drops to a crawl!!!

Careful use of ftime has narrowed the time waster down to fread.

When running at full speed, fread takes an undetectable amount of time to run, with the occasional 15ms hit every 2000-4000 calls, probably due to caching by the OS.

When running like a dog, fread consistently takes 15ms every other invocation, and sometimes 30ms or 45ms.

What could make this happen? How can I avoid it?

Process Monitor reports CPU usage at around 90% when running at full speed and around 2% when running slow. No other process shows a significant amount of CPU usage, but the PC becomes very sluggish while the checker is running slow.

I get the same kind of results when running on a quad-core 2 TB RAID server as on a dual-core desktop.

Any ideas?
Question by:PaulCaswell
19 Comments
 
LVL 53

Expert Comment

by:Infinity08
The first idea would be that you're overloading the bandwidth to the hard disk. What kind of hard disk do you have? Can you monitor the hard disk read/write performance while running the tool at full speed?

What would make this worse is having several files open at the same time which you are reading/writing alternately. The hard disk cache will have trouble keeping up, because it will have to switch sectors all the time ...
 
LVL 53

Expert Comment

by:Infinity08
>> What would make this worse is having several files open at the same time which you are reading/writing alternately.

Btw, from what I understand, that's exactly what you do, no?
 
LVL 16

Author Comment

by:PaulCaswell
On disk I/O: Sysinternals Process Explorer shows no significant disk access for ANY process while the code runs slowly; however, the system is significantly unresponsive.

I am careful about how many files I hold open at any time. I am confident that I never hold more than 6 open at once: two indexes, two data files and perhaps a couple of log files.

The really weird thing is that I can invoke it with 5 archived files and it runs at full speed for ALL files, but run it against 6+ and it runs like a dog for ALL files. Each archive file is processed separately using a FindFirstFile/FindNextFile loop.

Are there potential side-effects to holding two FindFirstFile/FindNextFile loops running at the same time?

Thanks for your thoughts.

Paul
 
LVL 39

Expert Comment

by:itsmeandnobodyelse
>>>> probably due to caching by the OS.

Maybe you answered the question yourself. IMO, in the slow case the caches won't find a hit and have to read from disk again. It may be the disk cache, if one physical disk holds all these files, or it may be that the stream buffers were reloaded from the page file. The low CPU doesn't give a hint, because in both cases it is the (additional) disk I/O that makes it slow. You could try opening half of the archives from a network disk (ideally connected via a gigabit network). If it slows down with the sixth archive as well, the problem is probably the program and not the disk.

>>>> I get the same kind of results when running on a quad-core
>>>> 2tb-raid server as on a dual-core desktop.
Then the disk driver is not the reason.

I assume there is no multi-threading involved when accessing the archive files? You simply check the same record against all archive files? If so, did you try to check only 5 archive files at a time and do the rest in a separate run? Is that an option?

>>>> fread can consistently take 15ms every other invocation
Are you measuring with GetTickCount? It only has 15/16 ms accuracy.

>>>> and an archive of up to 10 files.
Is there a reason why the archives are not appended into one big file? Or maybe a fast DB like Berkeley DB is an option. They offer very fast keyed access to records with very little overhead.


Regards, Alex
 
LVL 53

Expert Comment

by:Infinity08
>> Each archive file is processed separately using a FindFirstFile/FindNextFile loop.

So, you only start processing the second archived file when the first has finished, etc.? When it comes to disk caching, there shouldn't be a difference between processing 5 or 6+ files.

Is it always the same 6th file? Is something odd with the file (corrupted maybe, or ...)?

What happens if you have two loops, each processing 5 files, with a pause between them?

Just wildly guessing here ;)
 
LVL 40

Assisted Solution

by:evilrix
evilrix earned 50 total points
>> On disk I/O: Sysinternals process explorer shows no significant disk access for ANY process while the code runs slowly, however, the system is significantly non-reactive.
I have exactly the same problem when building a very large project using SCons (different scenario, same symptoms). Process Explorer shows the CPU is idle and no significant disk access; however, Perfmon (using the physical disk % time counters) clearly shows that the disk is being accessed 100% of the time, although the HD light does not flicker either! The machine itself becomes wholly unresponsive, which I have concluded is probably down to the fact that the I/O bottleneck is preventing pages from being swapped.

Curiously, building the exact same project under Linux, also using SCons, results in no significant performance issue. I can only assume this is something specific to the caching mechanism Windows implements, to which Alex and I8 have already alluded. Of course, your problem may be wholly unconnected, but I thought I'd mention it anyway as it might help guide others to a potential solution for you.

As an aside, if you are looking to implement a DB solution you could do worse than look at SQLite: http://www.sqlite.org/
 
LVL 16

Author Comment

by:PaulCaswell
Alex,

>> ... the caches won't find a hit and have to read from disk again ... try to open half of the archives from a network disk ...

Good idea, and a brilliant test. I'll try that.

>>> ... no multi-threading involved ...

correct

>>> did you try to check only 5 archive files at a time and do the rest with a separate run?

A possibility, but I would prefer to truly understand what is going on.

>>>> Are you measuring by GetTickCount? It has a 15/16 ms accuracy only.

Good point. I will switch to a finer timer.

>>>> Is there a reason why the archives are not appended to one big file? ...

Yup! They must be held completely intact, and I really don't want to add a database to the mix with all its admin headaches.

Infinity08

>>>> So, you only start processing the second archived file when the first has finished, etc.? When it comes to disk caching, there shouldn't be a difference between processing 5 or 6+ files.

My point exactly.

>>>> Is it always the same 6th file? Is something odd with the file (corrupted maybe, or ...)?

Nope. Sometimes the 7th, sometimes the 5th. The files are not corrupt, and I have checked my indexing code for potential signedness issues, but I will check it again. There is a different set of files each day in a rolling set. This issue shows on every run.

>>>> What happens if you have two loops, each processing 5 files, with a pause between them?

See above, but worth a try.

evilrix,

The problem you describe looks almost exactly like what I see. Perhaps it is more common than I thought. I will check with PerfMon. Have you tried switching off your anti-virus while your build is in progress?

All,

I think this is more than a simple caching issue. Process Monitor points no finger at who is taking the time, which suggests something behind the scenes. System Idle has 99%, disk I/O shows a short blip at the start of each file check but drops to zero almost immediately, and the disk light flickers every few seconds. When there are fewer than 5 files the graphs look just as expected: loads of disk I/O, 99% CPU usage, disk light on solid.

This is the kind of effect I would expect if there were too much caching.

Any more ideas?

Paul
 
LVL 16

Author Comment

by:PaulCaswell
The really weird part is: if there are more than 6 files to process, ALL of them are slow; fewer than five, ALL of them are optimal.

Paul
 
LVL 40

Expert Comment

by:evilrix
>> anti-virus while your build is in progress
Actually, I work for the AV company :)  The dev tree is excluded from the AV's on-access scanner.
 
LVL 22

Accepted Solution

by:grg99
grg99 earned 200 total points
Sounds like you're blowing your disk cache. Things will seem mighty speedy at first, as long as all the data can fit in the cache. But eventually, when you start reading file #7, the disk cache will toss out the least recently used blocks, probably from file #1.

That's fine if you don't ever go back to #1, but it's really bad if you're doing some round-robin processing -- the dumb disk cache algorithm will toss out exactly what you're going to need next!! Sigh.

Also, you did not mention how much memory you're using. If your program is leaking or intentionally allocating a lot of memory, that comes out of the RAM available to the disk cache.

Or worse, VM kicks in and starts swapping RAM to disk, which is exactly the opposite of what the disk cache is trying to achieve.

The solution is to check your program for leaks, or tighten up on any memory you're accumulating.

If that's not enough, you may benefit from another GB of RAM. Under $50 these days.
 
LVL 16

Author Comment

by:PaulCaswell
Hi Greg,

My test setup has 1 GB RAM, the live setup has 8 GB. I don't go anywhere near those limits. I am seeing the same effect on both.

There is little allocation/freeing of memory while the process is running.

Do you think it is possible that I am corrupting the disk cache through a dangling pointer? I would have thought that would be caught.

evilrix,

PerfMon does show memory activity and disk activity maxed out, cpu insignificant.

Could I be going too fast? Perhaps a few ms of sleep between each read may help. I'll try that too.

Paul

 
LVL 40

Expert Comment

by:evilrix
>> PerfMon does show memory activity and disk activity maxed out, cpu insignificant
This is completely consistent with my problem!
 
LVL 39

Expert Comment

by:itsmeandnobodyelse
>>>> Could I be going too fast? Perhaps a few ms of sleep between each read may help.
A sleep will throw your process out of schedule. It should have no impact on disk I/O, as that runs asynchronously.

Did you check eventvwr for disk problems (though that wouldn't fit with the same issue occurring on the RAID)?

Are you really sure that there is no anti-virus scanner active? I experienced this kind of slowness when a scanner checked all my temporary files while I was compiling a huge project.
 
LVL 39

Expert Comment

by:itsmeandnobodyelse
>>>> To avoid sorting the files I generate an index file for
>>>> each data file containing the absolute offset into the
>>>> data file for each record in sorted order.
Aside from that, whichever way the immediate problem gets solved, you should consider using a database for your program. Both Berkeley DB and SQLite need no significant administration, as they work with local database files. The effort involved is less than the indexing you do now.
 
LVL 39

Expert Comment

by:itsmeandnobodyelse
BTW, are the index files open as well? Or do you load them completely into memory? Did you ever try with smaller archive files? Or bigger ones? What is the memory usage and the size of the pagefile?
 
LVL 53

Expert Comment

by:Infinity08
>> The really weird part is, if there are more than 6 files to process, ALL of them are slow, less than five, ALL of them are optimal.

That is indeed the weirdest part: how can the first processed file be impacted by files that will be processed much later? There's a time-continuum problem somewhere, heh.

That's why I don't think it's a cache issue. Is the generated code different for 5 files than for 6+ files? Maybe the optimizer has done a very poor job, although it would have to be VERY poor to produce this kind of behavior.
 
LVL 22

Expert Comment

by:grg99
You might be fragmenting memory. Go to the Task Manager when it's slow and peek at the Performance tab, and the Processes tab sorted by memory usage.
 
LVL 16

Author Comment

by:PaulCaswell
I will certainly consider using a lightweight database tool in the future.

The indexes are left on disk. I will see what happens if I hold them in memory for speed.

Paul
 
LVL 16

Author Comment

by:PaulCaswell
OK! I seem to have removed the problem by caching my index entries. I chose not to cache the whole file, just 500 entries at a time.

So, yet again, I have to add code to work around OS issues. Yuck.

Paul
