PaulCaswell (United Kingdom) asked:

Strange slowdown for fread.

Hi All,

I'm not sure if this is a bug or not but it is intriguing.

I have a tool (my code) which searches for duplicate records within a file and between one file and an archive of up to 10 files.

Each file is between 80,000 and 150,000 records long and between 25 MB and 50 MB in size.

Duplicate checking is only required for the first 90 bytes of the record.

To avoid sorting the files, I generate an index file for each data file, containing the absolute offset into the data file of each record, in sorted order.
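Roughly, the per-record lookup then looks like this (a simplified sketch, not the exact code; the types and names are illustrative):

```c
/* Simplified sketch of the index-driven lookup; types and names are
   illustrative, not the actual tool's code. */
#include <stdio.h>
#include <string.h>

#define KEY_LEN 90   /* only the first 90 bytes matter for duplicates */

/* Read the 90-byte key of the record that starts at the given absolute
   offset in the data file. Returns 0 on success, -1 on failure. */
static int read_key(FILE *data, long offset, unsigned char key[KEY_LEN])
{
    if (fseek(data, offset, SEEK_SET) != 0)
        return -1;
    if (fread(key, 1, KEY_LEN, data) != KEY_LEN)
        return -1;
    return 0;
}

/* Two records are duplicates if their first 90 bytes match. */
static int keys_equal(const unsigned char *a, const unsigned char *b)
{
    return memcmp(a, b, KEY_LEN) == 0;
}
```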

The internal duplicate check (looking for duplicates within the file) consistently takes just a few seconds. The external check (against archived files) takes around the same time when there are fewer than 5 archives to process, but when 6 or more files are to be processed, performance drops to a crawl!

Careful use of ftime has narrowed the time waster down to fread.

When running at full speed, fread takes an undetectable amount of time, with the occasional 15 ms hit every 2,000-4,000 calls, probably due to caching by the OS.

When running like a dog, fread consistently takes 15 ms every other invocation, and sometimes 30 ms or 45 ms.
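The measurement is essentially a wrapper of this shape around each fread call (simplified; the real code differs in detail):

```c
/* Hypothetical timing wrapper around fread, in the spirit of the ftime
   measurements described above (not the tool's actual code; on MSVC the
   call may be spelled _ftime with struct _timeb). */
#include <stdio.h>
#include <sys/timeb.h>

static size_t timed_fread(void *buf, size_t size, size_t count, FILE *fp)
{
    struct timeb t0, t1;
    size_t n;
    long elapsed_ms;

    ftime(&t0);
    n = fread(buf, size, count, fp);
    ftime(&t1);

    elapsed_ms = (long)(t1.time - t0.time) * 1000L + (t1.millitm - t0.millitm);
    if (elapsed_ms >= 15)                 /* log the suspiciously slow calls */
        fprintf(stderr, "slow fread: %ld ms\n", elapsed_ms);
    return n;
}
```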

What could make this happen? How can I avoid it?

Process Monitor reports CPU usage at around 90% when running at full speed and around 2% when running slow. No other processes show a significant amount of CPU usage, but the PC runs very slowly while the checker is running slow.

I get the same kind of results when running on a quad-core server with a 2 TB RAID array as on a dual-core desktop.

Any ideas?
Infinity08 (Belgium):

The first idea would be that you're overloading the bandwidth to the hard disk. What kind of hard disk do you have? Can you monitor the hard disk read/write performance while running the tool at full speed?

What would make this worse is having several files open at the same time that you are reading from and writing to alternately. The hard disk cache will have trouble keeping up, because it will have to switch sectors all the time ...
>> What would make this worse is having several files open at the same time that you are reading from and writing to alternately.

By the way, from what I understand, that's exactly what you do, no?
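If that's what is happening, one thing worth trying is to give each stream a much larger stdio buffer with setvbuf, so the C library issues fewer, larger reads. Just a sketch; the 1 MB size is arbitrary:

```c
/* Sketch only: enlarge the stdio buffer for a stream right after opening
   it, before any I/O is done on it. The 1 MB size is arbitrary. */
#include <stdio.h>

FILE *open_buffered(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (fp != NULL) {
        /* NULL lets the C library allocate the buffer itself. */
        if (setvbuf(fp, NULL, _IOFBF, 1024 * 1024) != 0) {
            /* allocation failed: carry on with the default buffer */
        }
    }
    return fp;
}
```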
PaulCaswell (Asker):

On disk I/O: Sysinternals Process Explorer shows no significant disk access for ANY process while the code runs slowly; however, the system is significantly unresponsive.

I am careful about how many files I hold open at any time. I am confident that I never hold more than six files open at once: two indexes, two data files, and perhaps a couple of log files.

The really weird thing is that I can invoke it with 5 archived files and it runs at full speed for ALL files, but run it against 6 or more and it runs like a dog for ALL files. Each archive file is processed separately using a FindFirstFile/FindNextFile loop.

Are there potential side effects to having two FindFirstFile/FindNextFile loops running at the same time?
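For reference, each pass over the archives is a loop of roughly this shape (simplified; the real code obviously does more):

```c
/* Simplified shape of one archive pass (illustrative only). Each loop owns
   its own search HANDLE, so two of them running at once are independent. */
#include <windows.h>
#include <stdio.h>

static void for_each_archive(const char *pattern)
{
    WIN32_FIND_DATAA fd;
    HANDLE h = FindFirstFileA(pattern, &fd);

    if (h == INVALID_HANDLE_VALUE)
        return;
    do {
        printf("checking against %s\n", fd.cFileName);
        /* ... open the archive and its index, compare 90-byte keys ... */
    } while (FindNextFileA(h, &fd));
    FindClose(h);
}
```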

Thanks for your thoughts.

Paul
>>>> probably due to caching by the OS.

Maybe you answered the question yourself. IMO, when it runs slowly the caches don't get a hit and have to read from disk again. It may be the disk cache, if one physical disk is used for all these files, or it may be that the stream buffers are being reloaded from the page file. The low CPU usage doesn't give a hint, because in both cases it is the (additional) disk I/O that makes it slow. You could try opening half of the archives from a network disk (ideally connected via a gigabit network). If it still slows down at the sixth archive, the problem is probably the program and not the disk.

>>>> I get the same kind of results when running on a quad-core
>>>> server with a 2 TB RAID array as on a dual-core desktop.
Then the disk drive is not the reason.

I assume there is no multi-threading involved when accessing the archive files? You simply check the same record against all archive files? If so, did you try checking only 5 archive files at a time and doing the rest in a separate run? Is that an option?

>>>> fread can consistently take 15 ms every other invocation
Are you measuring with GetTickCount? It only has 15/16 ms resolution.
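A finer-grained alternative on Windows is the high-resolution performance counter, for example something like this (just a sketch):

```c
/* Sketch of a finer-grained timer: GetTickCount only ticks every ~15 ms,
   the performance counter is typically sub-microsecond. */
#include <windows.h>

static LONGLONG qpc_now(void)
{
    LARGE_INTEGER t;
    QueryPerformanceCounter(&t);
    return t.QuadPart;
}

/* Convert a difference of two counter readings to microseconds. */
static LONGLONG qpc_to_us(LONGLONG delta)
{
    LARGE_INTEGER freq;                     /* counts per second */
    QueryPerformanceFrequency(&freq);
    return delta * 1000000LL / freq.QuadPart;
}

/* Usage: LONGLONG t0 = qpc_now(); ... ; us = qpc_to_us(qpc_now() - t0); */
```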

>>>> and an archive of up to 10 files.
Is there a reason the archives are not appended into one big file? Or maybe a fast DB like Berkeley DB is an option; it offers very fast keyed access to its records with very little overhead.


Regards, Alex
>> Each archive file is processed separately using a FindFirstFile/FindNextFile loop.

So, you only start processing the second archived file when the first has finished, etc.? When it comes to disk caching, there shouldn't be a difference between processing 5 or 6+ files.

Is it always the same 6th file? Is something odd with the file (corrupted maybe, or ...)?

What happens if you have two loops, each processing 5 files, with a pause between them ?

Just wildly guessing here ;)
SOLUTION from evilrix (United Kingdom): content available to Experts Exchange members only.

PaulCaswell (Asker) replied:
Alex,

>> ... the caches won't find a hit and have to read from disk again ... try to open half of the archives from a network disk ...

Good idea, and a brilliant test. I'll try that.

>>> ... no multi-threading involved ...

correct

>>> did you try to check only 5 archive files at a time and do the rest with a separate run?

A possibility, but I would prefer to truly understand what is going on.

>>>> Are you measuring by GetTickCount? It has a 15/16 ms accuracy only.

Good point. I will switch to a finer timer.

>>>> Is there a reason why the archives are not appended to one big file? ...

Yup! They must be kept completely intact, and I really don't want to add a database to the mix, with all its admin headaches.

Infinity08,

>>>> So, you only start processing the second archived file when the first has finished, etc.? When it comes to disk caching, there shouldn't be a difference between processing 5 or 6+ files.

My point exactly.

>>>> Is it always the same 6th file? Is something odd with the file (corrupted maybe, or ...)?

Nope. Sometimes it's the 7th, sometimes the 5th. The files are not corrupt, and I have checked my indexing code for potential signedness issues, but I will check it again. There is a different set of files each day in a rolling set, and this issue shows up on every run.
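The kind of signedness trap I am checking for is a plain-char key comparison, which mis-orders bytes above 0x7F when char is signed, whereas memcmp compares as unsigned char. Purely for illustration:

```c
/* Illustration only: a plain-char comparison mis-orders bytes >= 0x80 when
   char is signed; memcmp compares as unsigned char and does not. */
#include <string.h>

#define KEY_LEN 90

int key_cmp_suspect(const char *a, const char *b)   /* char may be signed */
{
    int i;
    for (i = 0; i < KEY_LEN; ++i)
        if (a[i] != b[i])
            return a[i] - b[i];   /* 0xFF sorts before 0x01 here: wrong */
    return 0;
}

int key_cmp_safe(const void *a, const void *b)
{
    return memcmp(a, b, KEY_LEN); /* compares as unsigned char */
}
```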

>>>> What happens if you have two loops, each processing 5 files, with a pause between them?

See above, but worth a try.

evilrix,

The problem you describe looks almost exactly like what I see. Perhaps it is more common than I thought. I will check with PerfMon. Have you tried switching off your anti-virus while your build is in progress?

All,

I think this is more than a simple caching issue. Process Monitor points no finger at who is taking the time, which suggests something behind the scenes. System Idle is at 99%, disk I/O shows a short blip at the start of each file check but drops to zero almost immediately, and the disk light flickers every few seconds. When there are fewer than 5 files, the graphs look just as expected: loads of disk I/O, 99% CPU usage, disk light on solid.

This is the kind of effect I would expect if there were too much caching going on.

Any more ideas?

Paul
The really weird part is: if there are more than 6 files to process, ALL of them are slow; with fewer than five, ALL of them are optimal.

Paul
>> anti-virus while your build is in progress
Actually, I work for the AV company :)  The dev tree is excluded from the AV's on-access scanner.
ASKER CERTIFIED SOLUTION: content available to Experts Exchange members only.

PaulCaswell (Asker) replied:
Hi Greg,

My test setup has 1 GB of RAM; the live setup has 8 GB. I don't go anywhere near those limits, and I am seeing the same effect on both.

There is little allocation/freeing of memory while the process is running.

Do you think it is possible that I am corrupting the disk cache through a dangling pointer? I would have thought that would be caught.

evilrix,

PerfMon does show memory activity and disk activity maxed out, with CPU usage insignificant.

Could I be going too fast? Perhaps a few ms of sleep between each read may help. I'll try that too.

Paul

>> PerfMon does show memory activity and disk activity maxed out, cpu insignificant
This is completely consistent with my problem!
>>>> Could I be going too fast? Perhaps a few ms of sleep between each read may help.
A sleep will only throw your process out of the schedule. It should have no impact on disk I/O, as that runs asynchronously.

Did you check eventvwr for disk problems (though that wouldn't fit with seeing the same issue on the RAID)?

Are you really sure that there is no anti-virus scanner active? I experienced this kind of slowness when a scanner checked all my temporary files while I was compiling a huge project.
>>>> To avoid sorting the files I generate an index file for
>>>> each data file containing the absolute offset into the
>>>> data file for each record in sorted order.
Aside from getting the immediate problem solved somehow, you should consider using a database for your program. Both Berkeley DB and SQLite need no significant administration, as they work with local database files. The effort is less than the indexing you do now.
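For example, with SQLite the duplicate check becomes a keyed lookup on the 90-byte prefix, something like this (only a sketch: error handling is trimmed and the table/column names are invented):

```c
/* Sketch: the duplicate check as a keyed lookup in SQLite. Error handling
   is trimmed; the "records" table and "key" column are invented names, and
   the key column is assumed to have a unique index. */
#include <sqlite3.h>

/* Returns 1 if the 90-byte key already exists, otherwise inserts it and
   returns 0. */
int check_and_insert(sqlite3 *db, const unsigned char *key)
{
    sqlite3_stmt *stmt;
    int found = 0;

    sqlite3_prepare_v2(db, "SELECT 1 FROM records WHERE key = ?", -1, &stmt, NULL);
    sqlite3_bind_blob(stmt, 1, key, 90, SQLITE_STATIC);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        found = 1;
    sqlite3_finalize(stmt);

    if (!found) {
        sqlite3_prepare_v2(db, "INSERT INTO records(key) VALUES (?)", -1, &stmt, NULL);
        sqlite3_bind_blob(stmt, 1, key, 90, SQLITE_STATIC);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
    }
    return found;
}
```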
BTW, are the index files open as well, or did you load them completely into memory? Did you ever try with smaller archive files? Or bigger ones? What is the memory usage and the size of the page file?
>> The really weird part is: if there are more than 6 files to process, ALL of them are slow; with fewer than five, ALL of them are optimal.

That is indeed the weirdest part: how can the first processed file be impacted by files that will be processed way later? There's a time continuum problem somewhere, heh.

That's why I don't think it's a cache issue. Is the generated code different for 5 files than for 6+ files? Maybe the optimizer has done a very poor job, although it would need to be VERY poor for this kind of behavior.
grg99:

You might be fragmenting memory. Go to the Task Manager when it's slow and peek at the Performance tab, and at the Processes tab sorted by memory usage.
I will certainly consider using a lightweight database tool in the future.

The indexes are left on disk. I will see what happens if I hold them in memory for speed.

Paul
OK! I seem to have removed the problem by caching my index entries. I chose not to cache the whole file, just 500 entries at a time.
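The shape of the change is roughly this (simplified; the entry type and names are illustrative, not the actual code):

```c
/* Simplified shape of the fix: pull index entries into memory 500 at a
   time instead of issuing one small fread per entry. The entry type and
   names are illustrative. */
#include <stdio.h>

#define CACHE_ENTRIES 500

typedef struct {
    long entries[CACHE_ENTRIES];  /* cached record offsets                */
    long first;                   /* file position (in entries) of [0]    */
    int  count;                   /* number of valid cached entries       */
} IndexCache;

/* Return the data-file offset stored in index entry n, refilling the
   cache from the index file whenever n falls outside it. */
long index_offset(FILE *idx, IndexCache *c, long n)
{
    if (n < c->first || n >= c->first + c->count) {
        fseek(idx, n * (long)sizeof(long), SEEK_SET);
        c->count = (int)fread(c->entries, sizeof(long), CACHE_ENTRIES, idx);
        c->first = n;
    }
    return c->entries[n - c->first];
}
```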

So, yet again, I have to add code to work around OS issues. Yukk.

Paul