Inconsistent timing using clock_t

Part of my program is something like this:

{
....
//encoding timer starts
enc_start = clock();

Flush_Buffer(tab->len);

//encoding timer stops
enc_finish = clock();
duration[2] += (double)(enc_finish - enc_start);

//At the end of the program, I print out the total timing
duration[2] /= CLOCKS_PER_SEC;
printf("\nEncoding time = %2.3f secs", duration[2]);
....
}

The problem is that I'm getting different timings every time I run the program, and the difference can be relatively big.
Is there any way to get consistent timing, or some method other than averaging the timings?

Thank you
Asked by m1ck3y

Kryp commented:
> The problem is that I'm getting different timing every time I run the program.
Your program is not the only thing happening on your machine - unless it's DOS.
Any Windows or Unix operating system has a number of background tasks which all take time.

What does Flush_Buffer() do?
If it accesses large blocks of memory, then at best it will have to wait for the processor to refill the data cache.
At worst it will have to wait for the OS to reload the correct page from the swap file.

If it's performing any kind of I/O, then it will definitely have to wait for the hard disk.  Somewhere along the line, you have to wait for the disk head to move to the right track, then wait for the right sector to rotate round until it's underneath the head before data can be read/written.  The average seek time (move the head + wait for the rotation) of a disk is around 10ms - that's 10,000,000 clock ticks of your 1GHz machine (an awful lot of instructions).

All of these add random delays to the execution of your code, and enough small (and random) delays can add up to what appears to be a large overall difference.
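
One common way to dampen these random delays (a sketch only, not part of the original answer; Flush_Buffer and buf_len here are stand-ins for the poster's own routine and tab->len) is to time the same work several times and keep the fastest run, since background activity only ever makes a run slower:

#include <stdio.h>
#include <time.h>

// Stub standing in for the poster's own routine, so the sketch compiles.
static void Flush_Buffer(int len) { (void)len; }
static int buf_len = 1024;           // placeholder for tab->len

int main(void)
{
    clock_t best = 0;
    int run, have_best = 0;

    // Repeat the measured section and keep the fastest run; scheduling
    // and disk delays only add time, so the minimum is the closest
    // estimate of the undisturbed cost.
    for (run = 0; run < 10; run++) {
        clock_t start = clock();
        Flush_Buffer(buf_len);
        clock_t elapsed = clock() - start;
        if (!have_best || elapsed < best) {
            best = elapsed;
            have_best = 1;
        }
    }

    printf("Fastest encoding time = %.3f secs\n",
           (double)best / CLOCKS_PER_SEC);
    return 0;
}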


dimitry commented:
Can you provide an example of the durations you see?
Your method is pretty good. clock() has a resolution of about 55 ms in DOS and about 1 ms in Windows.
I suggest you calculate your durations in clock ticks rather than seconds - then you don't need to divide and lose precision.
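
For example, something along these lines (just a sketch; enc_start, enc_finish, Flush_Buffer and tab are taken from the original post):

// accumulate raw clock ticks, convert to seconds only when printing
clock_t enc_ticks = 0;

enc_start = clock();
Flush_Buffer(tab->len);
enc_finish = clock();
enc_ticks += enc_finish - enc_start;

// ... at the end of the program:
printf("\nEncoding time = %ld ticks (%.3f secs)",
       (long)enc_ticks, (double)enc_ticks / CLOCKS_PER_SEC);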
 
gj62 commented:
When you say the differences are relatively large, how large?  How long does the process last (less than a second, several seconds, etc.)?

Timing is affected by many things, especially on Windows - system processes, other processes, and so on.  Another thing that can throw off timing is running in debug mode.  Compile without debug, close all unneeded processes, and see if that helps.

Kocil commented:
Does Flush_Buffer() do I/O to the disk?
That may give you different timings because of
seeking, caching, fragmentation, etc.
 
Kent Olsen (Data Warehouse Architect / DBA) commented:

Your platform could be an issue, too.

I seem to recall that the DOS clock on the older PCs had a granularity of 7/100ths or 14/100ths of a second.
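
If you want to see what granularity clock() actually has on your own machine, a small probe like this (just a sketch) shows how big the jump between successive values is:

#include <stdio.h>
#include <time.h>

int main(void)
{
    // busy-wait until clock() changes; the size of the jump is the
    // effective resolution of the timer on this platform
    clock_t t0 = clock();
    clock_t t1;
    while ((t1 = clock()) == t0)
        ;
    printf("clock() tick = %.4f secs\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    return 0;
}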



Kdo
 
jmcg (Owner) commented:
Nothing has happened on this question in over 7 months. It's time for cleanup!

My recommendation, which I will post in the Cleanup topic area, is to
accept answer by Kryp.

PLEASE DO NOT ACCEPT THIS COMMENT AS AN ANSWER!

jmcg
EE Cleanup Volunteer