Q1) What is the exact meaning of CPU time and wall time? When I am checking the performance of an algorithm, which one should I use?

Q2) For example, in the code below,

//partial code begins here
before_time = (long) time(NULL);
before_clock = (double) clock();

//Matrix Multiplication
for (i = 0; i < ROW_A; i++) {
        for (j = 0; j < COL_B; j++) {
                sum = 0;
                for (k = 0; k < COL_A; k++)
                        sum += A[i][k] * B[k][j];
                C[i][j] = sum;
        }
}

after_time = (long) time(NULL);  /* note: time(&before_time) would overwrite the start value */
after_clock = (double) clock();

is it correct that
CPU time = after_clock - before_clock
Wall time = after_time - before_time?

Kindly clarify.


Jose Parrot (Graphics Expert) commented:
To measure algorithm performance you should use CPU time, by calling clock().
Wall time is the total elapsed time, including things unrelated to the algorithm, mostly Operating System activity and surrounding services: screen refreshing, polling, window message loops, the graphics pipeline, swapping, garbage collection.
That is why wall time > CPU time.
Ray Paseur commented:
CPU time is typically processor-only time, i.e., the time spent executing your code.  Wall clock time is just that: watch the program run and time it with an external clock.

In the code example you posted, if the processor is relatively lightly loaded, the CPU time and wall clock time (also called elapsed time) are likely to be nearly the same.  Wall clock time will tend to exceed CPU time when the processor is heavily loaded and gives your program only small slices of the available CPU.  Wall time will also exceed CPU time when the algorithm does lots of I/O operations.  Whereas CPU time is measured in nanoseconds, I/O service time is measured in milliseconds.

Best regards, ~Ray

Jose Parrot (Graphics Expert) commented:
A few clarifications:
1. Wall clock time doesn't merely tend to exceed CPU time. It is always larger than CPU time, regardless of the computing load.
2. The CPU gives out only small slices of availability if other heavy programs are running. The common condition on Windows PCs is for the CPU to be 90% or more available to your own program. Check it with Ctrl-Alt-Del: the Processes tab shows how much time goes to the System Idle Process. So the CPU gives most of its time slices to your program.
3. CPU time isn't measured in nanoseconds. The motherboard circuitry isn't capable of such fine time resolution.
4. The clock() function, used to measure CPU time, returns a number of ticks; the CLOCKS_PER_SEC macro gives the number of ticks in one second. If you need CPU time in milliseconds, take the difference of two clock() results, divide by CLOCKS_PER_SEC, and multiply by 1000.
5. If the algorithm performs lots of I/O operations that are part of the algorithm, they should be counted in the algorithm's time. If I/O operations happen without explicit calls by your algorithm, they must not be counted. Example: your program works with huge arrays, there is not enough memory to store them, and the Operating System is forced to map some memory to disk. In that case, CPU time and wall time will differ considerably.
6. I/O services aren't measured in milliseconds. Disk read/write operations are measured in MB/s; Ethernet operations in Mbps; serial-port I/O in baud. In each of these cases, the time is computed from the rate and the number of bytes, bits, or blocks, depending on the protocol.
Ray Paseur commented:
Jose: Very exhaustive explanation, thanks.  It includes a lot of assumptions, as must any answer to such a broad question.  

To your point 1, one's ability to discern whether wall clock time exceeds CPU time will be dependent on the method and accuracy of measurement.  It may be noticeably different or imperceptibly different.  It may also be not meaningfully different, as in an algorithm that does nothing but loop the CPU.  

To your point 3, the motherboard's ability to resolve time has nothing to do with cycle speed.  Modern processors work within nanosecond tolerances.  You can look it up.

To your point 6, you might want to go back to my post: "I/O service time" (which is not the same as the data transfer rate) is measured in milliseconds, at least by those who understand measurement of these things.  The point is that data access from even the fastest disk is orders of magnitude slower than data access from processor cache or electronic memory.
Jose Parrot (Graphics Expert) commented:
Broadly, I agree with the three points you commented on.
What I have noticed is that all the author seeks is an understanding of these two measurement methods and which one is adequate for checking the performance of an algorithm.
About point 1: yes, the times can be so close that they seem equal. But in absolute terms, the time spent by the operating system (including the clock() system call itself) isn't part of the algorithm, so it should be discounted from wall time.
About point 3: we must consider that different computers and operating systems vary widely in how they keep track of CPU time. In any case, the clock() function isn't a time counter but a tick counter: it reports how many clock ticks occurred between calls to it. Example: a 2 GHz Pentium has 2*10^9 cycles per second, a period of 0.5 nanoseconds. But that frequency is nominal, so two cycles aren't precisely 1 nanosecond. That is why POSIX fixes the precision at 1 microsecond, or 1000 nanoseconds.
About point 6: if the I/O is part of the algorithm, it must be counted, whether it goes to disk or to a pendrive. If the I/O service time is spent by the O.S., it should not be counted in the algorithm's time.
What I think the author is looking for is:
"To measure the algorithm performance you should use CPU time by calling clock()".
When we look at time complexity, as in big O notation, the absolute time isn't relevant; what matters is how a given algorithm's time grows: O(n)? O(log n)? O(2^n)?
For that purpose, the number of clock ticks is good enough.
Ray Paseur commented:
@compfixer101: I think Jose and I answered the original question.  Any reason why we should not have a split of the points?  No big deal, just asking. Thanks, ~Ray