# Execution time of code?

Hi,
I am studying computer architecture by myself (distance learning). I am reading a book and found a question at the end about finding the execution time of code. The book does not explain this and there is no mention of it at all. With the following information, can you obtain the execution time of a program:

- The number of cycles for each instruction.
- The clock frequency (in GHz).

So what I need is a formula, if there is one, or a way to calculate the execution time.
I hope you can help me.
Thanks a lot.

Jean

Commented:
Pretty much covered already, but I'll just try to show a more "practical" side.

On simpler CPUs, you have a table showing how many cycles each instruction will take. For example:

MOV Rn,Rm    (move register to register)  1 cycle
MOV Rn,@addr (move register to memory)    3 cycles

You can then go through your routines and add up how long they will take to execute.
You also need to know the frequency of the instruction cycle, which is related to the CPU clock but might differ from it. Some CPUs have instruction cycles at 1/2 or 1/4 of the input clock, others are 1:1, and others still run at 2x or 4x the input frequency.
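The idea above can be sketched in a few lines of Python. The cycle counts and the 4 MHz instruction clock here are made-up values for illustration, not taken from any real CPU's datasheet:

```python
# Hypothetical cycle counts for a simple CPU (illustrative values only).
CYCLES = {
    "MOV Rn,Rm": 1,    # register to register
    "MOV Rn,@addr": 3, # register to memory
    "NOP": 1,
}

def routine_cycles(instructions):
    """Total instruction cycles for a straight-line (no-branch) routine."""
    return sum(CYCLES[op] for op in instructions)

routine = ["MOV Rn,Rm", "MOV Rn,@addr", "NOP"]
total = routine_cycles(routine)  # 1 + 3 + 1 = 5 cycles

# If the instruction clock runs at, say, 4 MHz, each cycle takes 1/4e6 s:
time_seconds = total / 4_000_000
print(total, time_seconds)  # 5 cycles -> 1.25e-06 s (1.25 microseconds)
```

Note this only works for straight-line code; with loops and branches you have to multiply each instruction's cycles by how many times it actually executes.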

On RISC architectures, most instructions take the same number of cycles (1 or 2).
On CISC architectures, execution time tends to vary a lot: some instructions take 2 cycles, while others take dozens of cycles or more.

Some years ago I wrote a routine that had to execute in the same amount of time regardless of the execution flow. I had to count those cycles and insert NOPs in some IF branches so it would always stay in sync. :)

Commented:
In the old days (i.e. until the last few years) it was rather easy to determine how many clock cycles a program or a section of code would take.  CPU manufacturers published instruction sets and clock cycles/instruction and so all you had to do was add up the total and you had your answer.

Today, however, it's MUCH more complicated.  Modern processors use many techniques such as pipelining, speculative execution, branch prediction, multiple execution units, etc. as well as internal caches that make the prediction of execution time on an instruction-by-instruction basis very difficult if not impossible.  There is no longer a fixed relationship between an instruction and the number of clock cycles it takes.

What is done now is to look at execution at a macro level by using profiling tools.  These tools set hardware breakpoints at particular places in the code and then use timers to measure the execution time.  You can do a crude job of this yourself by using time functions in your code to keep track of execution times.
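The crude do-it-yourself timing mentioned above can be done with a few lines of Python. `busy_work` is just a stand-in for whatever routine you want to measure:

```python
import time

def busy_work(n):
    """Placeholder workload; substitute the routine you actually want to time."""
    total = 0
    for i in range(n):
        total += i * i
    return total

# Wrap the code of interest with a high-resolution timer.
start = time.perf_counter()
busy_work(100_000)
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed:.6f} s")
```

Because of caches, scheduling, and everything else the answer above describes, a single measurement is noisy; run it many times and average (or use a profiler, which does exactly this for you).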

Author Commented:
OK, can you explain more about calculating this in the old days?

Add up the total of what?

Thanks.

Commented:
The concept is simple: a program executes a specific sequence of instructions. If you know what all of those instructions are, and how many clock cycles each of them takes, you simply add up how many cycles it takes to fully execute the program, and that tells you exactly how long it takes. The "formula" is simply Execution Time = (time for instruction #1) + (time for instruction #2) + (time for instruction #3) + ..., where the time for each instruction is its cycle count divided by the clock frequency. Note that real-life programs have thousands (or even millions) of instructions!
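That formula is short enough to write down directly. The cycle counts and the 2 GHz clock below are invented numbers, just to show the arithmetic:

```python
def execution_time(cycle_counts, clock_hz):
    """Sum per-instruction cycles and divide by the clock frequency (Hz)."""
    return sum(cycle_counts) / clock_hz

# Example: five instructions taking 1, 3, 1, 2, 1 cycles on a 2 GHz clock.
# Total = 8 cycles; 8 / 2e9 Hz = 4e-9 s, i.e. 4 nanoseconds.
t = execution_time([1, 3, 1, 2, 1], 2e9)
print(t)
```

This is the whole "old days" method; everything after this point in the thread explains why the per-instruction cycle counts are no longer fixed on modern CPUs.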

Simple concept -- not so simple to actually do, however. When programs were MUCH smaller it wasn't so bad; and in fact even as programs grew a lot larger, there were tools that allowed the calculation to be automated (many good symbolic debuggers would execute a sequence of code and keep track of the actual number of execution cycles the instructions required).

However, as jhance noted, modern PCs now have much more complicated execution paths, which allow more than one instruction to be executing at the same time; and modern memory architectures do not require a fixed number of cycles to access an addressed memory element (is it in the cache? If not, is it the next sequential reference after the last one? If not, is it in the same bank as the last reference? etc. -- each of these takes a different amount of time). The result is a MUCH more complex process to definitively compute the boundaries of possible execution time. It is still possible (and with the right debugging tools not too hard) to compute the "worst case" execution time -- all sequential execution, no cached memory references -- but there is a fairly wide range of possibilities. So, as jhance noted, the best approach is to use application profiling, where real-time measurements are made of when a process starts and ends -- usually many times, with an average execution time computed for the process.