Execution time of code?

Posted on 2006-07-08
Last Modified: 2012-05-05

I am studying computer architecture by myself (distance learning). I am reading a book and found a question at the end about finding the execution time of code. The book does not explain this and there is no mention of it at all. With the following info, can you obtain the execution time of a program:

-The number of cycles for each instruction.
-The clock frequency (GHz)

So the question is: is there a formula, and how do I calculate the execution time?
I hope you can help me.
Thanks a lot.

Question by:jean11
LVL 32

Assisted Solution

jhance earned 40 total points
ID: 17066516
In the old days (i.e. until the last few years) it was rather easy to determine how many clock cycles a program or a section of code would take.  CPU manufacturers published instruction sets and clock cycles/instruction and so all you had to do was add up the total and you had your answer.

Today, however, it's MUCH more complicated.  Modern processors use many techniques such as pipelining, speculative execution, branch prediction, multiple execution units, etc. as well as internal caches that make the prediction of execution time on an instruction-by-instruction basis very difficult if not impossible.  There is no longer a fixed relationship between an instruction and the number of clock cycles it takes.

What is done now is to look at execution at a macro level by using profiling tools.  These tools set hardware breakpoints at particular places in the code and then use timers to measure the execution time.  You can do a crude job of this yourself by using time functions in your code to keep track of execution times.
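A crude version of that do-it-yourself timing looks like this (a minimal Python sketch; the workload function is a hypothetical stand-in for whatever code you want to measure):

```python
import time

def work():
    # Hypothetical workload -- replace with the code you want to measure
    total = 0
    for i in range(1_000_000):
        total += i
    return total

start = time.perf_counter()   # high-resolution timer before the code runs
work()
elapsed = time.perf_counter() - start   # wall-clock seconds the code took
print(f"elapsed: {elapsed:.6f} s")
```

As the answer notes, this measures at a macro level only: run it many times and average, because any single measurement includes cache effects, scheduling, and other noise.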

Author Comment

ID: 17066886
OK, can you explain more about calculating this in the old days?
What is the formula please?

add up the total of what?

LVL 70

Assisted Solution

garycase earned 40 total points
ID: 17066986
The concept is simple: a program executes a specific sequence of instructions. If you know what all of these instructions are, and how many clock cycles each instruction takes, you simply add up how many cycles it takes to fully execute the program -- and that tells you exactly how long it takes. The "formula" is simply Execution Time = (time for instruction #1) + (time for instruction #2) + (time for instruction #3) + ... Note that real-life programs have thousands (or even millions) of instructions!
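That summation can be sketched in a few lines (Python, with hypothetical per-instruction times: 1 ns each, except one 3 ns memory access):

```python
# Each entry is the execution time, in seconds, of one executed instruction.
# These numbers are made up for illustration; real values come from the
# CPU manufacturer's cycle tables and the clock period.
instruction_times = [1e-9, 1e-9, 3e-9, 1e-9]

# Execution Time = time(#1) + time(#2) + time(#3) + ...
execution_time = sum(instruction_times)
print(execution_time)  # 6e-09 seconds
```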

Simple concept -- not so simple to actually do, however. When programs were MUCH smaller it wasn't so bad; and in fact even as programs grew a lot larger, there were tools that allowed the calculation to be automated (many good symbolic debuggers would execute a sequence of code and keep track of the actual number of execution cycles the instructions required).

However ... as jhance noted, modern PCs now have much more complicated execution paths, which can allow more than one instruction to be executing at the same time; and modern memory architectures do not require a fixed number of cycles to access an addressed memory element (is it in the cache? ... if not, is it sequential with the last reference? ... if not, is it in the same bank as the last reference? ... etc. => each of these takes a different amount of time). The result is a MUCH more complex process to definitively compute the boundaries of possible execution time. It is still possible (and with the right debugging tools not too hard) to compute the "worst case" execution time -- all sequential execution, no cached memory references -- but there is a fairly wide range of possibilities. So, as jhance noted, the best approach is to use application profiling, where real-time measurements are made of when a process starts and ends -- usually many times, with an average execution time computed for the process.
LVL 11

Accepted Solution

CarlosMMartins earned 45 total points
ID: 17089179
Pretty much covered already, but I'll just try to show a more "practical" side.

On simpler CPUs, you have a table showing how many cycles each instruction will take.
For example:

ADD Rn,Rm    (add two registers)            1 cycle
MOV Rn,Rm    (move register to register)    1 cycle
MOV Rn,@addr (move register to memory)      3 cycles

You can then go through your routines and check how long they will take to execute.
You also need to know the frequency of the instruction cycle, which is related to the CPU clock but may differ from it. Some CPUs have instruction cycles at 1/2 or 1/4 of the input clock, others are 1:1, and still others may even run faster, at 2x or 4x the input frequency.
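Putting the cycle table and the clock together (a Python sketch; the instruction names, cycle counts, and the assumed 1 GHz instruction-cycle frequency are all hypothetical):

```python
# Cycle counts per instruction, as in the table above (hypothetical CPU).
cycles_per_instruction = {
    "ADD Rn,Rm": 1,     # add two registers
    "MOV Rn,Rm": 1,     # move register to register
    "MOV Rn,@addr": 3,  # move register to memory
}

# The instruction sequence of the routine being analyzed.
program = ["MOV Rn,Rm", "ADD Rn,Rm", "MOV Rn,@addr"]

total_cycles = sum(cycles_per_instruction[op] for op in program)

# Assumed instruction-cycle frequency: 1 GHz. Remember this may be
# 1/2, 1/4, 1x, 2x, or 4x the input clock depending on the CPU.
instruction_hz = 1e9
execution_time = total_cycles / instruction_hz
print(total_cycles, execution_time)  # 5 cycles -> 5e-09 seconds
```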

On RISC architectures, most instructions take the same number of cycles (1 or 2).
On CISC architectures, execution time tends to vary a lot: some instructions take 2 cycles, while others take dozens of cycles or more.

Some years ago I had to write a routine that had to execute in the same amount of time no matter the execution flow, so I had to count those cycles and insert NOPs in some IF branches so it would always stay in sync. :)
