Solved

Running Time Help (Worst and Average)

Posted on 2004-09-22
304 Views
Last Modified: 2010-08-05
I'm trying to find the worst and average running time for each method. Written in Java. Any help? Thanks!
1)

    public boolean remove (Object o) {
     
      if (head.element.equals(o))
        return false;
     
      Entry current = head.next;
      Entry previous = head;

      for (; current != null && !current.element.equals(o);
           current = current.next, previous = previous.next){}

      if (current == null)
        return false;
      else {
        previous.next = current.next;
        current = null;
        return true;
      }
    }

2) (this calls the remove function above)
    public boolean removeAll(Collection c) {
      Iterator itr = c.iterator();
      while (itr.hasNext())
        remove (itr.next());
      return true;
    }//method removeall
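
(Entry isn't shown above - it's basically just a node with an element and a next pointer, roughly like this simplified version:)

    class Entry {
      Object element;   // the stored value
      Entry next;       // the next node, or null at the end of the list

      Entry (Object o, Entry n) {
        element = o;
        next = n;
      }
    }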


Thanks!
Question by:Tobster85
11 Comments
 
LVL 45

Expert Comment

by:sunnycoder
ID: 12130661
What platform? All platforms have some good profiling tools ... Code Test is a portable tool available for most platforms ... its GUI is a bit clumsy, but it is fairly good at its job. Running your program through any of these tools should give you all the information that you need.
 
LVL 51

Accepted Solution

by: Julian Hansen (earned 500 total points)
ID: 12131173
I am not sure what you are asking here but it sounds like you want to do some function profiling - so I am going to answer based on this.

There are tools to do this for you but it is quite a simple process to set up a fairly good profiling process yourself.

What I do to profile functions or applications is as follows:


i) Record the system time / clock ticks at the start of execution (on Win32 platforms the GetTickCount API call will give you what you want). Set startTime, and set prevTime = startTime.
ii) Start a loop that calls the function, or contains the code, that you want to profile.
iii) After each iteration get the current time and subtract from it the time recorded for the previous iteration: execTime = currentTime - prevTime. Then set prevTime = currentTime.
iv) Compare execTime to the saved max and min values, adjusting them accordingly.
v) When the loop completes, save the endTime.

totalTime = endTime - startTime
avgTime = totalTime / number_of_loop_iterations
minTime = lowest execTime recorded
maxTime = highest execTime recorded
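
In Java (the asker's language) a rough sketch of that loop might look like the following. This is just an illustration of the approach, not drop-in code: System.nanoTime() stands in for GetTickCount, java.util.LinkedList and contains() stand in for the asker's own list class and remove() call, and the list size and iteration count are arbitrary.

    import java.util.LinkedList;

    public class TimingSketch {
      public static void main(String[] args) {
        // Stand-in data structure and workload; substitute your own list and method call.
        LinkedList list = new LinkedList();
        for (int i = 0; i < 10000; i++)
          list.add(new Integer(i));

        int iterations = 1000;
        long minTime = Long.MAX_VALUE, maxTime = Long.MIN_VALUE;
        long startTime = System.nanoTime();
        long prevTime = startTime;

        for (int i = 0; i < iterations; i++) {
          list.contains(new Integer(9999));        // the code being profiled
          long currentTime = System.nanoTime();
          long execTime = currentTime - prevTime;  // time for this iteration
          prevTime = currentTime;
          if (execTime < minTime) minTime = execTime;
          if (execTime > maxTime) maxTime = execTime;
        }

        long endTime = System.nanoTime();
        long totalTime = endTime - startTime;

        System.out.println("avg (ns): " + (totalTime / iterations));
        System.out.println("min (ns): " + minTime);
        System.out.println("max (ns): " + maxTime);
      }
    }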

By changing the number of loop iterations you can improve the statistical reliability of your results.

It works for me.
 
LVL 45

Expert Comment

by:sunnycoder
ID: 12131413
Hi julianH,

I would not recommend that approach, since the results are largely inaccurate. It does not take into account the time the process spent sitting in the run queue or waiting on a resource, nor the time the system spent processing on its behalf or on some other process's behalf.

Getting accurate profiling information is a little more work than it sounds :). There are some good profiling tools like gprof (though it does not have very high resolution) or Code Test. They are worth a try.

Just my 2 cents

sunnycoder
 
LVL 51

Expert Comment

by:Julian Hansen
ID: 12131506
Sunnycoder,

What you say is valid for certain applications. However, not all applications behave in the way you describe, and many applications can be profiled in a very simple way with a high degree of accuracy. I answered in the context of the question posed: based on the code submitted, the function in question is performing some very simple list operations. Now, while this code may be part of another application that operates in the manner you describe, it is not necessary to test the function in that environment. If what you are after is simply approximate times for executing the code, you can run the profiling method I described in a test environment where there are no other processes (or only a minimal set) interfering with the results. One might argue that this is a sterile environment that has no bearing on the production implementation of the code. However, once you start bringing in the factors present in a production environment, your profiling goes out the window anyway because of the variable and often random nature of those external influences.

The value of profiling your code in a test environment using the method described above is that it (a) gives you a best-case scenario, and (b) when testing various implementations of a solution, lets you compare the performance data accurately without having to worry about the results being tainted by outside factors beyond your control, i.e. you can easily establish which implementation performs best.

So, while I accept that there are many ways to profile an app, in the context of the question posed I believe that the method described above will give you the most useful information for determining which implementation of a solution performs best.

Just my 2 cents

julianH
 
LVL 45

Expert Comment

by:sunnycoder
ID: 12131568
Hi JulianH,

>However, not all applications behave in the way you describe
I was talking about the behavioral interference caused by the system, not by the application. Unless you are running a very high priority application on a hard RTOS, there will be time slices and rescheduling, and since the asker sought worst-case timings too, you can imagine the effect that can have on the results.

>The value of profiling your code in a test environment using the method described above is
I was not suggesting that the test environment be modified either; I was talking about the method of getting the statistics. Keep the same test environment, use a better method. I believe you will be inclined to agree that in the same test environment, using a real profiling tool is likely to give more accurate, and hence more usable, results than measuring times yourself on the assumption that the function will never be interrupted and everything will go fine all the way. Moreover, using a tool probably requires less effort than writing additional code.

>desribed above will give you the most useful information for determining which implementation
>of a solution performs the best.
I agree that it is a possible solution, but what little practical experience I have tells me that we can do better.

Another 2 cents added I guess :)

sunnycoder
 
LVL 51

Expert Comment

by:Julian Hansen
ID: 12131734
Hi Sunnycoder,

I think the point I am trying to make here is as follows. The factors introduced by the system are not predictable to any great degree and in most cases are fairly random. Sure, applications are timesliced and scheduled and interrupted - but these interruptions can contribute anything from a millisecond to several minutes or even longer, because the state of the system is (or can be) variable. When you want to determine the performance of an algorithm (let's separate the algorithm from the code), you have to do it in such a way that the focus is only on the algorithm and not on other factors that have nothing to do with the efficiency of the algorithm itself.

For example: I want to store data in some sort of data structure, and for index-based lookups I want to know which is faster, an array or a singly linked list.

Algorithm 1 (array)

if ( indx >= 0 and indx < uBoundArray )
  return ArrayValue[indx]

Algorithm 2 (linked list)

p = listHead
indx = 0
while p is not NULL AND indx != requiredIndx
    p = p.Next
    indx = indx + 1
loop
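
As an aside, a rough Java rendering of these two algorithms (my own sketch - java.util.ArrayList and java.util.LinkedList stand in for a raw array and a hand-rolled node list) could look like this:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.LinkedList;

    public class IndexLookupSketch {

      // Algorithm 1: direct indexed access into an array-backed structure - O(1).
      static Object arrayLookup(ArrayList a, int indx) {
        if (indx >= 0 && indx < a.size())
          return a.get(indx);
        return null;
      }

      // Algorithm 2: walk the linked list element by element until the index is reached - O(n).
      static Object listLookup(LinkedList l, int indx) {
        int i = 0;
        for (Iterator it = l.iterator(); it.hasNext(); i++) {
          Object value = it.next();
          if (i == indx)
            return value;
        }
        return null;
      }
    }

Wrap either call in the timing loop from my earlier comment to compare them.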

To find out which of these algorithms performs better, I want to do it in an environment where my app is not going to be timesliced out in favour of another CPU-intensive task and where system/external factors do not affect the execution time of my code. I am interested only in which algorithm performs better. So, to do this, I take my code, add time checks, and run it on a test system where there is minimal external interference with my code.

Now, I accept that not all code can be profiled in this way, but in the context of the question posed it can. If all you are interested in is some average run times for the purpose of deciding whether the code is efficient enough, it is not worth the effort to set up and install a profiling tool when you can get your answer by adding a couple of lines of code to your application.

BTW: I am really enjoying this discussion ;)

JulianH
 
LVL 45

Expert Comment

by:sunnycoder
ID: 12131807
Hi JulianH,

>When you want to determine the performance of an algorthim (lets separate the algorithm from the code) you have to do
>it in such a way that the focus is only on the algorithm and not on other factors that have nothing to do with the efficiency
>of the algorithm itself.
Exactly the point I was trying to make. Since profiling tools have the ability to measure each individual component of the time - e.g. time spent by the system in the process's context, time spent in the run queue, etc. - the accuracy of the results increases dramatically.

Since the asker asked for this kind of timing, including the worst case, I thought it appropriate to suggest getting the numbers as accurate as possible.

> I want to do it in an environment where my app is not going to be timesliced out in favour of another CPU intensive task
Unfortunately, it still will be. Or at the very least, we cannot be sure. Tools help to nearly eliminate this uncertainty.

>if all you are interested in is some average run times
I would leave that to asker :)

>BTW: I am really enjoying this discussion ;)
I figured that out when you forgot to add your 2 cents. We would have had 10 cents by now ;-)

just another of my 2 cents

sunnycoder
 
LVL 51

Expert Comment

by:Julian Hansen
ID: 12131972
Hi Sunnycoder,

I do not disagree with a single thing you have said - every point you have made is valid. I think this debate is reducing to the merits of internal vs. external measuring techniques. This is the way I see it:

Internal Techniques

The method described above was a simplification - the process can be extended to provide fairly sophisticated output at a very detailed level and customised to monitor exactly those elements of the application you want to measure. This process tells you little about what other processes are doing or how many of them are doing it - all it tells you is what your application is doing. Other processes may interfere occasionally, but those effects can be reduced and nullified statistically by running the test many times and working with averaged results. Because the interference is essentially random, while the execution of the application (in the test environment) is not, the effects of random influences can be filtered out.

External Techniques

These can do much of what is described above, but they also give detailed information about what other processes are doing, i.e. they can look at the system as a whole. They are good for measuring processes that are closed (i.e. to which you cannot add monitoring code). When it comes to monitoring your own code they do have certain limitations - for instance, if you want to know how much time is spent in a particular routine, or how long a particular internal routine takes to execute, they will be limited. There may be ways of measuring these elements, but in doing so the performance tool introduces some overhead of its own, not to mention the additional complexity of configuring it.

The way I see it there is merit in both approaches depending on your requirements - you use whichever tool suits the requirement you have.

Just a quick note on worst-case scenarios - how does one define them? The worst case could be "never found", "dead", "empty", "last", etc. By measuring the performance of a system over time you can collect information that tells you the worst case you have had, but this will not necessarily tell you what your worst case will be in the future - a server crashing could be the worst-case scenario; it has nothing to do with how your app is written, but it will affect the app's performance. The worst-case scenario, in the context of measuring the performance of an application, should be restricted to what you can control or what you know about the application - outside of that, no tool is going to outperform a good old guess, because the external effects are, or can be, linked to random, unpredictable events.

Here's 4c to make up for my shortfall last time ;)

JulianH
 
LVL 51

Expert Comment

by:Julian Hansen
ID: 12132055
Just another note on the worst-case scenario for the above code.

The remove code loops through the list.
The worst case is when the item searched for is at the end of the list (or not in the list at all, which forces a full traversal).
The average case is when it is somewhere in the middle.
The best case is when it is at the front.

The actual time will be roughly a linear function of the number of elements in the list: O(n).

The removeAll function loops through the collection passed to it and, for each item, searches the list again via remove (I assume the linear search is also how the previous element is found so that the matched element can be unlinked properly). That inner search is also O(n), so the combined process is an O(n^2) algorithm, and again the worst case is a function of the number of elements involved.
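
To put rough numbers on that (mine, purely illustrative): with a list of 1,000 elements and a collection of 1,000 elements, removeAll does on the order of 1,000 x 1,000 = 1,000,000 element comparisons in the worst case; doubling both to 2,000 pushes that to roughly 4,000,000, i.e. the work quadruples when the input size doubles.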

