# Algorithm issue

What does O(n2) mean when describing an algorithm? And how do I test an algorithm to show its running time?

Senior Software Developer Commented:
What you're looking at there is called "Big O Notation" and it's used to refer to algorithm performance. There are a number of variants but the basic ones are as follows:

O(1) is an algorithm that will always execute in the same amount of time regardless of the input.

O(n) is an algorithm that will take longer as larger input sets are provided. The running time grows linearly, in direct proportion to the size of the input data.

O(n^2) (written "O(n2)" in your question) is an algorithm whose running time grows in proportion to the square of the size of the input. This is your case. These algorithms work well for small sets of data but take dramatically longer as the input data grows. You typically run into O(n^2) algorithms when using nested loops.

O(2^n) is an algorithm whose running time doubles with each additional element in the input data. These guys grow really quickly and can often be rewritten more efficiently.

There are many more but this should give you an idea (as well as answer your immediate question).
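To make those classes concrete, here is a small C# sketch (my illustration, not from the question; the method names are invented for the example). Each method's loop structure is what determines its big O class:

```csharp
using System;

public static class ComplexityDemo
{
    // O(1): constant work, no matter how large the array is.
    public static int First(int[] data) => data[0];

    // O(n): a single pass over the input; work grows linearly with its size.
    public static int Sum(int[] data)
    {
        int total = 0;
        foreach (int x in data) total += x;
        return total;
    }

    // O(n^2): nested loops over the same input; work grows with the square.
    public static int CountDuplicatePairs(int[] data)
    {
        int pairs = 0;
        for (int i = 0; i < data.Length; i++)
            for (int j = i + 1; j < data.Length; j++)
                if (data[i] == data[j]) pairs++;
        return pairs;
    }

    public static void Main()
    {
        int[] data = { 3, 1, 3, 2, 1 };
        Console.WriteLine(First(data));               // O(1)
        Console.WriteLine(Sum(data));                 // O(n)
        Console.WriteLine(CountDuplicatePairs(data)); // O(n^2)
    }
}
```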


Senior Software Developer Commented:
To answer the 2nd part of your question, you can easily add timing information to any C# routine like this:

```csharp
DateTime startTime = DateTime.Now;

// ... the code you want to measure goes here ...

TimeSpan totalTime = DateTime.Now - startTime;
```

totalTime then contains the total elapsed time of the code that ran between the two lines.
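As a side note (my addition, not part of the original answer): `System.Diagnostics.Stopwatch` is generally a more accurate way to time code than subtracting `DateTime.Now` values, since it uses a high-resolution counter. A minimal sketch, with a made-up workload standing in for the code under test:

```csharp
using System;
using System.Diagnostics;

public static class TimingDemo
{
    // Placeholder workload: sum the integers 0 .. 999,999.
    public static long SumToMillion()
    {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) total += i;
        return total;
    }

    public static void Main()
    {
        var sw = Stopwatch.StartNew();
        long total = SumToMillion();   // the code being measured
        sw.Stop();
        Console.WriteLine($"Elapsed: {sw.Elapsed.TotalMilliseconds} ms (result {total})");
    }
}
```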
Author Commented:
How would I measure the running time of the different sorting structures if, say, I apply O(n2) or O(1) in these activities:
Quick Sort
Heap Sort
Merge Sort
Insertion Sort
Bubble Sort
Selection Sort
Binary Search
Linear Search
Inserting at the beginning of a linked list
Inserting at the beginning of an array
Finding the largest value in an unsorted list or array
Finding the median value in a sorted array
Senior Software Developer Commented:
You'll need to create input data sets of varying sizes and then use the timer code above to figure out how long each run takes. You'll probably need a sufficiently large input data set for the timings to mean anything. For example, with an O(n^2) algorithm, an input of size 4 will take 4 times as long to process as an input of size 2. The issue is that even 4 times longer won't really be measurable at such a small sample size. You need input data sets on the order of hundreds, possibly thousands, of items to get meaningful measurements.

Once you have your times recorded, it's just a matter of tabulating the results and calculating the time increase as the data set grows.
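That measurement loop might look like the following sketch, using a plain bubble sort as the O(n^2) algorithm under test (the sizes, the seed, and the choice of bubble sort are just examples):

```csharp
using System;
using System.Diagnostics;
using System.Linq;

public static class GrowthDemo
{
    // The O(n^2) algorithm under test: a plain bubble sort.
    public static void BubbleSort(int[] a)
    {
        for (int i = 0; i < a.Length - 1; i++)
            for (int j = 0; j < a.Length - 1 - i; j++)
                if (a[j] > a[j + 1])
                    (a[j], a[j + 1]) = (a[j + 1], a[j]);
    }

    public static void Main()
    {
        var rng = new Random(42); // fixed seed so runs are repeatable
        foreach (int n in new[] { 1000, 2000, 4000, 8000 })
        {
            int[] data = Enumerable.Range(0, n).Select(_ => rng.Next()).ToArray();
            var sw = Stopwatch.StartNew();
            BubbleSort(data);
            sw.Stop();
            // For an O(n^2) algorithm, each doubling of n should
            // roughly quadruple the elapsed time.
            Console.WriteLine($"n = {n,5}: {sw.ElapsedMilliseconds} ms");
        }
    }
}
```

Tabulating the printed times against n is exactly the "calculate the time increase as the data set grows" step described above.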
Author Commented:
When you say an input size of 4 or 2, does that mean 4 items and 2 items? How about running it against a fixed number of items, like 1000?
If I choose to run it on only one criterion, O(n2) (n raised to the power of 2), and record all the run times, what would be the formula?
Senior Software Developer Commented:
If you only run your algorithms with a single data set, you'll learn which of your algorithms is fastest for that particular data set, but you won't know the growth factor of each, which is what is required to determine its big O notation. You cannot calculate big O notation from a single data set.
IT Business Systems Analyst / Software Developer Commented:
And just to clarify something else, "Big-O" notation is not intended to compare the different run times of different algorithms for a certain size input data. It is used to describe how the run time changes for different input sizes using the same algorithm.

For example, say you have two algorithms that both do the same thing but in different ways (it could be for sorting or something else). One algorithm is described as O(n), which generally sounds like a good one, and the other is O(n^3), which is generally considered bad. Now assume that you have an input size of 100 items; which algorithm is faster?

You can't answer that question just from what is given. The first could take 5 minutes, while the second one takes 1 minute. That is entirely possible, meaning that you can't compare the run time of the two algorithms based just on their big-O complexity. What you can tell, though, is that if you scaled the input data up to 1000 items, i.e. a ten-fold increase, then you could expect (it isn't a guarantee, though) the first one to take about 10 times longer, i.e. 50 minutes, whereas the second one would take 10^3, or 1000, times longer, i.e. 1000 minutes. And this is why the second is considered "bad".

It is tempting, but don't fall into the trap of thinking you can compare run times of different algorithms just based on their big-O complexity.
Senior Software Developer Commented:
mccarl makes a very valid point here. However, there are some cases where big O notation can help you decide on the best algorithm for a given task. In your example above you're comparing several sorting and searching algorithms; the big O notation of each can help you determine which is best for a given set of data. Usually, however, these things only matter for sufficiently large data sets. With a small data set the sort time will be negligible in most cases.

It seems you're asking two questions.
1. What is the correct big O notation for each algorithm?
2. Which is the best algorithm to use given a certain set of data?

Those two questions are quite independent of one another.
Commented:
Just to clarify some points.
O(2n), read literally, is 2*O(n) = O(n); i.e., as n doubles, so does the number of operations (for some n > N). Growth that doubles with every extra input element, as described above, is O(2^n), which is exponential and a very different thing.
O(n2) is presumably O(n^2) (i.e., n squared); i.e., as n doubles, the number of operations quadruples (for some n > N).

"for some n > N" means this..

You might think that O(n^2) implies that if there are 200K operations for n = 10,000, then doubling n to 20,000 would quadruple the number of operations to 800K. But this is only guaranteed if 10,000 is larger than the algorithm's threshold N. If N is, for example, 1,000,000, then the quadrupling effect is only guaranteed to occur when n doubles from some n > 1,000,000. Below that, the big O notation says nothing about growth. The threshold N varies from algorithm to algorithm.

This means that an O(n^3) algorithm, A1, could possibly perform fewer operations than an O(n^2) algorithm, A2, depending upon their respective values of N. But at some point as n --> infinity, the O(n^3) algorithm is guaranteed to perform more operations than the O(n^2) algorithm.
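A worked illustration of that crossover, with made-up constant factors (A1 performing n^3 operations, A2 performing 1000*n^2; both counts are hypothetical, chosen only to make the crossover land at n = 1000):

```csharp
using System;

public static class CrossoverDemo
{
    // Hypothetical operation counts: A1 is O(n^3) with constant factor 1,
    // A2 is O(n^2) with a large constant factor of 1000.
    public static long OpsA1(long n) => n * n * n;
    public static long OpsA2(long n) => 1000 * n * n;

    public static void Main()
    {
        foreach (long n in new long[] { 10, 100, 1000, 10_000 })
            Console.WriteLine($"n = {n,6}: A1 = {OpsA1(n):N0}, A2 = {OpsA2(n):N0}");
        // Below n = 1000 the O(n^3) algorithm does fewer operations;
        // past that crossover, the O(n^2) algorithm wins and keeps winning.
    }
}
```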

One other point...

A larger number of operations does not necessarily translate into a longer running time, so again, as others have stated for possibly different reasons, comparing big O for different algorithms doesn't always translate into running-time expectations. The computer architecture is a major factor and can result in a worse big O algorithm running faster than a better one for all n. (Well, for all n that fit comfortably into RAM.)
Senior Software Developer Commented:
Where big O notation is most useful is when evaluating different approaches to solving the same problem (that, and answering questions on tests). The example in your original question is a good one. Given a specific set of data that needs to be sorted, which sort algorithm is most efficient? You can calculate the big O value for worst case, best case, and average case for each algorithm, then decide which is the best overall.

Here's a bit more detail on this. Generally, a bubble sort is considered the least efficient while a heap sort is the most efficient. Here's how the Big O notation breaks down for both:

Bubble Sort - Best Case = O(n)
Bubble Sort - Worst Case = O(n^2)
Bubble Sort - Average Case = O(n^2)

Heap Sort - Best Case = O(n log n)
Heap Sort - Worst Case = O(n log n)
Heap Sort - Average Case = O(n log n)

So, while a bubble sort actually has a better best case value than a heap sort, its worst case and average case are far worse. What's more, you can determine from this that a heap sort becomes far more efficient for sufficiently large values of n whereas a bubble sort actually gets worse.

So, for all but the smallest of sets, heap sorts are preferred for their performance characteristics.
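You can see that divergence just by tabulating n^2 against n*log2(n), rough stand-ins for the work each sort does (constant factors ignored; this is my illustration, not part of the answer above):

```csharp
using System;

public static class SortGrowth
{
    // Idealized operation counts, ignoring constant factors.
    public static long Quadratic(long n) => n * n;              // bubble sort
    public static double NLogN(long n) => n * Math.Log(n, 2);   // heap sort

    public static void Main()
    {
        foreach (long n in new long[] { 10, 100, 1000, 100_000 })
            Console.WriteLine(
                $"n = {n,7}: n^2 = {Quadratic(n),15:N0}   n*log2(n) = {NLogN(n),12:N0}");
        // The gap widens without bound, which is why heap sort overtakes
        // bubble sort for all sufficiently large inputs.
    }
}
```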
Author Commented:
Thank you all for answering. To me this is complicated, but you make it look easy!