Solved

Posted on 2006-05-02

This question is not very difficult but it is urgent and I do need a good explanation for it.

Can anyone tell me how to calculate the big-O notation of an algorithm? (I am not providing one because I don't have a specific algorithm in mind.) I'm especially interested in while, for, and other iterated loops. Please be descriptive, because I really need to know this for my final exam and it's very urgent. Can you show me how to calculate it for various cases, and show me an algorithm that gives O(n log n)? Please help.


6 Comments

for ( int i = 0 ; i < n ; i ++ )      // outer loop: runs n times
    for ( int j = 0 ; j < n ; j ++ )  // inner loop: runs n times on each of the outer loop's n iterations

Hence the above does n * n units of work, i.e. it has an order of O(n^2).

http://www.ics.uci.edu/~ep

http://www.experts-exchang

http://www.cprogramming.co

There have been many questions on this topic.

If you search for "big O" on the Experts Exchange site, you will find loads of answers.

Big-O notation is used to measure the time performance of an algorithm, i.e. its efficiency. Using big-O notation you can estimate approximately how the running time of your program grows with the input size.

Calculating big-O notation depends on the input you have and how you process it. For algorithms that contain loops or recursion, big-O notation is used to get an idea of the maximum number of times that loop or recursion will be executed.

Let's take the example of searching for an int in an array.

Say you have n elements.

Sequential search:

for (int i = 0; i < n; i++) {
    if (array[i] == no2search) {
        // return the number and its position in the array
    }
}

In the above code, the number of times the for loop executes depends on how many elements you have in your array, i.e. n. If the number to be searched is not in the array, or is at the last position, the loop executes the maximum of n times; hence the time complexity of the above code is O(n).

Now consider the case where all the elements in the array are sorted in ascending order. Then the loop becomes:

low = 0;
high = n - 1;

while (1) {
    if (high < low) {
        // display "not found", then break
    }
    index = (low + high) / 2;
    if (array[index] == no2search) {
        // display "found", then break
    }
    if (array[index] > no2search) {
        high = index - 1;
    }
    if (array[index] < no2search) {
        low = index + 1;
    }
}

In the above code you start with the whole array: take the middle element of the array and compare it with the number to be searched. If they are equal, display "found" and break. If not, you divide the array into two halves, the elements left of the middle element and the elements right of it.

If no2search is less than the current element array[index], that means no2search can only be in the left subarray (because all elements in the right subarray are greater than it), so we keep the lower bound as it is and change the upper bound of the array to index - 1.

If no2search > array[index], then we discard the left subarray and search in the right subarray.

This procedure is repeated until we find the number or we are left with a subarray of length 0 (i.e. no elements in the subarray).

So in each iteration we divide the array into two equal halves, discard one, and continue with the other.

That means the maximum number of iterations required is log n (log to the base 2).

Hence the complexity of the algorithm is O(log n).

Similarly, work through the algorithm for quick sort, which is O(n log n).

Quick Sort

--------------------------

Algorithm Analysis

The quick sort is an in-place, divide-and-conquer, massively recursive sort. As a normal person would say, it's essentially a faster in-place version of the merge sort. The quick sort algorithm is simple in theory, but very difficult to put into code (computer scientists tied themselves into knots for years trying to write a practical implementation of the algorithm, and it still has that effect on university students).

The recursive algorithm consists of four steps (which closely resemble the merge sort):

If there is one element or fewer in the array to be sorted, return immediately.

Pick an element in the array to serve as a "pivot" point. (Usually the left-most element in the array is used.)

Split the array into two parts - one with elements larger than the pivot and the other with elements smaller than the pivot.

Recursively repeat the algorithm for both halves of the original array.

The efficiency of the algorithm depends heavily on which element is chosen as the pivot point. The worst-case efficiency of the quick sort, O(n^2), occurs when the list is already sorted and the left-most element is chosen. Choosing a pivot point randomly rather than using the left-most element is recommended if the data to be sorted isn't random. As long as the pivot point is chosen randomly, the quick sort has an algorithmic complexity of O(n log n).

Pros: Extremely fast.

Cons: Very complex algorithm, massively recursive.

Empirical Analysis

Quick Sort Efficiency

The quick sort is by far the fastest of the common sorting algorithms. It's possible to write a special-purpose sorting algorithm that can beat the quick sort for some data sets, but for general-case sorting there isn't anything faster.

As soon as students figure this out, their immediate impulse is to use the quick sort for everything - after all, faster is better, right? It's important to resist this urge - the quick sort isn't always the best choice. As mentioned earlier, it's massively recursive (which means that for very large sorts, you can run the system out of stack space pretty easily). It's also a complex algorithm - a little too complex to make it practical for a one-time sort of 25 items, for example.

With that said, in most cases the quick sort is the best choice if speed is important (and it almost always is). Use it for repetitive sorting, sorting of medium to large lists, and as a default choice when you're not really sure which sorting algorithm to use. Ironically, the quick sort has horrible efficiency when operating on lists that are mostly sorted in either forward or reverse order - avoid it in those situations.

Source Code

Below is the basic quick sort algorithm.

void quickSort(int numbers[], int array_size)
{
    q_sort(numbers, 0, array_size - 1);
}

void q_sort(int numbers[], int left, int right)
{
    int pivot, l_hold, r_hold;

    l_hold = left;
    r_hold = right;
    pivot = numbers[left];
    while (left < right)
    {
        while ((numbers[right] >= pivot) && (left < right))
            right--;
        if (left != right)
        {
            numbers[left] = numbers[right];
            left++;
        }
        while ((numbers[left] <= pivot) && (left < right))
            left++;
        if (left != right)
        {
            numbers[right] = numbers[left];
            right--;
        }
    }
    numbers[left] = pivot;
    pivot = left;
    left = l_hold;
    right = r_hold;
    if (left < pivot)
        q_sort(numbers, left, pivot - 1);
    if (right > pivot)
        q_sort(numbers, pivot + 1, right);
}
