
I want to know how the complexity of algorithms is calculated, and how the best algorithm is selected depending on the values of those complexities. Let's take sorting or searching algorithms as examples.

Thanks


2. Pointer (memory) overheads to implement the solution

are a few of the factors to consider.

Regards,

Prem.

Thanks

g(n) = O(f(n)) if there is a constant c > 0 such that g(n) <= c*f(n) for all but a finite set of n > 0

Can you explain with some examples the terms: O, log, ...

Thanks

Of course each visit might take SOME operations, but we round them to one because it's a CONSTANT number.

(even if it's one line in C, it might be 4-5 in assembly...)

To calculate the multiplication table, you need to fill up a matrix of size n*n.

Each cell takes a CONSTANT (O(1)) number of operations ==> O(n*n)*O(1) = O(n^2).

Explaining sorting is a bit more difficult and needs a more extended understanding.

Yair

Let me attempt to make it a little bit simpler.

Order O(n) means that the time taken by the algorithm is proportional to n, if you are processing n items through it.

Example: linear search in an unordered array.

Order O(log n) means that the time taken is proportional to log n.

Example: Binary search in a sorted array.

Order O(n^2) means proportional to n-square. Example: Bubble Sort.

And so on.

Which algorithm suits a purpose best depends on the expected number of items to process, n. It sometimes also depends on the order in which the input items arrive. For example, building an unbalanced (rather, not-necessarily-balanced) binary search tree by repeated insertion depends on the sequence of input: if the input is in ascending order, the time taken can be very large.

Hope this puts things in a simpler perspective? If you need more clarification or examples, please ask.

- stochastic

Can you tell me how the term 'O(log n)' arises for binary search?

Please explain with some example.

Thanks

Split the problem in half each iteration.

so if you have 128 items, next iteration you will have 64

then 32, 16, 8, 4, 2, 1

so for given n items how many steps are needed ?

for 1 item - 0 (2^0=1)

for 2 items - 1 (2^1=2)

for 4 items - 2 (2^2=4)

for 8 items - 3 (2^3=8)

for n items - log(n), since 2^(log(n)) = n (log to the base 2)

It keeps doing this halving, till it finds a match, or concludes that there is no match.

Let me take a simplified example. Imagine that you have a sorted array of numbers 1 to 127 in ascending order (so n=127). Say you are searching for the key value 13. The steps of comparison will be as follows:

step 1: Searching 1..127. middle point is 64. 13<64. Take left half.

step 2: searching 1..63. Middle point is 32. 13<32. Take left half.

step 3: searching 1..31. Middle point is 16. 13<16. Take left half.

step 4: searching 1..15. Middle point is 8. 13>8. Take right half.

Step 5: searching 9..15. Middle point is 12. 13>12. Take right half.

Step 6: searching 13..15. Middle point is 14. 13<14. Take left half.

Step 7: searching 13..14. Middle point is 13 (integer division). Found.

So you see, we took 7 steps to do this.

Log of 128 to the base 2 is 7.

Now if you try to do this with n=256, you will see that you will need (up to) 8 steps, as Log of 256 to the base 2 is 8.

So in general, in binary search, the worst case number of comparisons you will need will be log of n to the base 2 where n is the number of items in the array.

I hope this example helps.

- stochastic.

Lastly, I want to ask how we arrive at the O(N log N) complexity for quicksort.

Thanks a lot

Baba..


I assume you already know the algorithm or the logic for quicksort. Here is how the complexity becomes O(N*LogN):

Imagine that M = Log N to the base 2.

On the first pass, there are approximately N comparisons (actually N-1). After this, the input data splits into two sublists, each of size N/2 approximately. For each of these sublists, there are approx. N/2 comparisons, resulting in a total of four sublists each of size N/4, and so on.

After halving the sublists M times, there are N sublists each of size 1. Thus, the total number of comparisons becomes

N + 2*(N/2) + 4*(N/4) + ... + N*(N/N)

or

N + N + ... + N (M terms).

which is N * M, i.e. N * Log N

(because M = Log N to the base 2).

This is how the complexity of quicksort becomes O(N log N). Strictly, this assumes the splits stay roughly balanced; with badly unbalanced splits (e.g. already-sorted input and a naive pivot choice) it degrades to O(N^2). On average, though, it is considerably better than the O(N^2) of Bubblesort for large values of N.

Hope this finally gives you the complete answer you needed. (Whew, that was quite a bit of work for 10 points! In future, please consider giving larger points to attract more experts to yield greater variety of answers :-)

cheers

- stochastic