
Solved

Complexity of sorting algorithms

Posted on 2003-10-31
Medium Priority
613 Views
I have some questions about the complexity of shell sort and heap sort.
I know the worst case and average case of heap sort are O(n log n).
I'm wondering whether the best case is also O(n log n), and what the complexity of heap sort is on an already sorted file. (I think both are still O(n log n), because building the heap is O(n) and each heap extraction is O(log n), whether or not the file is sorted.)
But I'm not sure...

Why is the best-case complexity of shell sort O(n log n) and not O(n^1.25)?

0
Question by:redsar

LVL 11

Expert Comment

ID: 9663452
What is the best case of a sort? This is typically not a relevant question, because the best case usually occurs only for an already sorted input, and as n grows the probability of that input occurring gets smaller and smaller (arbitrarily close to 0).

For heapsort: if the file is reverse sorted (reversed from the order you want to end up with), then the array already contains an appropriate heap, so building the heap is O(n). The extract operation (extract_min or extract_max, depending on the order of the heap) is still going to promote the last element to the root of the heap and push it back down to where it belongs. Thus it is an O(lg n) operation performed for each extraction, or O(n lg n) for all of them. O(n) + O(n lg n) = O(n lg n).

But wait, is this the very best case? What if the array is monotonic (contains only one value)? Then the heap is already in place as before (O(n) to verify this) _and_ promoting the farthest leaf to the root does not break the heap property, so there is nothing to push down: extract becomes O(1) and is done n times, giving O(n) for all of the extraction operations. O(n) + O(n) = O(n). The best case is linear in the array size. It requires a monotonic array, so it is unlikely to be of any real use, but this is the best case.
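As an illustrative sketch (my own, not part of the original thread): a heapsort instrumented to count element comparisons makes the monotonic best case visible. On an all-equal array the sift-down stops after comparing the children, so both building the heap and every extraction do O(1) work per element.

```python
import random

def heapsort(a):
    """In-place max-heap heapsort; returns the number of element comparisons."""
    comps = 0

    def sift_down(root, end):
        nonlocal comps
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end:          # pick the larger of the two children
                comps += 1
                if a[child + 1] > a[child]:
                    child += 1
            comps += 1
            if a[child] > a[root]:        # heap property broken: push down
                a[root], a[child] = a[child], a[root]
                root = child
            else:                         # heap property holds: stop early
                return

    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # build the max-heap: O(n)
        sift_down(i, n - 1)
    for end in range(n - 1, 0, -1):       # repeatedly extract the max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return comps

n = 1024
equal = [7] * n                           # monotonic input: sift_down exits at once
rand = [random.random() for _ in range(n)]
print(heapsort(equal), heapsort(rand))    # equal-array count is far smaller
```

On the monotonic array the count stays near 3n, while the random array needs roughly 2 n lg n comparisons.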

I don't have my algorithm text handy so I can't perform the same analysis for shell sort but if I were to guess where the "best" performance would be I would try a monotonic array.

Hope this helps, -bcl
0

Author Comment

ID: 9665084

Thank you very much.
What about the best-case complexity of shell sort: why is it O(n log n) and not O(n^1.25)?

I know it depends on the gap sequence; does every sequence give O(n log n)?

I'm also wondering about the space complexity of quicksort: is it O(1) (in place), O(log n), or O(n)?
Why do we need the stack?
0

LVL 11

Expert Comment

ID: 9665650
This sounds a lot like homework. It is the policy of EE not to do homework. I can offer some guidance.

So, you know that any sequence requires n lg n time to process with shell sort. That means a monotonic array, a sorted array, a reverse-sorted array, or a completely random array will take that amount of time. If you believe that any sequence takes that amount of time, then you are done showing that the best time is O(n lg n). Why do you believe that any sequence takes that long to sort? Explain it to me using the algorithm and I can help you with any specific problems you have.
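One way to see an n lg n floor is a counting sketch (my own illustration, assuming Shell's original gap sequence n/2, n/4, ..., 1, not Sedgewick's): there are about lg n passes, and each pass must examine nearly every element at least once, so even a sorted input costs about n lg n comparisons.

```python
def shellsort(a):
    """Shellsort with Shell's original gaps n//2, n//4, ..., 1.
    Returns the number of inner-loop comparisons performed."""
    comps = 0
    n = len(a)
    gap = n // 2
    while gap > 0:                        # about lg n passes
        for i in range(gap, n):           # each pass touches ~n elements
            j = i
            while j >= gap:
                comps += 1
                if a[j - gap] > a[j]:     # out of order: swap and keep going
                    a[j - gap], a[j] = a[j], a[j - gap]
                    j -= gap
                else:                     # in order: one comparison, stop
                    break
        gap //= 2
    return comps

n = 1024                                  # 10 passes of ~n comparisons each
print(shellsort(list(range(n))))          # ~n lg n even on sorted input
```

So with these gaps the best case cannot drop below Θ(n log n); bounds like O(n^1.25) or O(n^(7/6)) come from different gap sequences and describe worst- or average-case behavior, not a better best case.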

As to the space complexity of quicksort and the use of a stack: quicksort sorts the array in place (that is, there is no additional array allocation required, as there is in mergesort, for example). That is not the whole answer, though. Bubblesort uses O(1) additional space (we are talking about space BEYOND the original array) because you only need a couple of loop-control variables and a temporary storage location for the swap; in particular, notice that bubblesort doesn't need recursion or an explicit stack, just some variables. Now think about what gets stored on quicksort's stack and why there needs to be a stack and not just a variable or two.
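To make the stack contents concrete, here is a minimal sketch (my own illustration, using Lomuto partitioning, which the thread does not specify): an iterative quicksort whose explicit stack holds only (lo, hi) index pairs, so each pending subfile costs O(1) space and the total extra space is proportional to the deepest the stack ever gets.

```python
def quicksort(a):
    """Iterative quicksort: the stack holds only (lo, hi) index pairs,
    never copies of the array, so each entry is O(1) space."""
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:                      # 0- or 1-element range: nothing to do
            continue
        pivot = a[hi]                     # Lomuto partition around last element
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]         # pivot lands in its final position
        stack.append((lo, i - 1))         # defer both subranges: O(1) each
        stack.append((i + 1, hi))
    return a
```

This is why quicksort is "in place" in the array sense yet still needs more than O(1) extra space: the pending ranges have to live somewhere, whether on an explicit stack like this or on the call stack.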

-bcl
0

Author Comment

ID: 9666242
I guess there was a misunderstanding because of my English.

The question about shell sort is this:
I think the best-case time complexity should be O(n^(7/6)) with Sedgewick's gap sequence,

so I'm wondering why it would be O(n log n).

I have read some pages of Sedgewick...
The subfiles are stored on the stack; when we need one, we pop it off, partition it, and push the pieces back on the stack. So the average space complexity is O(log n) and the worst case is O(n), when the file is sorted. Is that right?
0

LVL 11

Expert Comment

ID: 9666349
Do you have to push copies of portions of the array on the stack for quicksort? Or can you get by with just the endpoints of a range in the array? If you push the endpoints of a range, you need a constant amount of space each time you push onto the stack. Thus the total amount of space used is proportional to the maximum depth of the stack. What is that maximum depth in the best case? What is it in the worst case? The average case?

Well, the worst case is simple: If we partition off one item at a time then there will be n levels pushed on the stack at its deepest. Thus the storage, worst case, is O(n). More interesting is the average case. How deep does the stack get?
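These depths can be measured directly. A sketch of mine (last-element pivot, not necessarily what your text uses): instrument quicksort to record its maximum recursion depth, then compare a sorted input against a random one.

```python
def quicksort_depth(a):
    """Recursive quicksort that reports its maximum recursion depth.
    Stack space is proportional to this depth."""
    max_depth = 0

    def sort(lo, hi, depth):
        nonlocal max_depth
        max_depth = max(max_depth, depth)
        if lo >= hi:
            return
        pivot = a[hi]                     # last-element pivot (Lomuto)
        i = lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        sort(lo, i - 1, depth + 1)
        sort(i + 1, hi, depth + 1)

    sort(0, len(a) - 1, 0)
    return max_depth

n = 400
print(quicksort_depth(list(range(n))))    # sorted input: depth n-1, so O(n) space
```

A sorted input peels off one element per partition, so the depth (and the space) is O(n); a random input stays near lg n. Recursing into the smaller partition first, and looping on the larger one, caps the stack at O(lg n) even in the worst case.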

Note: Shellsort is not one I tend to teach so my knowledge of analysis of it is limited. I was just reading http://homepages.cwi.nl/~paulv/papers/shellsort.pdf which works on lower bounds and goes beyond Knuth's work but I don't really have a good description of the proof (I can follow the paper but would not have gone there myself).

-bcl
0

Author Comment

ID: 9666482
Thanks a lot... I think the main problem is that I don't know how to calculate space complexity and time complexity. Of course, some cases are very easy, like insertion sort and selection sort, but I don't know how to deal with the harder analyses, like quicksort and merge sort.
anyway thanks a lot..
0

LVL 11

Accepted Solution

ID: 9666615
The first step to calculating complexity is figuring out what is executed/allocated most often. Then count how many times that happens.

For quicksort: The key is the recursion. Assuming a perfect pivot (and a power of 2 size of the array), the amount of time it takes to sort a given size array, T(n), is:
T(n) = T(n/2) + T(n/2) + n

It took n operations to partition the array and it will take however long it takes to sort half of the array TWICE (each half) to finish the sort.
We also know that an array of size 1 is already sorted. This gives you a non-homogeneous recurrence equation. You solve the equation (by converting it to a homogeneous equation) and you find that this "perfect pivot" sort has a closed form that is O(n lg n).
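The recurrence can also be checked numerically (my own sketch): taking T(1) = 0 and charging n units for the partition, T(n) comes out to exactly n lg n for every power of two.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Work done by 'perfect pivot' quicksort: T(n) = 2 T(n/2) + n, T(1) = 0."""
    if n == 1:
        return 0
    return 2 * T(n // 2) + n

# For n = 2^k the closed form is exactly n * lg n = n * k:
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k
```

This matches the tree argument below: lg n levels, n units of work per level.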

Alternatively, think about how many recursive calls will be needed to sort a size-n array. Assuming a perfect partition, we need a depth of lg n recursive calls (n, n/2, n/4, n/8, n/16, ..., 2, 1; there are lg n numbers in this sequence, and the sequence represents the size of the section of the array being sorted at each recursive call). If you think about ALL of the recursive calls, the calling diagram is a full binary tree (rooted at the call with size n, which has two children: the call to sort the left half of the array (size n/2) and the call to sort the right half (size n/2), and so on).

Does that function call diagram make sense? Total complexity is the total of work done in every node. Since the work is proportional to the size of the section of the array being sorted, each level in the full binary tree contributes O(n) work (each level processes the whole array, one piece per node). Thus the total work is proportional to the height of the full binary tree times n. But there are lg n levels in this binary tree so the total complexity of the sort is O(n lg n).

Hope this helps. Getting a feel for complexity is a challenge (learning how to find and solve recurrence equations can be very helpful if you plan on doing a lot of this).

-bcl
0
