Insertion sort vs. bubble sort vs. merge sort vs. quicksort


How is the insertion sort algorithm different from the bubble sort, quicksort, and merge sort algorithms? Which one should be used where, and what are the advantages and disadvantages of each?

Also, why do all of these sort algorithms need an inner for loop and a temp variable?

And in the for loop declaration used before inserting and displaying, why is the increment written as ++i rather than i++, as below?

   int[] content = new int[size];
   for (int i = 0; i < size; ++i) {

please advise
Insertion sort is usually more efficient in practice than bubble sort,
but has O(n^2) average-case performance, while quicksort and merge sort have O(n log n) average-case performance.

Insertion sort can be appropriate to use when the number of items to be sorted is small, as in the last steps of a quicksort implementation.
Bubble sort may be appropriate when the length of the code must be very small, or when sorting on a device on which adjacent swaps are faster than random access, such as a tape drive.
Quicksort can be appropriate when average case performance is important and when an in-place sort is desired.
Merge sort can be appropriate when a worst case performance guarantee is important and when a stable sort is desired.
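To illustrate the in-place property mentioned above, here is a sketch of quicksort using the Lomuto partition scheme (one of several common variants; the class and method names are my own):

```java
import java.util.Arrays;

public class QuickSortDemo {
    // In-place quicksort with a Lomuto partition: the last element is the
    // pivot, and elements smaller than it are swapped to the front.
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi];
        int p = lo;                       // boundary of the "< pivot" region
        for (int i = lo; i < hi; ++i) {
            if (a[i] < pivot) {
                int temp = a[i]; a[i] = a[p]; a[p] = temp;
                ++p;
            }
        }
        int temp = a[p]; a[p] = a[hi]; a[hi] = temp;  // place the pivot
        quickSort(a, lo, p - 1);
        quickSort(a, p + 1, hi);
    }

    public static void main(String[] args) {
        int[] data = {9, 4, 7, 1, 3};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 3, 4, 7, 9]
    }
}
```

Note that only a constant amount of extra storage is used per call; merge sort, by contrast, normally needs an O(n) auxiliary array.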
Tomas Helgi Johannsson commented:

Take a look at these links

and see them in action

      Tomas Helgi
Inner loops and temp variables are features of particular implementations, so it would be necessary to know the implementation to say why they were used.
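That said, for a concrete picture, here is a sketch of one common insertion sort implementation (the names are my own): the inner loop shifts larger elements to the right, and the temp variable holds the value being inserted so it isn't overwritten during the shift.

```java
import java.util.Arrays;

public class InsertionSortDemo {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; ++i) {
            int key = a[i];                  // temp variable: value to insert
            int j = i - 1;
            while (j >= 0 && a[j] > key) {   // inner loop: shift larger
                a[j + 1] = a[j];             // elements one slot right
                --j;
            }
            a[j + 1] = key;                  // drop the saved value in place
        }
    }

    public static void main(String[] args) {
        int[] content = {5, 2, 4, 6, 1, 3};
        insertionSort(content);
        System.out.println(Arrays.toString(content)); // [1, 2, 3, 4, 5, 6]
    }
}
```

Without the `key` variable, the first shift (`a[j + 1] = a[j]`) would overwrite the value being inserted, which is why an implementation of this shape needs it.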

++i and i++ have equivalent effects when the value of the expression is unused, as in the increment clause of a for loop.
However, if a return value is actually generated and not optimized out, then ++i can save some effort, because it does not need to preserve the value from before the increment.
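A minimal check of that equivalence in Java (hypothetical helper methods; both loops iterate the same number of times):

```java
public class IncrementDemo {
    // Counts iterations of a loop whose increment clause uses ++i.
    static int countPre(int n) {
        int count = 0;
        for (int i = 0; i < n; ++i) count += 1;
        return count;
    }

    // Identical loop, but the increment clause uses i++.
    static int countPost(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) count += 1;
        return count;
    }

    public static void main(String[] args) {
        // The discarded value makes the two forms interchangeable here.
        System.out.println(countPre(5) == countPost(5)); // true
    }
}
```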
Couple of extra comments on this:

Bubble sort is often overlooked as a sorting algorithm because it's O(n^2) on average.  But (when written correctly) it has an important property - which is that when given an already sorted, or almost sorted, list it is O(n).  So if you need to keep a list sorted it can be a good choice.
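A sketch of that "written correctly" version — bubble sort with an early-exit flag, so a pass that makes no swaps ends the sort after O(n) work on already-sorted input:

```java
import java.util.Arrays;

public class BubbleSortDemo {
    // Adaptive bubble sort: stop as soon as a full pass makes no swaps.
    static void bubbleSort(int[] a) {
        boolean swapped = true;
        for (int n = a.length; swapped; --n) {
            swapped = false;
            for (int i = 1; i < n; ++i) {
                if (a[i - 1] > a[i]) {
                    int temp = a[i - 1];   // temp variable for the swap
                    a[i - 1] = a[i];
                    a[i] = temp;
                    swapped = true;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 2};
        bubbleSort(data);
        System.out.println(Arrays.toString(data)); // [1, 2, 3]
    }
}
```

On a sorted array the first pass sets no flag and the sort finishes after a single scan.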

To understand why ++i is potentially faster than i++ it helps to think through the steps:

int i = 1 ;
int a = ++i ;     // 'a' gets value 2
int b = i++ ;    // 'b' also gets value 2

Expanding the code for ++i it would look like:

int preincrement(int var) {
    var = var + 1 ;
    return var ;
}
Expanding the code for i++ it would look like:

int postincrement(int var) {
    int originalValue = var ;
    var = var + 1 ;
    return originalValue ;
}
So it's the extra step in the logic for postincrement (storing the original value) which makes it potentially slower.
However, while this is a good thing to know it's almost NEVER going to make any measurable difference in the performance of your code.

Mostly using ++i over i++ just says to other developers:
   "I'm an old C developer and when I grew up computers were a lot slower than they are today" :)


Bubble sort ... when given an already sorted, or almost sorted, list it is O(n).  
True, but that's not how the insertion sort algorithm differs from bubble sort.
"I'm an old C developer and when I grew up computers were a lot slower than they are today"
Or, a developer used to more recent languages in which i could be a large complex object that takes a lot more work to copy than just an int.
gudii9 (Author) commented:
Let me understand more from these posts.

By the way, I like the graphics here.

I do see some wiki pages have similar graphics. Please advise.
Tomas Helgi Johannsson commented:

The links I provided in my earlier comment give you the pros and cons of each algorithm as well as
the code and a working live example.

    Tomas Helgi
That graphical site is really cool - helps you grasp how those algorithms are really working, which can be hard to understand if you just stare at the code.  Nice find.

gudii9 (Author) commented:

These pages show 8 different sorting algorithms on 4 different initial conditions. These visualizations are intended to:

Show how each algorithm operates.
Show that there is no best sorting algorithm.
Show the advantages and disadvantages of each algorithm.
Show that worst-case asymptotic behavior is not always the deciding factor in choosing an algorithm.
Show that the initial condition (input order and key distribution) affects performance as much as the algorithm choice.
The ideal sorting algorithm would have the following properties:

Stable: Equal keys aren't reordered.
Operates in place, requiring O(1) extra space.
Worst-case O(n·lg(n)) key comparisons.
Worst-case O(n) swaps.
Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
There is no algorithm that has all of these properties, and so the choice of sorting algorithm depends on the application.

Sorting is a vast topic; this site explores the topic of in-memory generic algorithms for arrays. External sorting, radix sorting, string sorting, and linked list sorting—all wonderful and interesting topics—are deliberately omitted to limit the scope of discussion.

I am trying to understand this in detail. I was not clear on which one is good and what the advantages and disadvantages of each are.

What are those 4 initial conditions? I see the pictures are moving, but what should I interpret from the graphics there? Please advise.
Random: no correlation between initial position and final position
Nearly Sorted: correlation between initial position and final position is close to 1
Reversed: correlation between initial position and final position is -1
Few Unique: most of the entries have duplicate values
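These four conditions can be generated with short helpers; the sketch below uses arbitrary sizes, swap counts, and key counts purely for illustration:

```java
import java.util.Arrays;
import java.util.Random;

public class InitialConditions {
    // Random: no correlation between initial and final position.
    static int[] random(int n, Random rng) {
        int[] a = new int[n];
        for (int i = 0; i < n; ++i) a[i] = rng.nextInt(n);
        return a;
    }

    // Nearly sorted: start sorted, then make a few adjacent swaps.
    static int[] nearlySorted(int n, Random rng) {
        int[] a = new int[n];
        for (int i = 0; i < n; ++i) a[i] = i;
        for (int k = 0; k < n / 10 + 1; ++k) {
            int i = rng.nextInt(n - 1);
            int temp = a[i]; a[i] = a[i + 1]; a[i + 1] = temp;
        }
        return a;
    }

    // Reversed: exactly the opposite of sorted order.
    static int[] reversed(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; ++i) a[i] = n - 1 - i;
        return a;
    }

    // Few unique: many entries share one of only a handful of keys.
    static int[] fewUnique(int n, Random rng) {
        int[] a = new int[n];
        for (int i = 0; i < n; ++i) a[i] = rng.nextInt(4);
        return a;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        System.out.println(Arrays.toString(reversed(8))); // [7, 6, 5, 4, 3, 2, 1, 0]
        System.out.println(Arrays.toString(fewUnique(8, rng)));
    }
}
```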
gudii9 (Author) commented:
Reversed: correlation between initial position and final position is -1

What does correlation mean here, and what does -1 mean?

please advise
In this context, it means the opposite order, as in low-to-high vs. high-to-low.
gudii9 (Author) commented:
Nearly Sorted: correlation between initial position and final position is close to 1

What does the above statement mean? What is 1? Please advise.
"nearly sorted" means the list is already close to sorted at the start.

E.g. "A,B,C,E,D,F,H,G" is nearly sorted.

Correlation is a measure of how similar two orderings are, and it goes from -1 (perfectly opposite) to +1 (exactly the same).

So a nearly sorted list is very similar (has a correlation close to 1) to the final sorted list.

Make sense?
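To make the numbers concrete, here is a small sketch (my own helper, using the standard Pearson formula) that computes the correlation between a sorted sequence of positions and a rearranged copy:

```java
public class CorrelationDemo {
    // Pearson correlation between two equal-length integer sequences.
    static double correlation(int[] x, int[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; ++i) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    public static void main(String[] args) {
        int[] sorted   = {0, 1, 2, 3, 4};
        int[] reversed = {4, 3, 2, 1, 0};
        int[] nearly   = {0, 1, 3, 2, 4};   // one adjacent pair swapped
        System.out.println(correlation(sorted, sorted));    // 1.0
        System.out.println(correlation(sorted, reversed));  // -1.0
        System.out.println(correlation(sorted, nearly));    // 0.9
    }
}
```

So "Reversed" really does score exactly -1, and a nearly sorted list scores close to +1, matching the definitions above.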
