Bubble sort ... when given an already sorted, or almost sorted, list it is O(n). True, but that is not how the insertion sort algorithm differs from bubble sort.
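The adaptivity being claimed here is the early-exit optimization: a bubble sort that stops as soon as a pass makes no swaps finishes an already-sorted list in a single O(n) pass. A minimal C sketch of that variant (function and variable names are illustrative, not from the original discussion):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Bubble sort with the early-exit optimization: if a full pass makes no
 * swaps, the array is already sorted and we stop. A sorted input therefore
 * costs only one O(n) pass. Returns the number of comparisons performed,
 * so the adaptive behavior can be observed directly. */
size_t bubble_sort(int *a, size_t n) {
    size_t comparisons = 0;
    bool swapped = true;
    for (size_t end = n; swapped && end > 1; --end) {
        swapped = false;
        for (size_t i = 0; i + 1 < end; ++i) {
            ++comparisons;
            if (a[i] > a[i + 1]) {
                int tmp = a[i];
                a[i] = a[i + 1];
                a[i + 1] = tmp;
                swapped = true;
            }
        }
    }
    return comparisons;
}
```

On a sorted array of length n this performs exactly n - 1 comparisons and no swaps; on a reversed array it degrades to the full O(n²) behavior.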
"I'm an old C developer and when I grew up computers were a lot slower than they are today"Or, a developer used to more recent languages in which i could be a large complex object that takes a lot more work to copy than just an int.
These pages show 8 different sorting algorithms on 4 different initial conditions. These visualizations are intended to:
Show how each algorithm operates.
Show that there is no best sorting algorithm.
Show the advantages and disadvantages of each algorithm.
Show that worst-case asymptotic behavior is not always the deciding factor in choosing an algorithm.
Show that the initial condition (input order and key distribution) affects performance as much as the algorithm choice.
The ideal sorting algorithm would have the following properties:
Stable: Equal keys aren't reordered.
Operates in place, requiring O(1) extra space.
Worst-case O(n·lg(n)) key comparisons.
Worst-case O(n) swaps.
Adaptive: Speeds up to O(n) when data is nearly sorted or when there are few unique keys.
There is no algorithm that has all of these properties, and so the choice of sorting algorithm depends on the application.
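Insertion sort is a good example of the trade-off: it is stable, in-place, and adaptive, but its worst case is O(n²) comparisons and moves, so it misses the O(n·lg(n)) ideal. A sketch in C (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Insertion sort: stable (the strict > comparison means equal keys never
 * pass each other), in-place (O(1) extra space), and adaptive (a sorted
 * input costs only n - 1 comparisons and no moves). The worst case,
 * however, is O(n^2) comparisons and moves. */
void insertion_sort(int *a, size_t n) {
    for (size_t i = 1; i < n; ++i) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {  /* strict > preserves stability */
            a[j] = a[j - 1];               /* shift, don't swap */
            --j;
        }
        a[j] = key;
    }
}
```

Note that each element is shifted rather than swapped into place, which is one concrete way insertion sort differs from bubble sort even when both are adaptive.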
Sorting is a vast topic; this site explores in-memory, generic sorting algorithms for arrays. External sorting, radix sorting, string sorting, and linked-list sorting (all wonderful and interesting topics) are deliberately omitted to limit the scope of the discussion.
Reversed: correlation between initial position and final position is -1
Nearly Sorted: correlation between initial position and final position is close to 1
++i and i++ have equivalent effects when the value of the operation is unused, as in the increment clause of a for loop. However, if the return value is actually generated and not optimized out, then ++i can save some effort: i++ must save a copy of the old value before performing the increment.
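The point above can be made concrete in C (function names are illustrative). When the expression's value is discarded, the two forms drive a loop identically; the difference only appears when the value is used:

```c
#include <assert.h>

/* When the value is discarded, ++i and i++ are interchangeable:
 * both loops visit exactly the same indices. */
int sum_pre(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i) s += a[i];
    return s;
}

int sum_post(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* When the value IS used, they differ: i++ must yield the old value,
 * which is the extra copy the comment refers to. For an int this is
 * trivially cheap; for a large object (e.g. a C++ iterator) it is not. */
void pre_vs_post(int *pre, int *post) {
    int i = 5, j = 5;
    *pre = ++i;   /* increments first, yields 6 */
    *post = j++;  /* yields the original 5, then increments */
}
```

For plain ints any modern compiler generates identical code for the two loop forms; the copy-cost argument only bites in languages where i can be an arbitrarily expensive object to copy.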