Merge Sort
Introduction
We have seen the brute force approach already. How can we speed things up? Think of cutting the outer loop down from iterating over the whole array to only the height of a tree.
Question (LC.464)
Given an integer array, sort it in ascending order. Use quick sort, merge sort, heap sort or any O(nlogn) algorithm.
Example
I: [3, 2, 1, 4, 5]
O:
start: 0 end: 4 slice: [3, 2, 1, 4, 5]
start: 0 end: 2 slice: [3, 2, 1]
start: 0 end: 1 slice: [3, 2]
start: 0 end: 0 slice: [3]
start: 1 end: 1 slice: [2]
merged: [2, 3]
start: 2 end: 2 slice: [1]
merged: [1, 2, 3]
start: 3 end: 4 slice: [4, 5]
start: 3 end: 3 slice: [4]
start: 4 end: 4 slice: [5]
merged: [4, 5]
merged: [1, 2, 3, 4, 5]
Analysis
Why is this algorithm not in-place? Because the merge step has to be implemented with an extra array.
Base case: a subarray with only one element is already sorted, so we just return it.
Divide: split the array into two subarrays at the midpoint. (What if the length is odd? Recursing on [start, mid] and [mid+1, end] handles it.)
Conquer: sort the left and right subarrays by calling mergeSort on each.
Combine: use a merge function to combine the two sorted halves. Merging two sorted arrays, remember?
Code w/ temp array
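Here is a minimal sketch of the temp-array version in Go. Apart from mergeSort, which the analysis above mentions, the function and variable names are placeholders of mine, not necessarily the ones used in the original solution.

```go
package main

import "fmt"

// mergeSort sorts nums[start..end] (inclusive), reusing the shared temp
// slice as scratch space for every merge call.
func mergeSort(nums []int, start, end int, temp []int) {
	// base case: zero or one element is already sorted
	if start >= end {
		return
	}
	// divide: split at the midpoint (mid+1 below handles odd lengths)
	mid := start + (end-start)/2
	// conquer: sort each half recursively
	mergeSort(nums, start, mid, temp)
	mergeSort(nums, mid+1, end, temp)
	// combine: merge the two sorted halves
	merge(nums, start, mid, end, temp)
}

// merge combines the sorted ranges nums[start..mid] and nums[mid+1..end]
// through temp, then copies the result back into nums.
func merge(nums []int, start, mid, end int, temp []int) {
	left, right, idx := start, mid+1, start
	for left <= mid && right <= end {
		if nums[left] <= nums[right] {
			temp[idx] = nums[left]
			left++
		} else {
			temp[idx] = nums[right]
			right++
		}
		idx++
	}
	for left <= mid {
		temp[idx] = nums[left]
		left++
		idx++
	}
	for right <= end {
		temp[idx] = nums[right]
		right++
		idx++
	}
	copy(nums[start:end+1], temp[start:end+1])
}

func main() {
	nums := []int{3, 2, 1, 4, 5}
	temp := make([]int, len(nums)) // the extra O(n) array, allocated once
	mergeSort(nums, 0, len(nums)-1, temp)
	fmt.Println(nums) // [1 2 3 4 5]
}
```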
Code w/ new slice
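And a sketch of the variant that allocates a new slice on every merge instead of writing into shared scratch space; mergeSortCopy and mergeTwo are hypothetical names.

```go
// mergeSortCopy returns a new sorted slice and never mutates its input,
// so there are no side effects on shared state.
func mergeSortCopy(nums []int) []int {
	// base case: zero or one element is already sorted
	if len(nums) <= 1 {
		return nums
	}
	mid := len(nums) / 2
	left := mergeSortCopy(nums[:mid])
	right := mergeSortCopy(nums[mid:])
	return mergeTwo(left, right)
}

// mergeTwo merges two sorted slices into a freshly allocated slice.
func mergeTwo(left, right []int) []int {
	out := make([]int, 0, len(left)+len(right))
	i, j := 0, 0
	for i < len(left) && j < len(right) {
		if left[i] <= right[j] {
			out = append(out, left[i])
			i++
		} else {
			out = append(out, right[j])
			j++
		}
	}
	// at most one of these still has leftover elements
	out = append(out, left[i:]...)
	out = append(out, right[j:]...)
	return out
}
```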
Programs with side effects, such as modifying a shared temp array, cannot be easily parallelized because mutual exclusion has to be involved.
Concurrent Code
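A sketch of one way to make it concurrent, building on mergeSortCopy and mergeTwo from the new-slice sketch above. The sequentialCutoff constant is my own addition, set from the break-even figure discussed below; it is not a universal value and should be tuned for your workload.

```go
import "sync"

// sequentialCutoff: below this size, goroutine overhead outweighs the
// benefit, so fall back to the plain sequential version. Assumed from the
// ~1000-element break-even point mentioned below.
const sequentialCutoff = 1000

// parallelMergeSort sorts one half in a new goroutine, the other half in
// the current goroutine, then merges the two results.
func parallelMergeSort(nums []int) []int {
	if len(nums) <= sequentialCutoff {
		return mergeSortCopy(nums)
	}
	mid := len(nums) / 2
	var left []int
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		left = parallelMergeSort(nums[:mid])
	}()
	right := parallelMergeSort(nums[mid:])
	wg.Wait() // happens-before: left is safe to read after Wait returns
	return mergeTwo(left, right)
}
```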
In practice, the concurrent code will not run faster by default. There is a cost associated with making the code concurrent: the overhead of creating each goroutine, the Go scheduler having to schedule it, the garbage collector having to clean up after it, and so on. According to the Medium post in the references, the break-even point is around 1,000 elements.
Conclusion
The recurrence is T(n) = 2T(n/2) + n.
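Unrolling the recurrence level by level shows where the bound comes from: each level of the recursion tree does about n work, and there are about log n levels.

```latex
\begin{aligned}
T(n) &= 2T(n/2) + n \\
     &= 4T(n/4) + 2n \\
     &= \cdots \\
     &= 2^k\,T\!\left(n/2^k\right) + kn \\
     &= n\,T(1) + n\log_2 n \qquad (k = \log_2 n) \\
     &= O(n\log n)
\end{aligned}
```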


Merge sort is undoubtedly fast with its O(nlogn) runtime. But it does require extra O(n) space for its merging step. Next up, we will introduce another sorting algorithm with average O(nlogn) time and O(1) space. Stay tuned.
References
UMD CS251 Lecture 6 by David Mount
Medium - Parallel Merge Sort in Go
Wikipedia - Timsort