Big O notation revisited
Notice how you’re not splitting the array into two halves. Instead, one
of the sub-arrays is always empty. So the call stack is really long. Now
instead, suppose you always picked the middle element as the pivot.
Look at the call stack now.
It’s so short! Because you divide the array in half every time, you don’t
need to make as many recursive calls. You hit the base case sooner, and
the call stack is much shorter.
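If you want to check this yourself, here’s a small sketch that sorts an already-sorted array of eight numbers and reports how deep the call stack gets for each pivot choice. (The quicksort_depth function and the pick_pivot parameter are names made up for this illustration; they aren’t part of the quicksort code shown earlier.)

def quicksort_depth(array, pick_pivot, depth=1):
    # Sort `array` and return (sorted list, deepest call stack level reached).
    if len(array) < 2:
        return array, depth                          # base case: 0 or 1 element is already sorted
    pivot_index = pick_pivot(array)
    pivot = array[pivot_index]
    rest = array[:pivot_index] + array[pivot_index + 1:]
    less = [x for x in rest if x <= pivot]           # sub-array of everything <= pivot
    greater = [x for x in rest if x > pivot]         # sub-array of everything > pivot
    sorted_less, d1 = quicksort_depth(less, pick_pivot, depth + 1)
    sorted_greater, d2 = quicksort_depth(greater, pick_pivot, depth + 1)
    return sorted_less + [pivot] + sorted_greater, max(d1, d2)

numbers = list(range(1, 9))                          # already-sorted array of 8 elements
print(quicksort_depth(numbers, lambda a: 0)[1])             # first element as pivot: prints 8
print(quicksort_depth(numbers, lambda a: len(a) // 2)[1])   # middle element as pivot: prints 4

With the first element as the pivot, the stack reaches 8 levels for 8 elements; with the middle element, it only reaches 4.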
The first example you saw is the worst-case scenario, and the second example is the best-case scenario. In the worst case, the stack size is O(n). In the best case, the stack size is O(log n).

Now look at the first level in the stack. You pick one element as the pivot, and the rest of the elements are divided into sub-arrays. You touch all eight elements in the array. So this first operation takes O(n) time. You touched all eight elements on this level of the call stack. But actually, you touch O(n) elements on every level of the call stack.
Even if you partition the array differently, you’re still touching O(n) elements every time. So each level takes O(n) time to complete.

In this example, there are O(log n) levels (the technical way to say that is, “The height of the call stack is O(log n)”). And each level takes O(n) time. The entire algorithm will take O(n) * O(log n) = O(n log n) time. This is the best-case scenario.
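Here’s a sketch that tallies how many elements each level of the call stack touches, using the middle element as the pivot like the best case above. (The quicksort_counting function and the touches dictionary are names made up for this example.)

from collections import defaultdict

def quicksort_counting(array, touches, depth=0):
    # Sort `array`, tallying how many elements each call stack level touches.
    touches[depth] += len(array)                     # every call looks at each of its elements
    if len(array) < 2:
        return array                                 # base case: already sorted
    mid = len(array) // 2                            # middle element as pivot (best case)
    pivot = array[mid]
    rest = array[:mid] + array[mid + 1:]
    less = [x for x in rest if x <= pivot]
    greater = [x for x in rest if x > pivot]
    return (quicksort_counting(less, touches, depth + 1)
            + [pivot]
            + quicksort_counting(greater, touches, depth + 1))

touches = defaultdict(int)
quicksort_counting(list(range(1, 9)), touches)
print(dict(touches))                                 # {0: 8, 1: 7, 2: 5, 3: 1}

No level touches more than n elements, and there are only about log n levels, so the total work comes out to about n * log n.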
In the worst case, there are O(n) levels, so the algorithm will take O(n) * O(n) = O(n²) time.
Well, guess what? I’m here to tell you that the best case is also the average case. If you always choose a random element in the array as the pivot, quicksort will complete in O(n log n) time on average. Quicksort is one of the fastest sorting algorithms out there, and it’s a very good example of D&C.
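Here’s one way to adapt the quicksort from earlier in this chapter so it picks a random pivot. (This is a sketch, not the chapter’s original listing; the pivot_index and rest variables are introduced here just to make the random choice work.)

import random

def quicksort(array):
    # Quicksort with a randomly chosen pivot: O(n log n) on average.
    if len(array) < 2:
        return array                                 # base case: 0 or 1 element is already sorted
    pivot_index = random.randrange(len(array))       # random pivot avoids the sorted-input worst case
    pivot = array[pivot_index]
    rest = array[:pivot_index] + array[pivot_index + 1:]
    less = [x for x in rest if x <= pivot]           # everything less than or equal to the pivot
    greater = [x for x in rest if x > pivot]         # everything greater than the pivot
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([10, 5, 2, 3]))                      # [2, 3, 5, 10]

Because the pivot is chosen at random, an already-sorted input no longer pushes you into the worst case.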