Why do we analyze the average-case performance of a randomized algorithm and not its worst-case performance?
Because the worst case occurs only rarely. By randomizing the pivot choice, no particular input can consistently elicit the worst-case behavior, so the expected running time over the algorithm's own random choices is the meaningful measure of its performance.
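To make this concrete, here is a minimal sketch (using a Lomuto-style partition, with a hypothetical comparison counter) contrasting deterministic quicksort, whose worst case is triggered by an already-sorted input, with the randomized variant on the same input:

```python
import random

def quicksort(A, p, r, randomized, stats):
    """Lomuto quicksort on A[p..r]; stats[0] counts element comparisons."""
    if p >= r:
        return
    if randomized:
        k = random.randint(p, r)   # randomized pivot choice
        A[k], A[r] = A[r], A[k]
    x = A[r]                       # partition around A[r]
    i = p - 1
    for j in range(p, r):
        stats[0] += 1
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    quicksort(A, p, i, randomized, stats)
    quicksort(A, i + 2, r, randomized, stats)

n = 500
det = [0]
quicksort(list(range(n)), 0, n - 1, randomized=False, stats=det)
rnd = [0]
quicksort(list(range(n)), 0, n - 1, randomized=True, stats=rnd)
print(det[0], rnd[0])
```

On the sorted input the deterministic version performs exactly n(n-1)/2 = 124750 comparisons (its Θ(n²) worst case), while the randomized version performs about 2n ln n comparisons on the same input, because no fixed input is bad for it.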
During the running of the procedure RANDOMIZED-QUICKSORT, how many calls are made to the random-number generator RANDOM in the worst case? How about in the best case? Give your answer in terms of Θ-notation.
RANDOM is called exactly once per call to RANDOMIZED-PARTITION, so the number of calls to RANDOM equals the number of partitioning steps.

Best case (balanced splits): T(n) = 2T(n/2) + 1 = Θ(n)
Worst case (splits of size 0 and n-1): T(n) = T(n-1) + 1 = Θ(n)

In both cases, Θ(n) calls are made to RANDOM.
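The count can be checked empirically. The sketch below (an assumed Lomuto-style RANDOMIZED-QUICKSORT, with a hypothetical wrapper that counts invocations of RANDOM) shows that the number of calls always falls between about n/2 and n - 1, i.e. it is Θ(n) regardless of how the splits turn out:

```python
import random

calls = [0]  # how many times RANDOM has been invoked

def rand(p, r):
    """Stand-in for RANDOM(p, r) that counts its own invocations."""
    calls[0] += 1
    return random.randint(p, r)

def randomized_quicksort(A, p, r):
    if p < r:
        k = rand(p, r)             # the single RANDOM call per partition step
        A[k], A[r] = A[r], A[k]
        x = A[r]                   # Lomuto partition around A[r]
        i = p - 1
        for j in range(p, r):
            if A[j] <= x:
                i += 1
                A[i], A[j] = A[j], A[i]
        A[i + 1], A[r] = A[r], A[i + 1]
        randomized_quicksort(A, p, i)
        randomized_quicksort(A, i + 2, r)

n = 100
randomized_quicksort(list(range(n)), 0, n - 1)
print(calls[0])  # between about n/2 and n - 1 calls: Θ(n)
```

Each call to RANDOM removes one pivot from further consideration and only subarrays of size at least 2 trigger a call, which is why the count is at least (n-1)/2 and at most n - 1, matching the two recurrences above.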