
This is a question for people who program for a living.

I just proved (using the Master theorem) that if we use quicksort and pick the pivot to be the median of the subarray we are partitioning (found with the median-of-medians algorithm, which runs in Θ(n) worst case), then the worst-case running time of quicksort is Θ(n lg n). So, asymptotically, this version of quicksort is as good as it can get.

My question now is: does anyone implement quicksort like this in practice? Or is it just one of those nice theoretical results that don't pan out in real life?

PS - I don't need proofs of what I'm stating; I just want to know whether this is widely known and actually useful.


3 Answers


This is known (see the Wikipedia entry on quicksort), but since the worst case is relatively rare in practice, the overhead that an O(n) selection algorithm adds to the average case is generally considered not worth it.
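For concreteness, here is a minimal sketch (in Python, with hypothetical helper names) of the scheme the question describes: the pivot is the exact median of each subarray, found by the classic Θ(n) median-of-medians selection. The list-copying in `select` is for readability, not efficiency; the "overhead" the answer mentions is exactly this extra work per partition.

```python
def select(a, k):
    """k-th smallest element of list a (0-based), worst-case Theta(n),
    using the median-of-medians pivot rule."""
    if len(a) <= 5:
        return sorted(a)[k]
    # Median of each group of 5, then the true median of those medians.
    medians = [sorted(a[i:i + 5])[len(a[i:i + 5]) // 2]
               for i in range(0, len(a), 5)]
    pivot = select(medians, len(medians) // 2)
    lt = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    if k < len(lt):
        return select(lt, k)
    if k < len(lt) + len(eq):
        return pivot
    return select([x for x in a if x > pivot], k - len(lt) - len(eq))

def partition3(a, lo, hi, pivot):
    """Three-way (Dutch-flag) in-place partition of a[lo:hi];
    returns the bounds of the run equal to the pivot."""
    lt, i, gt = lo, lo, hi
    while i < gt:
        if a[i] < pivot:
            a[lt], a[i] = a[i], a[lt]
            lt += 1
            i += 1
        elif a[i] > pivot:
            gt -= 1
            a[i], a[gt] = a[gt], a[i]
        else:
            i += 1
    return lt, gt

def quicksort_mom(a, lo=0, hi=None):
    """Quicksort whose pivot is the exact median of each subarray,
    giving a Theta(n log n) worst case overall."""
    if hi is None:
        hi = len(a)
    while hi - lo > 1:
        pivot = select(a[lo:hi], (hi - lo) // 2)  # Theta(n) median
        lt, gt = partition3(a, lo, hi, pivot)
        # Recurse on the smaller side; loop on the larger (bounds the stack).
        if lt - lo < hi - gt:
            quicksort_mom(a, lo, lt)
            lo = gt
        else:
            quicksort_mom(a, gt, hi)
            hi = lt
```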

answered 2013-05-05T14:42:04.230

When you partition around a pivot, you already know that pivot's "quality" (how evenly it divided the array). If it falls below some threshold, you can try something smarter to choose the next pivot. This keeps the O(n*log n) time bound while keeping the constant factor low, because the expensive selection runs only rarely.

If I'm not mistaken, the C++ STL uses something like this, but I don't have a link; that came from a conversation at work.
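A minimal sketch of that quality-check idea (my own illustration, not the actual STL code): take a cheap pivot first, and only when the resulting split is badly lopsided redo it with a carefully chosen pivot. Here `sorted(a)[len(a) // 2]` is just a stand-in for a Θ(n) selection algorithm.

```python
def sort_with_quality_check(a, threshold=0.9):
    """Quicksort sketch: cheap pivot by default, smarter pivot only
    when the split quality drops below the threshold."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]                 # cheap pivot guess
    lt = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    gt = [x for x in a if x > pivot]
    # Pivot "quality": fraction of elements on the bigger side.
    if max(len(lt), len(gt)) > threshold * len(a):
        pivot = sorted(a)[len(a) // 2]     # stand-in for Theta(n) selection
        lt = [x for x in a if x < pivot]
        eq = [x for x in a if x == pivot]
        gt = [x for x in a if x > pivot]
    return (sort_with_quality_check(lt, threshold) + eq +
            sort_with_quality_check(gt, threshold))
```

Because the fallback fires only on bad splits, the common case pays almost nothing for the guarantee.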

Update

The C++ STL (at least the one in Visual Studio) appears to do something different:


  1. Perform a partition.
  2. Sort the smaller part with an unconditional recursive call (it can't be larger than half the array, which keeps the O(n*log n) bound safe).
  3. Handle the larger part in the same loop (no recursive call).

If the number of iterations exceeds approximately 1.5 log2(N), it switches to heapsort, which is O(n*log n).
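The three steps plus the depth cutoff above can be sketched as follows (an illustration of the introsort scheme described, not the actual Visual Studio source; the cutoff of 16 and the first-element pivot are simplifying assumptions):

```python
import heapq
import math

def _heapsort(xs):
    """O(n log n) fallback sort via a binary heap."""
    h = list(xs)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

def _hoare_partition(a, lo, hi):
    """Hoare partition of a[lo:hi] with a[lo] as pivot (a production
    introsort would use median-of-three); returns a split point p
    with lo < p < hi."""
    pivot = a[lo]
    i, j = lo - 1, hi
    while True:
        i += 1
        while a[i] < pivot:
            i += 1
        j -= 1
        while a[j] > pivot:
            j -= 1
        if i >= j:
            return j + 1
        a[i], a[j] = a[j], a[i]

def introsort(a, lo=0, hi=None, depth=None):
    """Step 1: partition.  Step 2: recurse on the smaller part.
    Step 3: loop on the larger part.  When the iteration budget of
    ~1.5*log2(n) is spent, switch to heapsort."""
    if hi is None:
        hi = len(a)
    if depth is None:
        depth = int(1.5 * math.log2(max(hi - lo, 2)))
    while hi - lo > 16:
        if depth == 0:
            a[lo:hi] = _heapsort(a[lo:hi])  # budget exhausted
            return
        depth -= 1
        p = _hoare_partition(a, lo, hi)
        if p - lo < hi - p:
            introsort(a, lo, p, depth)   # smaller part: recursion
            lo = p                       # larger part: same loop
        else:
            introsort(a, p, hi, depth)
            hi = p
    a[lo:hi] = sorted(a[lo:hi])  # stand-in for a final insertion sort
```

Sorting the smaller part recursively bounds the stack depth at O(log n), and the depth budget turns quicksort's rare O(n²) tail into heapsort's guaranteed O(n log n).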

answered 2013-05-05T18:15:24.477

It really depends on where you work. Personally, I have never actually implemented it so far, but I do think whether it matters will vary with the requirements at your workplace.

answered 2013-05-05T14:38:06.263