I have an issue with the following code:
#include <algorithm>
#include <functional>
#include <vector>
using namespace std;

typedef vector<double>::iterator double_it;

class quicksort {
private:
    void _sort(double_it begin, double_it end)
    {
        if (begin == end) { return; }
        double_it it = partition(begin, end, bind2nd(less<double>(), *begin));
        iter_swap(begin, it - 1);
        _sort(begin, it - 1);
        _sort(it, end);
    }
public:
    quicksort() {}
    void operator()(vector<double>& data)
    {
        double_it begin = data.begin();
        double_it end = data.end();
        _sort(begin, end);
    }
};
However, this breaks down for large inputs: it works with 10 000 elements, but not with 100 000.
Example code:
int main()
{
    const int n = 100000;   // 10 000 works, 100 000 does not
    vector<double> v;
    for (int i = n - 1; i >= 0; --i)
        v.push_back(rand());
    quicksort f;
    f(v);
    return 0;
}
Doesn’t the STL partition function work for such sizes? Or am I missing something?
Many thanks for your help.
I see a couple of problems. First, I wouldn’t include the pivot in your partitioning: partition the range [begin + 1, end) against *begin instead, so the pivot stays put at begin and cannot be shuffled around by partition.
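Reconstructed as code, that first suggestion amounts to something like the sketch below (partition_step is just an illustrative name, and the lambda stands in for the post's bind2nd(less<double>(), *begin), which was deprecated in C++11):

```cpp
#include <algorithm>
#include <vector>
using namespace std;

typedef vector<double>::iterator double_it;

// Partition only [begin + 1, end) against the pivot *begin, so the
// pivot itself never moves during partitioning. Pre-C++11 code would
// write the predicate as bind2nd(less<double>(), *begin).
double_it partition_step(double_it begin, double_it end)
{
    double pivot = *begin;
    double_it it = partition(begin + 1, end,
                             [pivot](double x) { return x < pivot; });
    iter_swap(begin, it - 1);  // pivot lands in its final position
    return it;
}
```

Because the pivot is excluded from the partitioned range, partition can never move it, so iter_swap(begin, it - 1) reliably places it between the two groups.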
Also, I wouldn’t continue to include the pivot in your future sorts: once the pivot has been swapped into place, recurse only on the subranges on either side of it. But you need to be careful that it - 2 isn’t less than begin, so check that it - 1 != begin first. There is no need to continually sort the pivot: it is already in the correct spot, and re-sorting it just adds a lot of needless recursion.

You can certainly still have stack overflow problems with this code even after the changes. For instance, if you sort an already-sorted array, the performance will be O(N^2), and if N is very large you will get a stack overflow. Using a randomly chosen pivot essentially eliminates the sorted-array problem, but you can still have trouble if the array is all the same element. In that case, you need to alter your algorithm to use Bentley-McIlroy (three-way) partitioning or the like. Or you could change it to an introsort and switch to heapsort when the recursion depth gets very deep.
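Putting the suggestions together with a randomly chosen pivot, the repaired _sort might look like the sketch below. This is an interpretation of the advice above rather than the original snippet; the guard before the left recursion mirrors the it - 1 != begin check:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>
using namespace std;

typedef vector<double>::iterator double_it;

void _sort(double_it begin, double_it end)
{
    if (end - begin < 2) { return; }

    // Randomly chosen pivot: swap it to the front so the rest of the
    // logic can keep treating *begin as the pivot.
    iter_swap(begin, begin + rand() % (end - begin));
    double pivot = *begin;

    // Partition only [begin + 1, end); the pivot itself is excluded.
    double_it it = partition(begin + 1, end,
                             [pivot](double x) { return x < pivot; });

    // Move the pivot into its final position at it - 1.
    iter_swap(begin, it - 1);

    // Recurse on both sides, never touching the pivot again.
    if (it - 1 != begin) { _sort(begin, it - 1); }
    _sort(it, end);
}
```

An already-sorted input no longer degenerates with the random pivot, but an array of all-equal elements still recurses once per element, which is why the text above points to Bentley-McIlroy partitioning or introsort for that case.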