- cross-posted to:
- hackernews@derp.foo
- softwareengineering@group.lt
AlphaDev uncovered new sorting algorithms that led to improvements in the LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements.
That is 70% faster on sequences of five elements. Using this to handle the end of the recursion in LLVM's current std::sort implementation results in a 1.7% speedup on sequences exceeding 250k elements.
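To make the structure concrete, here is a minimal sketch (not AlphaDev's generated assembly, and not the actual libc++ code) of how a fixed sorting network for tiny inputs can sit at the bottom of a recursive sort. The names `sort3`, `cmp_swap`, and `my_sort`, and the cutoff values, are illustrative assumptions; the point is that the micro-optimized small-sort only runs on the leaves of the recursion, which is why a 70% win there shows up as a small overall speedup.

```cpp
#include <algorithm>
#include <cstddef>

// Compare-exchange step of a sorting network (illustrative, not the
// branchless assembly AlphaDev discovered).
inline void cmp_swap(int& a, int& b) {
    int lo = std::min(a, b);
    int hi = std::max(a, b);
    a = lo;
    b = hi;
}

// Fixed sorting network for exactly 3 elements (hypothetical name).
inline void sort3(int* v) {
    cmp_swap(v[0], v[1]);
    cmp_swap(v[1], v[2]);
    cmp_swap(v[0], v[1]);
}

// Sketch of a quicksort that hands tiny partitions to the fixed network;
// the end of the recursion is where the 70% micro-speedup applies.
void my_sort(int* first, int* last) {
    std::ptrdiff_t n = last - first;
    if (n <= 1) return;
    if (n == 3) { sort3(first); return; }          // small-sort base case
    if (n < 8)  { std::sort(first, last); return; } // placeholder for other small sizes
    int pivot = first[n / 2];
    int* mid = std::partition(first, last,
                              [pivot](int x) { return x < pivot; });
    if (mid == first) { std::sort(first, last); return; } // degenerate split, bail out
    my_sort(first, mid);
    my_sort(mid, last);
}
```

Most of the time on a 250k-element input is spent partitioning, so speeding up only the base cases can't move the total much, which lines up with the reported 1.7%.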
I wonder if their paper has a plot of the speedups against number of elements. Did they just stop measuring at 250k? What was the shape of the curve?
I didn’t bother reading it; I’m not that interested in AI stuff. But since these are the results they highlight, it likely doesn’t get any better for larger sizes.
I’ll bet it just ends up being limited by memory bandwidth when stuffing things into registers for the optimized algorithm. Or maybe something like Mojo’s autotuning could find the best way to partition the work for the hardware.