In our paper published today in Nature, we introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.
For now, I mostly see work like this as part of the construction phase. Little by little, we automate the whole thing.
Just me doing a literature note:
The authors try to find faster algorithms for sorting sequences of 3-5 elements (programs call these small sorts most often, as subroutines of larger sorts), working directly in the computer's assembly instead of higher-level C++, in a space of possible instruction combinations comparable to a game of Go (~10^700). After each instruction is selected and appended to the algorithm, if the test output produced by the instructions chosen so far differs from the expected output, the model invalidates "the entire algorithm." This way, they ended up with algorithms for the "LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements." They then translated the assembly back to C++ and contributed it to LLVM's libc++.