On the Origin of Algorithmic Progress in AI

Hans Gundlach, Alex Fogelson, Jayson Lynch, Ana Trisovic, Jonathan Rosenfeld, Anmol Sandhu, Neil Thompson

July 26, 2025
Algorithmic innovations have been estimated to increase the training-FLOP efficiency of AI models by a factor of 22,000 between 2012 and 2023 (Ho et al., 2024). Running small-scale ablation experiments on key innovations from this period, we can account for less than a 10× gain. Surveying the broader literature, we estimate that additional innovations not included in our ablations account for less than another 10×, yielding a total under 100×. This gap leads us to conduct scaling experiments, which reveal that much of the missing efficiency can be attributed to algorithms whose efficiency improvements are scale-dependent. In particular, scaling experiments comparing LSTMs and Transformers reveal an exponent difference in their compute-optimal scaling laws, whereas many other innovations show little scaling difference. These experiments demonstrate that, contrary to standard assumptions, an algorithm's efficiency gains are tied to compute scale. Using experimental extrapolation and literature estimates, we account for a 6,930× efficiency gain over the same time period, with the scale-dependent LSTM-to-Transformer transition accounting for the majority of the gains. Our results indicate that algorithmic progress for small models has been far slower than previously assumed, and that measures of algorithmic efficiency are strongly reference-dependent.
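
A minimal sketch of why efficiency gains can be scale-dependent, assuming generic power-law forms rather than the paper's fitted values: suppose algorithm A (e.g., an LSTM) and algorithm B (e.g., a Transformer) follow compute-optimal loss curves

    L_A(C) = a\,C^{-\alpha}, \qquad L_B(C) = b\,C^{-\beta}, \qquad \beta > \alpha,

where $a$, $b$, $\alpha$, $\beta$ are illustrative constants. Defining the compute-equivalent gain of B over A at budget $C$ by $L_A(\mathrm{CEG}(C)\cdot C) = L_B(C)$ (the extra compute A would need to match B's loss) gives

    \mathrm{CEG}(C) = \left(\frac{b}{a}\right)^{-1/\alpha} C^{\,\beta/\alpha - 1},

which grows with $C$ whenever $\beta > \alpha$. Under these assumptions, the measured efficiency gain of one algorithm over another is not a single number but depends on the reference compute scale.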