
A new study from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) puts that question to a rigorous empirical test. Matthias Mertens, a research scientist with MIT's FutureTech project, Neil Thompson, director of FutureTech at CSAIL, and co-author Natalia Fischl-Lanzoni analyzed training and benchmark data for 809 large language models released between 2022 and 2025. Using scaling-law regressions with developer and time fixed effects, along with a Shapley decomposition to attribute performance variation, they disentangled the contributions of raw compute, algorithmic progress shared across the field, and company-specific advantages. Their findings paint a nuanced picture: at the frontier, 80 to 90 percent of performance is explained by scale alone, but away from it, proprietary techniques and algorithmic advances matter considerably more. Here, Thompson describes what the team found, what it means for competition in the AI industry, and why the answer depends on where you look.