
Over the past several years, leading tech firms have poured billions of dollars into a simple concept: As artificial intelligence systems are given access to more data and computing power, their performance tends to improve.
But new research from MIT FutureTech suggests that the “bigger is better” approach to AI development may be reaching the point of diminishing returns. Previous work on neural scaling laws has shown that as AI models grow, the performance gains from additional computing power begin to fade. In their modeling, MIT researchers, including Hans Gundlach and Jayson Lynch, found that this falloff is significant enough that companies will eventually see little comparative advantage from scaling their models much faster than other organizations do.
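For a sense of why the returns diminish, neural scaling laws are typically expressed as power laws; the form below is a standard illustration, not the specific model fitted in the MIT study:

$$ L(C) \approx L_{\infty} + a\,C^{-\alpha} $$

Here $L$ is the model's loss (lower is better), $C$ is the training compute, and $a, \alpha > 0$ are fitted constants. Because the marginal improvement $\mathrm{d}L/\mathrm{d}C \propto -C^{-\alpha-1}$ shrinks as $C$ grows, each additional unit of compute buys a smaller gain in performance.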