Deep Learning's Diminishing Returns: The Cost of Improvement is Becoming Unsustainable

October 1, 2021

Deep learning is now being used to translate between languages, predict how proteins fold, analyze medical scans, and play games as complex as Go, to name just a few applications of a technique that is now becoming pervasive. Success in those and other realms has brought this machine-learning technique from obscurity in the early 2000s to dominance today.

Although deep learning's rise to fame is relatively recent, its origins are not. In 1958, back when mainframe computers filled rooms and ran on vacuum tubes, knowledge of the interconnections between neurons in the brain inspired Frank Rosenblatt at Cornell to design the first artificial neural network, which he presciently described as a “pattern-recognizing device.” But Rosenblatt's ambitions outpaced the capabilities of his era—and he knew it. Even his inaugural paper was forced to acknowledge the voracious appetite of neural networks for computational power, bemoaning that “as the number of connections in the network increases…the burden on a conventional digital computer soon becomes excessive.”

Fortunately for such artificial neural networks—later rechristened “deep learning” when they included extra layers of neurons—decades of Moore's Law and other improvements in computer hardware yielded a roughly 10-million-fold increase in the number of computations that a computer could do in a second. So when researchers returned to deep learning in the late 2000s, they wielded tools equal to the challenge.
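A quick back-of-the-envelope check puts that figure in perspective. The short sketch below takes only the 1958-to-late-2000s span and the roughly 10-million-fold figure from the paragraph above; the rest is an illustrative assumption, and asks what doubling cadence such growth would imply:

```python
import math

# Rough sanity check (illustrative assumption, not a figure from the article):
# what doubling cadence would a roughly 10-million-fold increase over about
# 50 years imply?

GROWTH_FACTOR = 10_000_000   # the "roughly 10-million-fold" increase cited above
YEARS = 2008 - 1958          # Rosenblatt's 1958 paper to the late 2000s

doublings = math.log2(GROWTH_FACTOR)   # number of doublings needed
doubling_period = YEARS / doublings    # years per doubling

print(f"~{doublings:.1f} doublings, one every ~{doubling_period:.1f} years")
# Prints ~23.3 doublings, one every ~2.2 years — in line with the classic
# Moore's-law cadence of a doubling roughly every two years.
```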

See the full argument by Neil C. Thompson, Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso.