We are an interdisciplinary group that studies the foundations of progress in computing: what are the most important trends, how do they underpin economic prosperity, and how can we harness them to sustain and promote productivity growth?

Latest news & insights

Featured Research

June 2025
Expertise
David Autor & Neil Thompson

When job tasks are automated, does this augment or diminish the value of labor in the tasks that remain? We argue the answer depends on whether removing tasks raises or lowers the expertise required for the remaining non-automated tasks. Since the same task may be relatively expert in one occupation and inexpert in another, automation can simultaneously replace experts in some occupations while augmenting expertise in others. We propose a conceptual model of occupational task bundling that predicts that changing occupational expertise requirements have countervailing wage and employment effects: automation that decreases expertise requirements reduces wages but permits the entry of less expert workers; automation that raises requirements raises wages but reduces the set of qualified workers. We develop a novel, content-agnostic method for measuring job task expertise, and we use it to quantify changes in occupational expertise demands over four decades attributable to job task removal and addition. We document that automation has raised wages and reduced employment in occupations where it eliminated inexpert tasks, but lowered wages and increased employment in occupations where it eliminated expert tasks. These effects are distinct from—and in the case of employment, opposite to—the effects of changing task quantities. The expertise framework resolves the puzzle of why routine task automation has lowered employment but often raised wages in routine task-intensive occupations. It provides a general tool for analyzing how task automation and new task creation reshape the scarcity value of human expertise within and across occupations.

August 2024
The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence
Peter Slattery, Alexander K. Saeri, Emily A. C. Grundy, Jess Graham, Michael Noetel, Risto Uuk, James Dao, Soroush Pour, Stephen Casper, Neil Thompson

The risks posed by Artificial Intelligence (AI) are of considerable concern to academics, auditors, policymakers, AI companies, and the public. However, a lack of shared understanding of AI risks can impede our ability to comprehensively discuss, research, and react to them. This paper addresses this gap by creating an AI Risk Repository to serve as a common frame of reference. It comprises a living database of 777 risks extracted from 43 taxonomies, which can be filtered based on two overarching taxonomies and easily accessed, modified, and updated via our website and online spreadsheets. We construct our Repository with a systematic review of taxonomies and other structured classifications of AI risk, followed by an expert consultation. We develop our taxonomies of AI risk using a best-fit framework synthesis. Our high-level Causal Taxonomy of AI Risks classifies each risk by its causal factors: (1) Entity: Human, AI; (2) Intentionality: Intentional, Unintentional; and (3) Timing: Pre-deployment, Post-deployment. Our mid-level Domain Taxonomy of AI Risks classifies risks into seven AI risk domains: (1) Discrimination & toxicity, (2) Privacy & security, (3) Misinformation, (4) Malicious actors & misuse, (5) Human-computer interaction, (6) Socioeconomic & environmental, and (7) AI system safety, failures, & limitations. These are further divided into 23 subdomains. The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. This creates a foundation for a more coordinated, coherent, and complete approach to defining, auditing, and managing the risks posed by AI systems.