

Quantitative Estimates of AI Risk

This page is in an early draft. It is very incomplete and may contain errors.

Some people working in AI safety have published quantitative estimates of how likely they think it is that AI will pose an existential threat.

Background

Many thinkers believe that advanced artificial intelligence (AI) poses a large threat to humanity's long-term survival or flourishing. Here we review their quantitative estimates.

For quotes from specific prominent people working on AI, see this page.

This page draws heavily from this database made by Michael Aird at Convergence Analysis.

Quantitative Estimates

Individuals


| Estimator | Date | What is estimated | p(Doom) | Source |
| Toby Ord | 2020 | Existential catastrophe by 2120 as a result of unaligned AI | 0.1 | The Precipice |
| Joe Carlsmith | 2021 | Existential catastrophe by 2070 from advanced, planning, strategic AI | 0.05 | Is Power-Seeking AI an Existential Risk? |
| Katja Grace | 2023 | Bad future because AI agents with bad goals control cognitive labor | 0.19 | Will AI end everything? A guide to guessing |
| Nate Soares | 2021 | Existential catastrophe by 2070 from advanced, planning, strategic AI | 0.77 | Comments on Carlsmith's "Is power-seeking AI an existential risk?" |
| Eliezer Yudkowsky | 2022 | AGI "killing literally everyone" | ~1 | AGI Ruin: A List of Lethalities |
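Note that these estimates answer somewhat different questions (different outcomes, timeframes, and AI systems), so any aggregate across them is only illustrative. Below is a minimal sketch, not part of the original page, of one way to summarize the p(Doom) values in the table using two common aggregation choices: the median and the geometric mean of odds. The 0.99 stand-in for Yudkowsky's "~1" is an assumption made here for arithmetic purposes.

```python
import math

# p(Doom) values copied from the table above.
estimates = {
    "Toby Ord": 0.10,
    "Joe Carlsmith": 0.05,
    "Katja Grace": 0.19,
    "Nate Soares": 0.77,
    "Eliezer Yudkowsky": 0.99,  # "~1" in the table; 0.99 is an assumed stand-in
}

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def geometric_mean_of_odds(values):
    # Convert each probability to odds, average the log-odds,
    # then convert the result back to a probability.
    log_odds = [math.log(p / (1 - p)) for p in values]
    mean_log_odds = sum(log_odds) / len(log_odds)
    odds = math.exp(mean_log_odds)
    return odds / (1 + odds)

probs = list(estimates.values())
print(f"median p(Doom): {median(probs):.2f}")
print(f"geometric mean of odds: {geometric_mean_of_odds(probs):.2f}")
```

The geometric mean of odds is less sensitive than the arithmetic mean to estimates near 0 or 1, which is why it is sometimes preferred when pooling probability judgments; the choice of pooling method here is illustrative, not one endorsed by the page.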

Surveys
