Quantitative Estimates of AI Risk

This page is an early draft. It is very incomplete and may contain errors.

Some people working in AI safety have published quantitative estimates of how likely they think it is that AI will pose an existential threat.

Background

Many thinkers believe that advanced artificial intelligence (AI) poses a large threat to humanity's long-term survival or flourishing. Here we review their quantitative estimates.

For quotes from specific prominent people working on AI, see this page.

This page draws heavily from this database made by Michael Aird at Convergence Analysis.

Quantitative Estimates

Individuals


Estimator | Date | What is Estimated? | p(Doom) | Source
Toby Ord | 2020 | Existential catastrophe by 2120 as a result of unaligned AI | 0.1 | The Precipice
Joe Carlsmith | 2021 | Existential catastrophe by 2070 from advanced, planning, strategic AI | 0.05 | Is Power-Seeking AI an Existential Risk?
Katja Grace | 2023 | Bad future because AI agents with bad goals control cognitive labor | 0.19 | Will AI end everything? A guide to guessing
Nate Soares | 2021 | Existential catastrophe by 2070 from advanced, planning, strategic AI | 0.77 | Comments on Carlsmith's "Is power-seeking AI an existential risk?"
Eliezer Yudkowsky | 2022 | AGI "killing literally everyone" | ~1 | AGI Ruin: A List of Lethalities
Rohin Shah | 2019 | Things with AI do not go well, without additional intervention by us doing safety research | 0.1 | Conversation with Rohin Shah
Paul Christiano | 2019 | How much worse the future is in expectation by virtue of our failure to align AI | 0.1 | Conversation with Paul Christiano
Adam Gleave | 2019 | Chance that AI does cause a significant risk of harm, without intervention from AI safety efforts | 0.6 - 0.7 | Conversation with Adam Gleave
Adam Gleave | 2019 | Chance that AI does cause a significant risk of harm, with median AI safety efforts | 0.3 - 0.4 | Conversation with Adam Gleave
Adam Gleave | 2019 | Chance that AI does cause a significant risk of harm, with best case AI safety efforts | 0.1 - 0.2 | Conversation with Adam Gleave
Rohin Shah | 2020 | Probability of AI-induced existential risk | 0.05 | AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah
Buck Shlegeris | 2020 | Probability of AI-induced existential risk | 0.5 | AI Alignment Podcast: An Overview of Technical AI Alignment in 2018 and 2019 with Buck Shlegeris and Rohin Shah
James Fodor | 2020 | Unaligned AI usurps and establishes permanent dominance over humanity | 0.0005 | Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics
Buck Shlegeris | 2023 | Likelihood of AI coup | 0.25 | The current alignment plan, and how we might improve it
Stuart Armstrong | 2014 | Probability of humanity's non-survival in the context of artificial superintelligence | 0.33 - 0.5 | The future is going to be wonderful if we don't get whacked
Stuart Armstrong | 2020 | Whether AGI could threaten humanity's survival or permanently curtail its potential | 0.05 - 0.3 | Is AI an existential threat? We don't know, and we should work on it
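These estimates target different outcomes, timeframes, and conditions, so they are not directly comparable. As a rough illustration of the spread, the sketch below (not part of the source material) computes simple summary statistics from the values in the table; it treats "~1" as 1.0, uses midpoints for range estimates, and keeps only one of Adam Gleave's three conditional estimates. These simplifying choices are assumptions made for illustration only.

# Illustrative sketch: summarizing the point estimates listed in the table above.
# Assumptions (not from the source): ranges are reduced to midpoints, "~1" is
# treated as 1.0, and only Gleave's "without intervention" estimate is kept.
from statistics import mean, median

estimates = {
    "Toby Ord (2020)": 0.1,
    "Joe Carlsmith (2021)": 0.05,
    "Katja Grace (2023)": 0.19,
    "Nate Soares (2021)": 0.77,
    "Eliezer Yudkowsky (2022)": 1.0,       # "~1"
    "Rohin Shah (2019)": 0.1,
    "Paul Christiano (2019)": 0.1,
    "Adam Gleave (2019, no safety effort)": 0.65,   # midpoint of 0.6 - 0.7
    "Rohin Shah (2020)": 0.05,
    "Buck Shlegeris (2020)": 0.5,
    "James Fodor (2020)": 0.0005,
    "Buck Shlegeris (2023)": 0.25,
    "Stuart Armstrong (2014)": 0.415,      # midpoint of 0.33 - 0.5
    "Stuart Armstrong (2020)": 0.175,      # midpoint of 0.05 - 0.3
}

values = list(estimates.values())
print(f"n      = {len(values)}")
print(f"median = {median(values):.3f}")
print(f"mean   = {mean(values):.3f}")
print(f"range  = {min(values)} to {max(values)}")

Because the underlying questions differ, such summary statistics should be read as a loose indication of the spread of opinion rather than as a meaningful aggregate probability.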

Surveys
