Surveys of experts on levels of AI risk
Published 9 May 2023; last updated 23 May 2023
This page is being updated and may be of low quality.
We know of six surveys of AI experts and two surveys of AI safety/governance experts on risks from advanced AI.
Surveys of AI experts
2016 Expert Survey on Progress in AI
(Main article: 2016 Expert Survey on Progress in AI)
Paper: When Will AI Exceed Human Performance? Evidence from AI Experts (Grace et al.; survey conducted in 2016, published in 2018)
“Say we have 'high-level machine intelligence' when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:”
“Extremely good (e.g. rapid growth in human flourishing)”: median 20%
“On balance good”: median 25%
“More or less neutral”: median 20%
“On balance bad”: median 10%
“Extremely bad (e.g. human extinction)”: median 5%
Zhang et al. 2019
2022 Expert Survey on Progress in AI
(Main article: 2022 Expert Survey on Progress in AI)
Extinction
“What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?”
“What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”
Long-run impact of high-level machine intelligence
“Say we have 'high-level machine intelligence' when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:”
“Extremely good (e.g. rapid growth in human flourishing)”: median 10%; mean 24%
“On balance good”: median 20%; mean 26%
“More or less neutral”: median 15%; mean 18%
“On balance bad”: median 10%; mean 17%
“Extremely bad (e.g. human extinction)”: median 5%; mean 14%
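These figures are per-category medians and means across respondents, each of whom allocated 100% across the five outcomes. Medians taken category by category need not sum to 100% (here they sum to 60%), whereas means do, up to rounding. The sketch below uses made-up respondent allocations, not the survey data, purely to show how such per-category summaries are computed.

# Minimal sketch with hypothetical data (not the survey's data or code):
# each respondent allocates 100% across the five outcome categories, and we
# report the per-category median and mean across respondents.
import statistics

CATEGORIES = [
    "Extremely good", "On balance good", "More or less neutral",
    "On balance bad", "Extremely bad",
]

# Hypothetical respondents; each row sums to 100.
responses = [
    [40, 30, 20, 5, 5],
    [10, 20, 30, 20, 20],
    [5, 15, 10, 30, 40],
    [25, 40, 20, 10, 5],
]

for i, category in enumerate(CATEGORIES):
    values = [row[i] for row in responses]
    print(f"{category}: median {statistics.median(values):.1f}%, "
          f"mean {statistics.mean(values):.1f}%")

# Per-category means always sum to 100% when every respondent's allocation does;
# per-category medians generally do not.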
“Stuart Russell's argument”
AI safety research
Population: authors of papers at ICML or NeurIPS 2021
The survey was sent to “approximately 4271” people and received 738 responses.
Michael et al. 2022
Generation Lab 2023
2023 Expert Survey on Progress in AI
Not currently included on this list
-
Ezra Karger, Philip Tetlock, et al.'s “Hybrid Forecasting-Persuasion Tournament” (2022; results to be released around 1 June 2023). “The median AI expert gave a 3.9% chance to an existential catastrophe (where fewer than 5,000 humans survive) owing to AI by 2100” (The Economist). We will know more when the report is out. We are tentatively concerned about population quality and sampling bias: in particular, Zach Stein-Perlman was invited to participate as an AI expert in May 2022, and he was not an AI expert.
Surveys of AI safety/governance experts
-
Existential risk (substantive clarifying notes were included with the questions; see link)
“How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?”
“How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended?”
Population: “people working on long-term AI risk”
“I sent the survey out to two groups directly: MIRI's research team, and people who recently left OpenAI (mostly people suggested by Beth Barnes of OpenAI). I sent it to five other groups through org representatives (who I asked to send it to everyone at the org 'who researches long-term AI topics, or who has done a lot of past work on such topics'): OpenAI, the Future of Humanity Institute (FHI), DeepMind, the Center for Human-Compatible AI (CHAI), and Open Philanthropy.”
The survey was sent to ~117 people and received 44 responses.
-
“The survey aimed to identify which AI existential risk scenarios . . . researchers find most likely,” not to estimate the probability of risks.
Population: “prominent AI safety and governance researchers”
“We sent the survey to 135 researchers at leading AI safety/governance research organisations (including AI Impacts, CHAI, CLR, CSER, CSET, FHI, FLI, GCRI, MILA, MIRI, Open Philanthropy and PAI) and a number of independent researchers. We received 75 responses, a response rate of 56%.”
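As a quick check on these figures, the response rates implied by the counts quoted on this page can be recomputed directly. The labels below are just shorthand for the entries above, and denominators described as approximate in the source (~4,271 and ~117) are treated as exact:

# Sketch recomputing response rates from the counts quoted on this page.
surveys = {
    "2022 Expert Survey on Progress in AI": (738, 4271),         # "approximately 4271" invited
    "Survey of people working on long-term AI risk": (44, 117),  # ~117 invited
    "Survey of AI safety/governance researchers": (75, 135),
}

for name, (n_responses, n_invited) in surveys.items():
    print(f"{name}: {n_responses}/{n_invited} = {n_responses / n_invited:.0%}")
# Prints roughly 17%, 38%, and 56% respectively; the last matches the quoted 56% response rate.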
Not currently included on this list
Other