Surveys of experts on levels of AI Risk

Published 9 May 2023; last updated 23 May 2023

This page is being updated, and may be low quality.

We know of six surveys of AI experts and two surveys of AI safety/governance experts on risks from advanced AI.

Surveys of AI experts

2016 Expert Survey on Progress in AI

(Main article: 2016 Expert Survey on Progress in AI)

Paper: When Will AI Exceed Human Performance? Evidence from AI Experts (Grace et al. 2016, published 2018)

  • “Say we have 'high-level machine intelligence' when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:”
    • “Extremely good (e.g. rapid growth in human flourishing)”: median 20%
    • “On balance good”: median 25%
    • “More or less neutral”: median 20%
    • “On balance bad”: median 10%
    • “Extremely bad (e.g. human extinction)”: median 5%
      • 40% of responses had at least 10% on “extremely bad”
  • “Stuart Russell's argument”
    • Respondents were presented with an excerpt from a piece by Stuart Russell, then asked “Do you think this argument points at an important problem?”
      • 11%: “No, not a real problem”
      • 19%: “No, not an important problem”
      • 31%: “Yes, a moderately important problem”
      • 34%: “Yes, an important problem”
      • 5%: “Yes, among the most important problems in the field”
  • AI safety research
    • Respondents were presented with a definition of “AI safety research,” then asked “How much should society prioritize AI safety research, relative to how much it is currently prioritized?”
      • 5% “much less”; 8% “less”; 38% “about the same as it is now”; 35% “more”; 14% “much more”1)
  • Population: authors of papers at ICML or NeurIPS 2015
    • The survey was sent to 1634 people and received 352 responses.

Zhang et al. 2019

  • Long-run impact of high-level machine intelligence
    • “Our 2019 survey respondents appeared optimistic about how advances in AI/ML will impact humanity. They predicted that HLMI will be net positive for humanity, with the expected value between 'on balance good' and neutral. The median AI/ML researcher ascribed a probability of 20% that the long-run impact of HLMI on humanity would be 'extremely good (e.g., rapid growth in human flourishing)', 27% that it would be 'on balance good', 16% that it would be 'more or less neutral', and 10% that it would be 'on balance bad'. The median respondent placed a 2% probability on HLMI being [] 'extremely bad (e.g., human extinction)'.”
  • Population: authors of papers at ICML or NeurIPS 2018
    • The survey was sent to 2652 people and received 524 responses.

2022 Expert Survey on Progress in AI

(Main article: 2022 Expert Survey on Progress in AI)

  • Extinction
    • “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?”
      • Median 5%; 44% of respondents gave at least 10%
    • “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”
      • Median 10%; 56% of respondents gave at least 10%
  • Long-run impact of high-level machine intelligence
    • “Say we have 'high-level machine intelligence' when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption. Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:”
    • “Extremely good (e.g. rapid growth in human flourishing)”: median 10%; mean 24%
    • “On balance good”: median 20%; mean 26%
    • “More or less neutral”: median 15%; mean 18%
    • “On balance bad”: median 10%; mean 17%
    • “Extremely bad (e.g. human extinction)”: median 5%; mean 14%
      • 48% of responses had at least 10% on “extremely bad” (see the sketch below for how such per-category summaries relate to individual responses)
  • “Stuart Russell's argument”
    • Respondents were presented with an excerpt from a piece by Stuart Russell, then asked “Do you think this argument points at an important problem?”
      • 4%: “No, not a real problem”
      • 14%: “No, not an important problem”
      • 24%: “Yes, a moderately important problem”
      • 37%: “Yes, an important problem”
      • 21%: “Yes, among the most important problems in the field”
  • AI safety research
    • Respondents were presented with a definition of “AI safety research,” then asked “How much should society prioritize AI safety research, relative to how much it is currently prioritized?”
      • 2% “much less”; 9% “less”; 20% “about the same as it is now”; 35% “more”; 33% “much more”
  • Population: authors of papers at ICML or NeurIPS 2021
    • The survey was sent to “approximately 4271” people and received 738 responses.
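
The per-category figures reported for the 2016, 2019, and 2022 surveys above are column-wise summaries of individual responses, each of which allocates 100% across the five outcomes. The minimal Python sketch below, using made-up allocations rather than any actual survey data, illustrates why the per-category medians need not sum to 100%, why a mean can sit well above the median when a minority of respondents put heavy weight on an outcome, and how a figure like “48% of responses had at least 10% on ‘extremely bad’” is computed.

  # Illustrative only: hypothetical allocations, not actual survey responses.
  import statistics

  categories = ["extremely good", "on balance good", "more or less neutral",
                "on balance bad", "extremely bad"]

  # Each hypothetical respondent splits 100% across the five outcomes.
  responses = [
      [40, 30, 20, 5, 5],
      [10, 25, 40, 20, 5],
      [5, 20, 15, 20, 40],   # one pessimistic respondent pulls the mean up
      [30, 40, 20, 8, 2],
      [20, 25, 20, 25, 10],
  ]

  # Per-category median and mean are computed column by column, so the
  # medians need not sum to 100% even though every individual row does.
  for i, name in enumerate(categories):
      column = [r[i] for r in responses]
      print(f"{name}: median {statistics.median(column)}%, mean {statistics.mean(column)}%")

  # Share of respondents putting at least 10% on "extremely bad" (the kind
  # of figure reported as "48% of responses had at least 10%").
  at_least_10 = sum(r[-1] >= 10 for r in responses) / len(responses)
  print(f"share with at least 10% on 'extremely bad': {at_least_10:.0%}")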

Michael et al. 2022

  • Nuclear-level catastrophe
    • “AI decisions could cause nuclear-level catastrophe. It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.”
      • 36% agree; 64% disagree
  • Population: “researchers who publish at computational linguistics conferences.” See pp. 3–4 for details.
    • “We compute that 6323 people [published at least two papers at computational linguistics conferences] during the survey period according to publication data in the ACL Anthology, meaning we have survey responses from about 5% of the total.”

Generation Lab 2023

Expert Survey on Progress in AI 2023

Not currently included on this list

  • The informal Alexander Kruel interviews from 2011–2012.
  • Ezra Karger and Philip Tetlock et al.'s “Hybrid Forecasting-Persuasion Tournament” (2022, results to be released around 1 June 2023). “The median AI expert gave a 3.9% chance to an existential catastrophe (where fewer than 5,000 humans survive) owing to AI by 2100” (The Economist). We will know more when the report is out. We are tentatively concerned about population quality and sampling bias. In particular, Zach Stein-Perlman was invited to participate as an AI expert in May 2022; he was not an AI expert.

Surveys of AI safety/governance experts

  • "Existential risk from AI" survey results (Bensinger 2021, informal)
    • Existential risk (substantive clarifying notes were included with the questions; see link)
      • “How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?”
        • Median 20%; mean 30%
      • “How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended?”
        • Median 30%; mean 40%
    • Population: “people working on long-term AI risk”
      • “I sent the survey out to two groups directly: MIRI's research team, and people who recently left OpenAI (mostly people suggested by Beth Barnes of OpenAI). I sent it to five other groups through org representatives (who I asked to send it to everyone at the org 'who researches long-term AI topics, or who has done a lot of past work on such topics'): OpenAI, the Future of Humanity Institute (FHI), DeepMind, the Center for Human-Compatible AI (CHAI), and Open Philanthropy.”
      • The survey was sent to ~117 people and received 44 responses.
  • Survey on AI existential risk scenarios (Clarke et al. 2020, published 2021)
    • “The survey aimed to identify which AI existential risk scenarios . . . researchers find most likely,” not to estimate the probability of risks.
    • Population: “prominent AI safety and governance researchers”
      • “We sent the survey to 135 researchers at leading AI safety/governance research organisations (including AI Impacts, CHAI, CLR, CSER, CSET, FHI, FLI, GCRI, MILA, MIRI, Open Philanthropy and PAI) and a number of independent researchers. We received 75 responses, a response rate of 56%.”


Other

For public surveys, see Surveys of public opinion on AI.

Author: Zach Stein-Perlman

1)
Source: AI Impacts. This contradicts JAIR, which says 5% “much less”; 6% “less”; 41% “about the same as it is now”; 35% “more”; 12% “much more”.