AI Risk Surveys

Published 9 May 2023

We know of four surveys of AI experts and two surveys of AI safety/governance experts on risks from advanced AI.

Details

AI experts

This section is in progress.

Not currently included on this list

  • The informal Alexander Kruel interviews from 2011–2012.
  • Ezra Karger, Philip Tetlock, et al.'s “Hybrid Forecasting-Persuasion Tournament” (2022, results to be released around 1 June 2023). “The median AI expert gave a 3.9% chance to an existential catastrophe (where fewer than 5,000 humans survive) owing to AI by 2100” (The Economist). We will know more when the report is out. We are tentatively concerned about population quality and sampling bias. In particular, Zach Stein-Perlman was invited to participate as an AI expert in May 2022; he was not an AI expert.

AI safety/governance experts

  • "Existential risk from AI" survey results (Bensinger 2021, informal)
    • Existential risk (substantive clarifying notes were included with the questions; see link)
      • “How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of humanity not doing enough technical AI safety research?”
        • Median 20%; mean 30%
      • “How likely do you think it is that the overall value of the future will be drastically less than it could have been, as a result of AI systems not doing/optimizing what the people deploying them wanted/intended?”
        • Median 30%; mean 40%
    • Population: “people working on long-term AI risk”
      • “I sent the survey out to two groups directly: MIRI's research team, and people who recently left OpenAI (mostly people suggested by Beth Barnes of OpenAI). I sent it to five other groups through org representatives (who I asked to send it to everyone at the org 'who researches long-term AI topics, or who has done a lot of past work on such topics'): OpenAI, the Future of Humanity Institute (FHI), DeepMind, the Center for Human-Compatible AI (CHAI), and Open Philanthropy.”
    • The survey was sent to ~117 people and received 44 responses.
  • Survey on AI existential risk scenarios (Clarke et al. 2020, published 2021)
    • “The survey aimed to identify which AI existential risk scenarios . . . researchers find most likely,” not to estimate the probability of risks.
    • Population: “prominent AI safety and governance researchers”
      • “We sent the survey to 135 researchers at leading AI safety/governance research organisations (including AI Impacts, CHAI, CLR, CSER, CSET, FHI, FLI, GCRI, MILA, MIRI, Open Philanthropy and PAI) and a number of independent researchers. We received 75 responses, a response rate of 56%.”

Other

For public surveys, see Surveys of public opinion on AI.

Author: Zach Stein-Perlman
