====== 2022 Expert Survey on Progress in AI ======
  
// Published 04 August, 2022; last updated 26 May, 2023 //
  
  
  
</HTML>
  
==== Data cleaning ====

Edits were made to the raw data before analysis in the hope of preserving its intended meaning, including but not limited to:
  * Text such as '%' was removed from numerical answers
  * Some answers that were logically impossible in a way that implied misunderstanding of the question were excluded (e.g. when asked for the probability of an event happening by ascending numbers of years, the probabilities should not be descending)
  * Some answers appeared designed to add to 100% in cases where that suggested a misunderstanding of the question. If a person sometimes gave logically impossible answers suggesting they had misunderstood the question, and sometimes gave answers that were possible but also added to 100%, suggesting the same misunderstanding, we excluded those answers as well.
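As an illustrative sketch only (not the survey's actual analysis code, and with hypothetical function and variable names), the first two rules above might look like this in Python:

```python
def parse_numeric(raw):
    """Strip stray text such as '%' from a numerical answer."""
    return float(raw.strip().rstrip("%").strip())

def is_logically_possible(probs):
    """Probabilities of an event happening by ascending numbers of
    years must not descend; a descending sequence implies the
    question was misunderstood, so the answer is excluded."""
    return all(a <= b for a, b in zip(probs, probs[1:]))

raw_answers = ["10%", "40%", "25%"]            # e.g. probabilities at 10, 20, 40 years
probs = [parse_numeric(a) for a in raw_answers]
keep = is_logically_possible(probs)            # False: 40% -> 25% descends
```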
  
===== Definitions =====
  
  
==== High-level machine intelligence (HLMI) timelines ====
  
=== Basic HLMI timelines ===
  
<HTML>
</HTML>
  
=== HLMI timelines via automation of labor ===

Participants were either asked for the probability that a given occupation would be fully automated by a given year, or for the year by which a given probability of full automation would obtain. We have not yet used the methodology in the section above to aggregate the results of these two question framings into a single prediction. Below are the results for the fixed-year version of the question.

{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:automationlikelihood.png|}}
  
==== Impacts of HLMI ====
</ul>
</HTML>
{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:howbad-exploded.png?|}}
  
**Full distribution of responses**
</HTML>
  
{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:intelligenceexplosionhbar.png|}}

==== Causes of AI progress ====

Participants were asked about the sensitivity of progress in AI capabilities to various changes in inputs.

== 50% less research effort ==

"Imagine that over the past decade, only half as much researcher effort had gone into AI research. For instance, if there were actually 1,000 researchers, imagine that there had been only 500 researchers (of the same quality). How much less progress in AI capabilities would you expect to have seen?"

The median response was 25%.

== Less cheap hardware ==

"Over the last n years the cost of computing hardware has fallen by a factor of 20. Imagine instead that the cost of computing hardware had fallen by only a factor of 5 over that time (around half as far on a log scale). How much less progress in AI capabilities would you expect to have seen?"

The median response was 60%.

== 50% less work on training sets ==

"Imagine that over the past decade, there had only been half as much effort put into increasing the size and availability of training datasets. For instance, perhaps there are only half as many datasets, or perhaps existing datasets are substantially smaller or lower quality. How much less progress in AI capabilities would you expect to have seen?"

The median response was 50%.

== 50% less funding ==

"Imagine that over the past decade, AI research had half as much funding (in both academic and industry labs). For instance, if the average lab had a budget of \$20 million each year, suppose their budget had only been \$10 million each year. How much less progress in AI capabilities would you expect to have seen?"

The median response was 35%.

== 50% less algorithmic progress ==

"Imagine that over the past decade, there had been half as much progress in AI algorithms. You might imagine this as conceptual insights being half as frequent. How much less progress in AI capabilities would you expect to have seen?"

The median response was 50%.

{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:aiprogress.png|}}
  
==== Existential risk ====
</HTML>
  
{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:how_much_should_society_prioritize_ai_safety_research_relative_to_how_much_it_is_currently_prioritized_1_.png?600|}}
  
=== Stuart Russell’s problem ===
</HTML>
  
===== Notable citations of 2022 Expert Survey on Progress in AI =====

Places where the [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 ESPAI]] has been cited:

  - Holt, Lester. 2023. "[[https://youtu.be/qRLrE2tkr2Y|AI ‘race to recklessness’ could have dire consequences, tech experts warn in new interview]]" //NBC Nightly News//
  - Feldman, Noah. 2023. "[[https://www.bloomberg.com/opinion/articles/2023-04-02/regulating-ai-might-require-a-new-federal-agency|Regulating AI Will Be Essential. And Complicated.]]" //Bloomberg//
  - Wallace-Wells, David. 2023. "[[https://www.nytimes.com/2023/03/27/opinion/ai-chatgpt-chatbots.html|A.I. Is Being Built by People Who Think It Might Destroy Us]]" //The New York Times//
  - Roser, Max. 2023. "[[https://ourworldindata.org/ai-timelines|AI timelines: What do experts in artificial intelligence expect for the future?]]" //Our World in Data//
  - Klein, Ezra. 2023. "[[https://www.nytimes.com/2023/03/21/podcasts/ezra-klein-podcast-transcript-kelsey-piper.html|Transcript: Ezra Klein Interviews Kelsey Piper]]" //The New York Times//
  - Klein, Ezra. 2023. "[[https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html|This Changes Everything]]" //The New York Times//
  - Samuel, Sigal. 2023. "[[https://www.vox.com/the-highlight/23621198/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology|The case for slowing down AI]]" //Vox//
  - Lopez, German. 2023. "[[https://www.nytimes.com/2023/04/21/briefing/ai-chatgpt.html|Using A.I. in Everyday Life]]" //The New York Times//
  - 2023. "[[https://www.economist.com/science-and-technology/2023/04/19/how-generative-models-could-go-wrong|How generative models could go wrong]]" //The Economist//
  - Harari, Yuval, et al. 2023. "[[https://www.nytimes.com/2023/03/24/opinion/yuval-harari-ai-chatgpt.html|You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills]]" //The New York Times//
  - Hammond, Samuel. 2023. "[[https://www.politico.com/news/magazine/2023/05/08/manhattan-project-for-ai-safety-00095779|We Need a Manhattan Project for AI Safety]]" //Politico//
  - Nay, John. 2022. "[[https://law.stanford.edu/2022/09/25/aligning-ai-with-humans-by-leveraging-law-as-data/|Aligning AI with Humans by Leveraging Law as Data]]" //Center for Legal Informatics, Stanford University//
  
===== Contributions =====

<HTML>
<p>The survey was run by Katja Grace and Ben Weinstein-Raun. Data analysis was done by Zach Stein-Perlman, Ben Weinstein-Raun, and John Salvatier. This page was written by Zach Stein-Perlman and Katja Grace.</p>
</HTML>
  

<HTML>
<p>Katja Grace, Zach Stein-Perlman, Benjamin Weinstein-Raun, and John Salvatier, “2022 Expert Survey on Progress in AI.” <em>AI Impacts</em>, 3 Aug. 2022. <a href="/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai">https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/</a>.</p>
</HTML>
  