====== 2023 Expert Survey on Progress in AI ======
  
// Published 17 August, 2023. Last updated 29 January, 2024. //
  
The 2023 Expert Survey on Progress in AI is a survey of 2,778 AI researchers that AI Impacts ran in October 2023.
The 2023 Expert Survey on Progress in AI (2023 ESPAI) is a rerun of the [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 ESPAI]] and the [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2016_expert_survey_on_progress_in_ai|2016 ESPAI]], previous surveys run by AI Impacts in collaboration with others. Almost all of the questions in the 2023 ESPAI are identical to those in both the 2022 ESPAI and 2016 ESPAI.
  
A preprint about the 2023 ESPAI is available [[https://arxiv.org/abs/2401.02843|here]].
  
==== Survey methods ====
=== Changes from 2016 and 2022 ESPAI surveys ===
  
These are some notable differences from the [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 Expert Survey on Progress in AI]]:
  * We recruited participants from twice as many conferences as in 2022.
  * We made some changes to the order and flow of the questions.
  
==== Results ====
The full dataset of anonymized responses to the survey is available [[https://docs.google.com/spreadsheets/d/1aOydfhZHuVwU_fwTgE0_O_-8p-uMrRDYV5R5QnwOMGI/edit?usp=sharing|here]].

===Timing of human-level performance===
We asked about the timing of human-level performance by asking some participants about how soon they expect "high-level machine intelligence" (HLMI) and asking others about how soon they expect "full automation of labor" (FAOL). As in previous surveys, participants who were asked about FAOL tended to give significantly longer timelines than those asked about HLMI.
  
We aggregated the 1714 responses to this question by fitting each response to a gamma CDF and finding the mean curve of those CDFs. The resulting aggregate forecast gives a 50% chance of HLMI by 2047, down thirteen years from 2060 in the 2022 ESPAI.

[{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:01958ae7-cb8c-4340-a2e7-5b54c0ab5e7c_2076x1594.jpg?600|The aggregate forecast in 2023 predicted that HLMI would arrive earlier than the aggregate forecast predicted in the 2022 survey.}}]
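
The sketch below illustrates this kind of aggregation (the same method applies to the FAOL responses in the next subsection): fit a gamma CDF to each respondent's three year-probability pairs, average the fitted curves, and read off the year at which the mean curve reaches 50%. This is a minimal sketch under stated assumptions, not the survey's actual code; the initial guesses, parameter bounds, and least-squares fitting are illustrative choices.

<code python>
# Minimal sketch of the aggregation described above (illustrative assumptions,
# not the survey's actual code): fit a gamma CDF to each respondent's
# (years-from-now, probability) answers, average the fitted CDFs, and find
# the year at which the mean curve crosses 50%.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

SURVEY_YEAR = 2023

def fit_gamma_cdf(years, probs):
    """Fit gamma CDF parameters (shape, scale) to one respondent's answers."""
    f = lambda x, shape, scale: gamma.cdf(x, shape, scale=scale)
    (shape, scale), _ = curve_fit(f, years, probs, p0=[2.0, 20.0],
                                  bounds=(1e-6, np.inf))
    return shape, scale

def aggregate_forecast(responses, horizon=np.arange(0, 200)):
    """Mean of the fitted CDFs, evaluated on a grid of years from now."""
    cdfs = [gamma.cdf(horizon, shape, scale=scale)
            for shape, scale in (fit_gamma_cdf(y, p) for y, p in responses)]
    return horizon, np.mean(cdfs, axis=0)

# Two hypothetical respondents, each giving probabilities at 10, 20, and 40 years.
responses = [([10, 20, 40], [0.10, 0.50, 0.90]),
             ([10, 20, 40], [0.05, 0.30, 0.70])]
years, mean_cdf = aggregate_forecast(responses)
median_year = SURVEY_YEAR + years[np.searchsorted(mean_cdf, 0.5)]
print(f"Aggregate 50% year: {median_year}")
</code>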
  
==Full automation of labor (FAOL)==
  
The 774 responses to this question were used to create an aggregate forecast which gives a 50% chance of FAOL by 2116, down 48 years from 2164 in the 2022 ESPAI.

[{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:31afa88c-324e-4b6d-91ed-97e2715e5b95_2076x1594.png?600|The aggregate forecast in 2023 predicted that FAOL would arrive earlier than the aggregate forecast predicted in the 2022 survey.}}]
  
===Intelligence explosion===
===How concerning are 11 future AI-related scenarios?===
1345 participants rated their level of concern for 11 AI-related scenarios over the next thirty years. As measured by the percentage of respondents who thought a scenario constituted either a “substantial” or “extreme” concern, the scenarios of greatest concern were: spread of false information e.g. deepfakes (86%), manipulation of large-scale public opinion trends (79%), AI letting dangerous groups make powerful tools (e.g. engineered viruses) (73%), authoritarian rulers using AI to control their populations (73%), and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).

{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:82d27755-5476-4af5-ac32-a29fe0be5d41_1314x773.png|}}
  
===Overall impact of HLMI===
 | "On balance bad" | 15%     | 18%  | | "On balance bad" | 15%     | 18%  |
 | "Extremely bad (e.g. human extinction)" | 5%     | 9%  | | "Extremely bad (e.g. human extinction)" | 5%     | 9%  |

[{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:ce8dfb8d-83ec-453b-8dd2-3609565338b7_2544x1274.png|A random selection of 800 responses on the positivity or negativity of long-run impacts of HLMI on humanity. Each vertical bar represents one participant.}}]
  
===Preferred rate of progress===
  * 22.8% said "Somewhat faster"
  * 15.6% said "Much faster"

===How soon will 39 tasks be feasible for AI?===
Participants were asked about when each of 39 tasks would become "feasible" for AI, i.e. when “one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.” Each respondent was asked about four of the tasks, so each task received around 250 estimates.

As with the questions about timelines to human-level performance, participants were asked for three year-probability pairs, using either the fixed-years or fixed-probabilities framing. For most of the 32 tasks that also appeared in the 2022 survey, the aggregate forecasts in this survey predicted that they would become feasible earlier than the 2022 forecasts predicted.

[{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:21f4fd7b-b6b7-4bf7-bb2d-f33901e2ffdc_2310x3603.png?600|The aggregate forecasts in 2023 typically predicted that milestones would arrive earlier than was predicted in 2022. The year when the aggregate distribution gives a milestone a 50% chance of being met is represented by solid circles, open circles, and solid squares for tasks, occupations, and general human-level performance respectively.}}]
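
As a small illustration of the two framings mentioned above, both ultimately yield three year-probability points per respondent, which can then be fed into the same CDF-fitting aggregation sketched earlier. The specific anchor years and probabilities below are assumptions for illustration; the exact wording and anchors are given in the preprint.

<code python>
# Illustrative sketch of the two elicitation framings (anchor values are assumptions,
# not necessarily those used in the survey).

def from_fixed_years(p_10y, p_20y, p_40y):
    """Fixed-years framing: the respondent gives probabilities at fixed horizons."""
    return [(10, p_10y), (20, p_20y), (40, p_40y)]

def from_fixed_probabilities(y_10pct, y_50pct, y_90pct):
    """Fixed-probabilities framing: the respondent gives years for fixed probabilities."""
    return [(y_10pct, 0.10), (y_50pct, 0.50), (y_90pct, 0.90)]

# Two hypothetical answers expressed in the common (years, probability) format.
print(from_fixed_years(0.2, 0.5, 0.8))       # [(10, 0.2), (20, 0.5), (40, 0.8)]
print(from_fixed_probabilities(5, 15, 50))   # [(5, 0.1), (15, 0.5), (50, 0.9)]
</code>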
  
===The alignment problem===
  * 36% said "More"
  * 34% said "Much more"

[{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:a8e6a2c4-a549-4501-a242-9b3d2d3eddfa_2114x1280.jpg?600|Responses from the 2016, 2022, and 2023 surveys to "How much should society prioritize AI safety research, relative to how much it is currently prioritized?"}}]
  
===Human extinction===
 | "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?" | 10% | 19.4% | | "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?" | 10% | 19.4% |
 |"What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species within the next 100 years?"| 5% | 14.4% | |"What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species within the next 100 years?"| 5% | 14.4% |

{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:59b60985-66a9-4364-976b-5c6228eff6fa_2544x1274.png?600|}}

====Frequently asked questions====

===How does the seniority of the participants affect the results?===

One reasonable concern about an expert survey is that more senior experts are busier and therefore less likely to participate. We found that authors with over 1000 citations were 69% as likely to participate in the survey as the base rate among those we contacted. We also found that differences in seniority had little effect on opinions about the likelihood that HLMI would lead to impacts that were "extremely bad (e.g. human extinction)" (see table below).

^ Group ^ % who gave at least 5% odds to "extremely bad (e.g. human extinction)" impacts from HLMI ^
| All participants | 57.80% |
| Has 100+ citations | 62.30% |
| Has 1000+ citations | 59.00% |
| Has 10,000+ citations | 56.30% |
| Started PhD by 2018 | 58.80% |
| Started PhD by 2013 | 58.50% |
| Started PhD by 2003 | 54.70% |
| In current field 5+ years | 54.40% |
| In current field 10+ years | 51.40% |
| In current field 20+ years | 48.00% |
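
Below is a minimal sketch of how a subgroup breakdown like the one above could be recomputed from the public dataset linked in the Results section. The column names ("p_extremely_bad", "citations", "years_in_field") and the 0–100 probability scale are assumptions for illustration, not the dataset's actual schema.

<code python>
# Illustrative sketch with assumed column names; the real dataset's schema may differ.
import pandas as pd

df = pd.read_csv("espai_2023_responses.csv")  # hypothetical local export of the dataset

def share_at_least_5pct(sub: pd.DataFrame) -> float:
    """Fraction of a subgroup giving >= 5% probability to 'extremely bad' outcomes."""
    answered = sub["p_extremely_bad"].dropna()  # assumed column, on a 0-100 scale
    return (answered >= 5).mean()

groups = {
    "All participants": df,
    "Has 1000+ citations": df[df["citations"] >= 1000],
    "In current field 20+ years": df[df["years_in_field"] >= 20],
}
for name, sub in groups.items():
    print(f"{name}: {share_at_least_5pct(sub):.1%}")
</code>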
  
====Contributions====
The authors of the 2023 Expert Survey on Progress in AI are Katja Grace, Julia Fabienne Sandkühler, Harlan Stewart, Stephen Thomas, Ben Weinstein-Raun, and Jan Brauner.
  
Many thanks for help with this research to Rebecca Ward-Diorio, Jeffrey Heninger, John Salvatier, Nate Silver, Joseph Carlsmith, Justis Mills, Will Macaskill, Zach Stein-Perlman, Shakeel Hashim, Mike Levine, Lucius Caviola, Eli Rose, Max Tegmark, Jaan Tallinn, Shahar Avin, Daniel Filan, David Krueger, Nathan Young, Michelle Hutchinson, Arden Koehler, Nuño Sempere, Naomi Saphra, Soren Mindermann, Dan Hendrycks, Alex Tamkin, Vael Gates, Yonadav Shavit, James Aung, Jacob Hilton, Ryan Greenblatt, and Frederic Arnold.