====== 2023 Expert Survey on Progress in AI ======

//Published 17 August, 2023. Last updated 30 January, 2024.//
The 2023 Expert Survey on Progress in AI is a survey of 2,778 AI researchers that AI Impacts ran in October 2023.
The 2023 Expert Survey on Progress in AI (2023 ESPAI) is a rerun of the [[ai_timelines:

A preprint about the 2023 ESPAI is available [[https://arxiv.org/abs/2401.02843|here]].

==== Survey methods ====
=== Changes from 2016 and 2022 ESPAI surveys ===
These are some notable differences from the [[ai_timelines:
  * We recruited participants from twice as many conferences as in 2022.
  * We made some changes to the order and flow of the questions.
We aggregated the 1,714 responses to this question by fitting each response to a gamma CDF and finding the mean curve of those CDFs. The resulting aggregate forecast gives a 50% chance of HLMI by 2047, down thirteen years from 2060 in the 2022 ESPAI.
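The aggregation described above can be sketched in code. This is an illustrative reconstruction, not the survey authors' actual procedure: the loss function, optimizer, starting values, and time grid below are all assumptions.

```python
# Sketch: fit a gamma CDF to each respondent's (years, probability) answers,
# then average the fitted CDFs and read off the 50% crossing year.
# Fitting details (least squares, L-BFGS-B, yearly grid) are assumptions.
import numpy as np
from scipy import optimize, stats

def fit_gamma_cdf(years_from_now, probs):
    """Least-squares fit of a gamma CDF to one respondent's answers."""
    def loss(params):
        shape, scale = params
        return np.sum((stats.gamma.cdf(years_from_now, shape, scale=scale) - probs) ** 2)
    result = optimize.minimize(loss, x0=[2.0, 25.0],
                               bounds=[(1e-3, None), (1e-3, None)])
    return result.x

def aggregate_forecast(responses, horizon_years=300):
    """Mean of the fitted per-respondent CDFs on a yearly grid, plus the 50% year."""
    grid = np.arange(horizon_years)
    curves = []
    for years, probs in responses:
        shape, scale = fit_gamma_cdf(np.asarray(years, float), np.asarray(probs, float))
        curves.append(stats.gamma.cdf(grid, shape, scale=scale))
    mean_curve = np.mean(curves, axis=0)
    # First year (counted from the survey date) at which the aggregate
    # probability reaches 50%.
    year_50 = grid[np.searchsorted(mean_curve, 0.5)]
    return mean_curve, year_50
```

For example, a respondent who assigned 10%, 50%, and 90% probability to HLMI within 10, 25, and 60 years would contribute one fitted curve; averaging all such curves and finding where the mean crosses 0.5 yields the headline median year.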
==Full automation of labor (FAOL)==
The 774 responses to this question were used to create an aggregate forecast which gives a 50% chance of FAOL by 2116, down 48 years from 2164 in the 2022 ESPAI.
===Intelligence explosion===
===How concerning are 11 future AI-related scenarios?===
1,345 participants rated their level of concern for 11 AI-related scenarios over the next thirty years. As measured by the percentage of respondents who thought a scenario constituted either a “substantial” or “extreme” concern, the scenarios of most concern were: spread of false information, e.g. deepfakes (86%); manipulation of large-scale public opinion trends (79%); AI letting dangerous groups make powerful tools, e.g. engineered viruses (73%); authoritarian rulers using AI to control their populations (73%); and AI systems worsening economic inequality by disproportionately benefiting certain individuals (71%).
===Overall impact of HLMI===
| "On balance bad" | 15% | 18% | | | "On balance bad" | 15% | 18% | | ||
| " | | " | ||
===Preferred rate of progress===
  * 22.8% said "
  * 15.6% said "Much faster"
===How soon will 39 tasks be feasible for AI?===
Participants were asked about when each of 39 tasks would become "
would choose to.” Each respondent was asked about four of the tasks, so each task received around 250 estimates.

As with the questions about timelines to human performance,

[{{: |50% chance of being met is represented by solid circles, open circles, and solid squares for tasks, occupations, and general human-level performance respectively.}}]
===The alignment problem===
* 36% said " | * 36% said " | ||
* 34% said "Much more" | * 34% said "Much more" | ||
===Human extinction===
| "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?" | | "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?" | ||
|"What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species within the next 100 years?" | |"What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species within the next 100 years?" | ||
====Frequently asked questions====

===How does the seniority of the participants affect the results?===

One reasonable concern about an expert survey is that more senior experts are busier and therefore less likely to participate. We found that authors with over 1,000 citations were 69% as likely to participate in the survey as the base rate among those we contacted. We found that differences in seniority had little effect on opinions about the likelihood that HLMI would lead to impacts that were "

^Group^% who gave at least 5% odds to "^
|All participants|57.8%|
|Has 100+ citations|62.3%|
|Has 1000+ citations|59.0%|
|Has 10,000+ citations|56.3%|
|Started PhD by 2018|58.8%|
|Started PhD by 2013|58.5%|
|Started PhD by 2003|54.7%|
|In current field 5+ years|54.4%|
|In current field 10+ years|51.4%|
|In current field 20+ years|48.0%|
====Contributions====