Surveys of US public opinion on AI
Published 30 November 2022; last updated 9 January 2024
We are aware of 45 high-quality surveys of the American public's beliefs and attitudes on the future capabilities and outcomes of AI or policy responses. Methodologies, questions, and responses vary.
Details
List of surveys
-
Optimism: 11% AI will do more good than harm, 42% more harm than good.
Economy: 19% AI will help jobs and the economy, 72% will hurt.
Quality of life: 35% AI will help quality of life, 54% will hurt.
Existential threat: 44% worried, 55% not worried.
-
Safety: 41% AI is safe, 37% AI is unsafe.
Economy: 28% AI will help the economy, 36% AI will hurt the economy.
Support: 51% support AI research; 32% oppose AI research.
AI is humanity's greatest existential threat: 50% agree; 31% disagree.
Regulation: there should be national regulations on AI (71% agree, 14% disagree) and there should be international regulations on AI (67% agree, 16% disagree).
-
76% agree that “AI will fundamentally change the way people work and live in the next 10 years”; of those, 77% expect the change to be positive.
AI currently has a positive impact on life (79%) and will have one in the next 10 years (74%).
Increased use of AI “will eliminate more jobs than it creates” (73%).
Brookings (2018a) (online survey, 1535 responses)
Optimism: 12% very optimistic about AI, 29% somewhat optimistic, 27% not very optimistic.
Worry: 12% very worried about AI, 27% somewhat worried, 34% not very worried.
Jobs: 12% AI will create jobs, 13% no effect on jobs, 38% reduce jobs.
Life: 34% AI will make lives easier, 13% harder.
Threat: 32% AI is a threat to humans, 24% no threat to humans.
Brookings (2018b) (online survey, 2021 responses)
“The survey asked how likely robots are to take over most human activities within the next 30 years. Nineteen percent feel this was very likely, 33 percent believes this is somewhat likely, 23 percent feel it is not very likely, and 25 percent were not sure.”
-
Developing AI: 41% support; 22% oppose (after reading a short explanation).
Surveillance/privacy and cyberattacks are seen as likely to be problematic; autonomous weapons are also seen as important but less likely to be problematic; value alignment and critical AI systems failure are not seen as priorities.
Respondents trust university researchers and the US military to develop AI more than they trust the rest of the US government and technology companies (especially Meta).
Automation and AI will create more jobs than they will eliminate: 27% agree; 49% disagree.
“The median respondent predicts that there is a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as when machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task.”
Developing high-level machine intelligence: 31% support; 27% oppose.
Outcomes of high-level machine intelligence: 5% extremely good, 21% on balance good, 21% neutral, 22% on balance bad, 12% extremely bad (including human extinction), 18% don't know.
-
“Americans support AI overall, but think there will be unintended consequences” (see section 8).
“Majority of Americans think there are both risks and benefits of AI” (see sections 9-10).
“Americans are unsure about the likelihood that AI will benefit certain aspects of society” “… but Americans are more likely to think that AI will have negative impacts on themselves and society” (see section 13).
“Americans distrust Facebook, the White House, and Congress to keep society's best interest in mind during the development of AI, but trust university and industry scientists the most” (see section 15).
Among various “potential uses of AI,” respondents are most concerned about misinformation and automation (see section 16).
-
“Nearly half of adults (48%) believe the perceived positives of greater AI adoption in everyday life outweigh the perceived negatives, while 29% believe the opposite.”
Respondents are most concerned about privacy (74%), irresponsible use (72%), and jobs (71%), and least concerned about gender bias (39%) and racial bias (47%); additionally, many are concerned about AI “becoming uncontrollable” (67%).
Misuse and loss of privacy are seen as more likely risks, but majorities also believe that “AI will control too much of everyday life” (63%), “AI will become smarter than humans” (52%), and “humans won’t be able to control AI” (51%).
Respondents are more skeptical of “Increased economic prosperity” as a positive outcome of AI than the other eleven positive outcomes listed.
-
“The increased use of artificial intelligence computer programs in daily life makes them feel”: 18% excited, 37% concerned, 45% equally excited and concerned.
Mixed attitudes on technologies described as applications of AI, such as social media algorithms and gene editing.
MITRE/Harris (2022, published 2023) (online survey, 2050 responses)
AI should be regulated (82%).
AI is safe and secure (48%).
78% “concerned about AI being used for malicious intent.”
YouGov (Jan 27, 2023) (online survey, 1000 responses)
AI-based text generation: 13% good for society, 36% bad for society.
Jobs: 14% “advancements in AI will overall lead to there being more jobs”; 46% fewer.
-
Optimism: 9% AI will do more good than harm, 41% more harm than good.
Economy: 19% AI will help jobs and the economy, 73% will hurt.
Quality of life: 34% AI will help quality of life, 56% will hurt.
Existential threat: 55% worried, 44% not worried.
ChatGPT: 60% have heard of ChatGPT or similar systems.
Regulation: 55% favor “a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices,” 41% oppose.
-
-
“The government should take action to prevent the potential loss of jobs due to AI”: 64% agree, 19% disagree.
“Increased use of AI will lead to more income inequality and a more polarized society”: 50% agree, 26% disagree.
Public First (Mar 19, 2023) (online survey, 2052 responses) (note that Public First may have an agenda)
“Artificial Intelligence (AI) is developing faster than I expected”: 42% faster than expected, 6% slower, 35% about as quickly as expected.
32% “AI companies should only be allowed to train their models on text or images where they have explicit permission to do so from the original creator,” 19% “on text or images where the creator has not explicitly opted out of their work being used in this way,” 21% “on any text or images that are publicly available.”
Unemployment: 52% AI will increase unemployment, 11% decrease, 21% neither.
55% “governments should try to prevent human jobs from being taken over by AIs or robots,” 29% should not.
Given 7 potential dangers over the next 50 years, AI is 6th on worry (56% worried) and 7th on “risk that it could lead to a breakdown in human civilization” (28% think there is a real risk).
Given 7 potential risks from advanced AI, increasing unemployment is seen as the most important.
11% we should accelerate AI development, 33% slow, 39% continue around the same pace.
Potential policy: “Banning new research into AI”: 24% good idea, 48% bad idea.
Potential policy: “Increasing government funding of AI research”: 31% good idea, 41% bad idea.
Potential policy: “Creating a new government regulatory agency similar to the Food and Drug Administration (FDA) to regulate the use of new AI models”: 50% good idea, 23% bad idea.
Potential policy: “Introducing a new tax on the use of AI models”: 33% good idea, 35% bad idea.
Potential policy: “Increasing the use of AI in the school curriculum”: 31% good idea, 43% bad idea.
Risk that an advanced AI causes humanity to go extinct in the next hundred years: median 1% (given six options, of which 1% was third-lowest; see the sketch after this list for how a median is read off binned options).
For respondents who said less than 1% on the above, the most common reason was “human civilization is more likely to be destroyed by other factors (eg climate change, nuclear war)” (53%).
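Several entries on this page report a median taken over a fixed set of probability options (Public First's median 1% here; also GovAI 2018 and Rethink Priorities). As a minimal illustration of how such a median is read off, here is a short Python sketch; the option labels and shares are hypothetical placeholders, not Public First's actual options or results:

```python
# Reading a median off binned response options: walk the ordered answer
# choices until the cumulative share of respondents reaches 50%.
# The labels and shares below are HYPOTHETICAL illustration values,
# not Public First's actual six options or results.
options = [("0%", 0.20), ("0.5%", 0.15), ("1%", 0.25),
           ("5%", 0.20), ("10%", 0.12), ("50% or more", 0.08)]

cumulative = 0.0
for label, share in options:
    cumulative += share
    if cumulative >= 0.5:
        print(f"median option: {label}")  # prints "1%" for these shares
        break
```

A median computed this way depends on the options offered, which is why it matters where 1% sat among the six options.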
-
56% agree with a pro-slowing message, 35% agree with an anti-slowing message.
“Creating a dedicated federal agency to regulate AI”: 62% support, 27% oppose (after reading a neutral description).
66% agree with a pro-antitrust message, 23% agree with an anti-antitrust message.
“A law that requires companies to be transparent about the data they use to train their AI”: 79% support, 11% oppose (after reading a pro-transparency description).
YouGov (1, 2, 3) (Apr 3, 2023) (online survey, 20810 responses)
6% say AI is more intelligent than people, 57% AI is likely to eventually become smarter, 23% not likely.
69% support “a six-month pause on some kinds of AI development”; 13% oppose (but they were asked after reading a pro-pause message).
AI causing human extinction: 46% concerned, 40% not.
YouGov's AI and the End of Humanity (Apr 11, 2023) (online survey of citizens, 1000 responses) (a follow-up to the previous YouGov survey with some modified questions)
AI causing human extinction: 46% concerned, 47% not.
A six-month pause on some kinds of AI development: 58% support, 23% oppose.
-
Pause on AI research: 51% support, 25% oppose (after reading pro- and anti-pause messages).
Regulation: 70% yes, 21% no (“Do you think AI should be regulated by a federal agency (similarly to how the FDA regulates the approval of drugs and medical devices)?”).
Worry: “In your daily life, how much do you worry about the negative effects AI could have on your life or on society more broadly?” 1% “Nearly all the time,” 7% “A lot,” 21% “A fair amount,” 44% “Only a little bit,” 27% “Not at all.”
Extinction: “We estimate that only 9% of the population think extinction from AI to be moderately likely or more over the next 10 years. This increases to 22% for the next 50 years.” See link for details.
“How likely do you think it is that AI will eventually become more intelligent than people?” 10% “It already is more intelligent than people,” 15% “Extremely likely,” 19% “Highly likely,” 22% “Moderately likely,” 15% “Only slightly likely,” 15% “Not at all likely.”
49% AI will do more good than harm, vs 31% more harm than good.
Ipsos (Apr 24, 2023) (online survey, 1008 responses)
Concern about the impact of AI on jobs and society: 71% concerned, 23% not concerned.
Favorability: 39% favorable on AI, 43% unfavorable.
Trust in companies to develop AI “carefully and with the public's well-being in mind”: 2% a great deal, 21% somewhat, 36% a little, 39% not at all.
Bigger risk: 75% say “Unchecked development of AI,” 21% say “Government regulation slowing down the development of AI.”
Fox News (Apr 24, 2023) (phone survey of registered voters, 1004 responses)
38% AI is a good thing for society; 46% a bad thing.
“How important do you think it is for the federal government to regulate artificial intelligence technology?”: 35% very, 41% somewhat, 13% not very, 8% not at all.
“How much confidence do you have in the federal government’s ability to properly regulate artificial intelligence technology?”: 10% a great deal, 29% some, 37% not much, 22% none at all.
Ipsos (Apr 26, 2023) (online survey, 1120 responses)
Role government should have in the oversight of AI: 38% a major role, 49% minor, 13% no role.
Government actions: Guidelines that would require people to be notified when they are interacting with an AI system: 81% support, 10% oppose.
Government actions: Requiring companies to disclose information about their AI systems, such as data sources, training processes, and algorithmic decision-making methods: 77% support, 12% oppose.
Government actions: Requiring AI developers to obtain licenses or certifications: 74% support, 12% oppose.
Government actions: Establishment of a task force to study the ethical use of AI: 71% support, 16% oppose.
Government actions: Establishment of an oversight body specifically dedicated to AI: 69% support, 18% oppose.
Government actions: Regulating what data AI can use to train itself: 66% support, 18% oppose.
Government actions: New definitions of copyright that govern who “owns” works created by AI: 64% support, 16% oppose.
Government actions: Funding public education and awareness campaigns about AI: 62% support, 23% oppose.
Government actions: Funding research into AI and its uses: 62% support, 23% oppose.
Government actions: Providing tax breaks and other incentives for companies who use AI responsibly: 40% support, 41% oppose.
“The potential benefits of AI, such as increased efficiency and productivity, outweigh the potential job loss”: 43% agree, 42% disagree.
“AI will create new jobs and opportunities to make up for the jobs that are lost”: 39% agree, 39% disagree.
“The government should take action to prevent the potential loss of jobs due to AI”: 63% agree, 26% disagree.
-
-
“Just under seven in ten Americans say they are concerned about the increased use of artificial intelligence (AI) (68%). About two in three Americans say that AI will have unpredictable consequences that we ultimately won’t be able to control (67%), and half (52%) say AI is bad for humanity. Three in five Americans believe the uncontrollable effects of AI could be so harmful that it would risk humankind’s future (61%), a similar percentage say that companies replacing workers with AI should pay a penalty to offset the increased unemployment (62%).”
Quinnipiac (May 22, 2023) (phone survey, 1819 responses)
Fox News (May 22, 2023) (phone survey of registered voters, 1001 responses)
Concern: 25% extremely, 31% very, 34% not very, 8% not at all.
Common open-ended reactions to AI are “Afraid / Scared / Dangerous” (16%), “Do not like it / General negative / Bad idea” (11%), and “Concern / Hesitancy / Distrust / Caution” (8%).
“Thinking about the next few years, how much do you think artificial intelligence will change the way we live in the United States”: 43% a lot, 43% some, 9% not much, 3% not at all.
-
Regulation: 35% AI should be heavily regulated by government, 37% somewhat regulated, 8% not regulated.
AI-based text generation: 19% good for society, 34% bad for society (versus 13% and 36%, respectively, in January).
-
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”: 59% agree, 26% disagree (agree/disagree framing); 58% support, 22% oppose (support/oppose framing).
“Probability that the development of artificial intelligence will lead to the extinction of humanity by the year 2100”: median 15%.
Worry: 1% nearly all the time, 7% a lot, 24% a fair amount, 44% only a little bit, 24% not at all (vs 1%, 7%, 21%, 44%, and 27%, respectively, in April).
-
Effects on society: 29% positive, 35% negative, 25% neither.
12% AI will increase jobs available in the US, 54% decrease, 14% no effect.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”: 25% strongly agree, 30% somewhat agree, 12% somewhat disagree, 6% strongly disagree.
A six-month pause on some kinds of AI development: 30% strongly support, 26% somewhat support, 13% somewhat oppose, 7% strongly oppose.
Ipsos (Jun 18, 2023) (online survey, 1023 responses)
15% “trust the groups and companies developing AI systems to do so responsibly,” 83% don't trust.
27% “trust the federal government to regulate AI,” 72% don't trust.
-
70% AI will bring a new revolution in technology; 30% it will not make much difference.
74% AI can be dangerous; 26% fears are overblown.
Out of five possible risks of AI, respondents see spreading misinformation (79%) and creating massive fraud (75%) as the most real.
Out of five possible benefits of AI, respondents see robots that help people (69%) as the most real.
54% fearful, 46% optimistic (after reading about potential risks and benefits).
37% AI will add jobs, 63% destroy jobs.
Regulation: 79% we need more, 21% less.
AI Policy Institute / YouGov (1, 2) (Jul 21, 2023) (online survey of voters, 1001 responses) (note that the AI Policy Institute is a pro-caution advocacy organization)
21% excited, 62% concerned, 16% neutral.
A federal agency regulating the use of artificial intelligence: 56% support, 14% oppose.
Given 7 possible risks from AI, respondents are most concerned about “AI being used to hack your personal data” (81%).
82% “We should go slowly and deliberately,” 8% “We should speed up development” (after reading brief arguments).
60% “AI will undermine meaning in our lives”; 19% enhance meaning (after reading brief arguments).
62% “AI will make people dumber”; 17% smarter (after reading brief arguments).
“It would be a good thing if AI progress was stopped or significantly slowed”: 62% agree, 26% disagree.
“I feel afraid of artificial intelligence”: 55% agree, 39% disagree.
“Tech company executives can’t be trusted to self-regulate the AI industry”: 82% agree, 13% disagree.
“AI could accidentally cause a catastrophic event”: 28% extremely likely, 25% very, 33% somewhat, 7% not at all.
“Threat to the existence of the human race”: 24% extremely concerned, 19% very, 33% somewhat, 18% not at all.
77% the next 2 years of AI progress will be faster, 6% slower.
AGI: 54% within 5 years, 24% more than 5 years.
72% “We should slow down the development and deployment of artificial intelligence,” 12% “We should more quickly develop and deploy artificial intelligence.”
“A legally enforced pause on advanced artificial intelligence research”: 49% support, 25% oppose.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”: 70% agree, 11% disagree.
Policy proposal: “any advanced AI model should be required to demonstrate safety before they are released”: 65% support, 11% oppose (after reading brief arguments).
Policy proposal: “any organization producing advanced AI models must require a license, [that] all advanced models must be evaluated for safety, and [that] all models must be audited by independent experts”: 64% support, 13% oppose (after reading brief arguments).
Policy proposal: regulate supply chain for specialized chips: 56% support, 18% oppose (after reading brief arguments).
Policy proposal: “require all AI generated images to contain proof that they were generated by a computer”: 76% support, 9% oppose (after reading brief arguments).
Policy proposal: international regulation of military AI: 60% support, 17% oppose (after reading brief arguments).
58% prefer thorough regulation vs 15% no regulation (after reading brief arguments).
41% prefer international regulation vs 24% national regulation (after reading brief arguments).
57% prefer government regulation vs 18% industry self-regulation (after reading brief arguments).
Ipsos (Jul 24, 2023) (online survey, 1000 responses)
“AI companies committing to internal and external security testing before their release”: 77% support, 14% oppose.
“AI companies committing to sharing information across the tech industry, government, civil society, and academia to help manage AI risk”: 69% support, 20% oppose.
“AI companies agreeing to investing in cybersecurity and insider threat safeguard to protect model weights, the most important part of an AI system”: 72% support, 16% oppose.
“AI companies committing to using third-parties to help discover and report system vulnerabilities after AI is released”: 66% support, 20% oppose.
“AI companies developing systems so that users know when content is created by AI”: 76% support, 15% oppose.
“AI companies publicly reporting their systems capabilities and their appropriate and inappropriate use”: 75% support, 15% oppose.
“AI companies committing to conduct research on risk that AI can pose to society, like bias, discrimination, and protecting privacy”: 70% support, 17% oppose.
“AI companies committing to developing AI systems to help address society’s biggest challenges”: 64% support, 23% oppose.
White House voluntary commitments: “I trust these tech companies to keep their commitment to responsibly develop AI”: 48% agree, 42% disagree.
White House voluntary commitments: “Congress should pass a law making these voluntary commitments legal requirements”: 71% agree, 16% disagree.
White House voluntary commitments: “Following these commitments to responsibly develop AI makes a positive future with AI more likely”: 69% agree, 19% disagree.
-
-
Enthusiasm: 16% very, 25% somewhat, 28% not too, 30% not at all.
Concern: 31% very, 35% somewhat, 14% not too, 8% not at all.
34% Humans are smarter than AI, 22% AI is smarter than humans, 16% humans and AI are equally smart.
“Do you believe there is a threshold in the evolution of AI after which humans cannot take back control of the artificial intelligence they've created?”: 21% “yes, definitely”; 44% “yes, probably”; 22% “no, probably not”; 13% “no, definitely not.” Of those who say yes, median time is “next five years,” the second-shortest of five options.
Optimism about where society will land with AI: 26% optimistic, 37% pessimistic, 37% neither.
YouGov (Aug 20, 2023) (online survey, 1000 responses)
Types of AI that first come to mind are robots (20%), text generation tools (13%), chatbots (10%), and virtual personal assistants (9%).
Effects of AI on society: 20% positive, 27% equally positive and negative, 37% negative.
-
23% “we should open source powerful AI models,” 47% we should not (after reading brief arguments).
71% prefer caution to reduce risks, 29% prefer avoiding too much caution to see benefits.
YouGov (Aug 27, 2023) (online survey of citizens, 1000 responses)
Effects of AI on society: 15% positive, 40% negative, 29% equally positive and negative.
Biggest concern is “The spread of misleading video and audio deep fakes” (85% concerned).
AI should be much more regulated (52%), be somewhat more regulated (26%), have no change in regulation (6%), be somewhat less regulated (1%), be much less regulated (2%).
A six-month pause on some kinds of AI development: 36% strongly support, 22% somewhat support, 11% somewhat oppose, 8% strongly oppose (similar to April and June).
Data for Progress (Sep 9, 2023) (online survey of likely voters, 1191 responses) (note that Data for Progress may have an agenda and whether it releases a survey may depend on its results)
12% AI will increase jobs in the US, 60% decrease, 13% no effect.
“Creating a dedicated federal agency to regulate AI”: 62% support, 27% oppose (after reading a neutral description) (identical to their results from March).
Gallup (Sep 13, 2023) (online survey, 5458 responses)
-
-
-
Deltapoll (Oct 27, 2023) (online survey, 1126 US responses)
-
-
The rest of this page up to “Demographic analysis” is quite out of date.
Of these surveys, the most relevant to future AI capabilities and outcomes are Northeastern/Gallup, GovAI 2018, YouGov (Apr 3), and the first two of the five sections of Stevens/MC.
Interpreting surveys
Similar questions in different surveys involve slightly different populations, survey methods, contexts, and, especially, question phrasings, so differences between answers to similar questions may arise from these confounders. Monmouth repeated some questions from its 2015 survey in its 2023 survey, which better enables comparing responses over time.
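As a rough illustration of how an over-time comparison like Monmouth's can be checked against sampling noise, here is a minimal Python sketch (standard library only) of a two-proportion z-test, applied to the “more good than harm” figures that appear in the list above (11% and 9%). The sample sizes are placeholder assumptions, not the surveys' actual ns:

```python
# Minimal sketch: is an 11% -> 9% change on the same item across two
# survey waves larger than sampling noise? Sample sizes are ASSUMED.
import math

def two_proportion_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z-statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.11, 1000, 0.09, 1000)  # placeholder ns of 1000
print(f"z = {z:.2f}")  # about 1.49; |z| < 1.96, not significant at the 5% level
```

With these placeholder sample sizes, a two-point change is within sampling error; and as the next paragraph notes, phrasing and context effects can swamp sampling error anyway.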
Responses to similar questions differ substantially across surveys, in ways that suggest responses are sensitive to minor differences in context or phrasing. For example, Brookings (2018b) elicits much longer timelines than GovAI 2018; Northeastern, Brookings (2018a), GovAI 2018, and Stevens/MC disagree on the effect of AI on jobs; many surveys disagree on positivity/excitement/support versus worry/concern; and MC finds that 50% agree that AI is humanity's greatest existential threat, while GovAI 2018 finds that respondents rank AI around last on a list of 15 potential global risks. When prompted, many respondents suggest that AI is an urgent issue (e.g., half agree that AI is humanity's greatest existential threat, MC; the median respondent thinks high-level machine intelligence by 2028 is more likely than not, GovAI 2018), but most other responses suggest that they are not thinking of AI as so transformative (e.g., treating surveillance and privacy as much greater priorities than value alignment and critical AI systems failure, GovAI 2018). So at least some respondents say AI will be human-level or will have major consequences when prompted to consider that question, even though this does not carry over to their other responses.
See also Baobao Zhang's Public opinion lessons for AI regulation (2019), which interprets survey data on applications of AI.
Some interesting results
At least when prompted to consider it, most respondents think AI will reach human level, and most think AI will have important consequences. This is largely inconsistent with the proposition that most people would dismiss the possibility of AI having humanlike capabilities or profound consequences in the next few decades. Still, AI is not yet a public-sphere issue: there is much less public discourse on AI than there may be in the future, and future discourse could emphasize a variety of possible actors, issues, and framings.
Stevens/MC respondents say it is more important for “AI developers” to learn about “the uses, limitations, and ethical considerations of AI” than any other group listed, including government regulators and business leaders. This is evidence for the proposition that Americans care a lot about who develops AI (though the result is somewhat confounded by the “uses” and “limitations” parts of the question).
Open questions
An important aspect of public opinion mostly neglected by these surveys is the applications and issues that come to mind when people hear “artificial intelligence” or “machine learning.” (For example, perhaps Americans who hear “artificial intelligence” mostly just think about robots and self-driving cars.) Plausible candidates include robots, self-driving cars, automation, facial recognition, “data” & surveillance & privacy, “algorithms” & bias, social media recommender systems, autonomous weapons, and cyberattacks. See also GovAI 2018's section on AI governance challenges for what respondents say about given issues. Separately from specific AI applications, Americans may care about who creates or controls AI systems; the future of work; and whether AI systems have consciousness, common sense, “real” intelligence, etc.
Demographic analysis
All of the listed surveys have some demographic breakdowns. We did not investigate demographic differences carefully; the only obvious trend across surveys is that college-educated respondents are somewhat more optimistic and less concerned about AI.
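For readers who want to probe such demographic differences themselves, here is a minimal pandas sketch of the relevant crosstab. The file name and column names are assumptions for illustration; most of the surveys above release only topline tables, not microdata:

```python
# Hypothetical sketch: crosstab of concern about AI by education level.
# "responses.csv", "education", and "concern" are ASSUMED names, since
# most surveys listed above do not release respondent-level microdata.
import pandas as pd

df = pd.read_csv("responses.csv")
table = pd.crosstab(df["education"], df["concern"], normalize="index")
print(table.round(2))  # within each education group, the share giving each answer
```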
Other surveys
YouGov's AI and the End of Humanity (2023) gave respondents a list of 9 possible threats. AI ranked 7th in terms of respondents' credence (44% think it's at least somewhat likely that it “would cause the end of the human race on Earth”) and 5th in terms of respondents' concern (46% concerned that it “will cause the end of the human race on Earth”). Rethink Priorities's US public opinion of AI policy and risk (2023) asked a similar question, giving respondents a list of 5 possible threats plus “Some other cause.” AI ranked last, with just 4% choosing it as most likely to cause human extinction. Public First asked about 7 potential dangers over the next 50 years; AI was 6th on worry (56% worried) and 7th on “risk that it could lead to a breakdown in human civilization” (28% think there is a real risk). Fox News asked about concern about 16 issues; AI was 13th (56% concerned).
Alexia Georgiadis's The Effectiveness of AI Existential Risk Communication to the American and Dutch Public (2023) evaluates the effect of various media interventions on AI risk awareness. See also Otto Barten's “AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results” and “Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure” (both 2023). Note that the Existential Risk Observatory is a pro-caution advocacy organization.
YouGov has a simple tracking poll called robot intelligence.
Some surveys are nominally about AI but are not focused on respondents' beliefs and attitudes on AI, including Brookings (2018c), Northeastern/Gallup's Facing the Future (2019), and Pew's AI in Hiring and Evaluating Workers and A majority of Americans have heard of ChatGPT, but few have tried it themselves (2023). There are also various surveys about automation which are mostly irrelevant to AI, such as Pew's Automation in Everyday Life (2017) and Gallup's More U.S. Workers Fear Technology Making Their Jobs Obsolete (2023).
On the history of AI in the mass media, see Fast and Horvitz 2017, Ouchchy et al. 2020, Chuan et al. 2019, and Zhai et al. 2020.
We are uncertain about the quality of SYZYGY 2017, Blumberg Capital 2019, Jones-Skiena 2020–2022, and Campaign for AI Safety (2023a, 2023b) (note that Campaign for AI Safety is a pro-caution advocacy organization).
Author: Zach Stein-Perlman