Surveys of US public opinion on AI

Published 30 November 2022; last updated 27 May 2023

We are aware of fifteen high-quality surveys of the American public's beliefs and attitudes on the future capabilities and outcomes of AI. Methodologies, questions, and responses vary.

Details

List of surveys

  • Monmouth (2015) (phone survey)
    • Optimism: 11% AI will do more good than harm, 42% more harm than good.
    • Economy: 19% AI will help jobs and the economy, 72% will hurt.
    • Quality of life: 35% AI will help quality of life, 54% will hurt.
    • Existential threat: 44% worried, 55% not worried.
  • Morning Consult (2017) (online survey, 2200 responses)
    • Safety: 41% AI is safe, 37% AI is unsafe.
    • Economy: 28% AI will help the economy, 36% AI will hurt the economy.
    • Support: 51% support AI research; 32% oppose AI research.
    • AI is humanity's greatest existential threat: 50% agree; 31% disagree.
    • Regulation: there should be national regulations on AI (71% agree, 14% disagree) and there should be international regulations on AI (67% agree, 16% disagree).
  • Northeastern/Gallup: Optimism and Anxiety (2017, published 2018) (mail survey, 3297 responses)
    • 76% agree that “AI will fundamentally change the way people work and live in the next 10 years”; of those, 77% expect the change to be mostly positive.
    • AI has a positive impact on life currently (79%) and in the next 10 years (74%).
    • Increased use of AI “will eliminate more jobs than it creates” (73%).
  • Brookings (2018a) (online survey, 1535 responses)
    • Optimism: 12% very optimistic about AI, 29% somewhat optimistic, 27% not very optimistic.
    • Worry: 12% very worried about AI, 27% somewhat worried, 34% not very worried.
    • Jobs: 12% AI will create jobs, 13% no effect on jobs, 38% reduce jobs.
    • Life: 34% AI will make lives easier, 13% harder.
    • Threat: 32% AI is a threat to humans, 24% no threat to humans.
  • Brookings (2018b) (online survey, 2021 responses)
    • “The survey asked how likely robots are to take over most human activities within the next 30 years. Nineteen percent feel this was very likely, 33 percent believes this is somewhat likely, 23 percent feel it is not very likely, and 25 percent were not sure.”
  • GovAI: American Attitudes and Trends (2018, published 2019) (online survey, 2387 responses matched down to 2000)
    • Developing AI: 41% support; 22% oppose (after reading a short explanation).
    • Surveillance/privacy and cyberattacks are seen as likely to be problematic; autonomous weapons are also seen as important but less likely to be problematic; value alignment and critical AI systems failure are not seen as priorities.
    • Respondents trust university researchers and the US military more to develop AI than they trust the rest of the US government and technology companies (with Facebook trusted least).
    • Automation and AI will create more jobs than they will eliminate: 27% agree; 49% disagree.
    • “The median respondent predicts that there is a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as when machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task.”
    • Developing high-level machine intelligence: 31% support; 27% oppose.
    • Outcomes of high-level machine intelligence: 5% extremely good, 21% on balance good, 21% neutral, 22% on balance bad, 12% extremely bad (including human extinction), 18% don't know.
  • University of Wisconsin-Madison: U.S. public attitudes on artificial intelligence (2020) (online survey, 2700 responses, 41% completion rate)
    • “Americans support AI overall, but think there will be unintended consequences” (see section 8).
    • “Majority of Americans think there are both risks and benefits of AI” (see sections 9-10).
    • “Americans are unsure about the likelihood that AI will benefit certain aspects of society” “… but Americans are more likely to think that AI will have negative impacts on themselves and society” (see section 13).
    • “Americans distrust Facebook, the White House, and Congress to keep society's best interest in mind during the development of AI, but trust university and industry scientists the most” (see section 15).
    • Among various “potential uses of AI,” respondents are most concerned about misinformation and automation (see section 16).
  • Stevens/Morning Consult (2021) (online survey)
    • “Nearly half of adults (48%) believe the perceived positives of greater AI adoption in everyday life outweigh the perceived negatives, while 29% believe the opposite.”
    • Respondents are most concerned about privacy (74%), irresponsible use (72%), and jobs (71%), and least concerned about gender bias (39%) and racial bias (47%); additionally, many are concerned about AI “becoming uncontrollable” (67%).
    • Misuse and loss of privacy are seen as more likely risks, but majorities also believe that “AI will control too much of everyday life” (63%), “AI will become smarter than humans” (52%), and “humans won’t be able to control AI” (51%).
    • Respondents are more skeptical of “Increased economic prosperity” as a positive outcome of AI than of the eleven other positive outcomes listed.
  • Pew: AI and Human Enhancement (2021, published 2022) (online survey, 10260 responses)
    • “The increased use of artificial intelligence computer programs in daily life makes them feel”: 18% more excited than concerned, 45% equally excited and concerned, 37% more concerned than excited.
    • Mixed attitudes on technologies described as applications of AI, such as social media algorithms and gene editing.
  • MITRE-Harris (2022, published 2023) (online survey, 2050 responses)
    • AI should be regulated (82%).
    • AI is safe and secure (48%).
    • 78% “concerned about AI being used for malicious intent.”
  • Monmouth: Artificial Intelligence Use Prompts Concerns (2023) (phone survey, 805 responses)
    • Optimism: 9% AI will do more good than harm, 41% more harm than good.
    • Economy: 19% AI will help jobs and the economy, 73% will hurt.
    • Quality of life: 34% AI will help quality of life, 56% will hurt.
    • Existential threat: 55% worried, 44% not worried.
    • ChatGPT: 60% have heard of ChatGPT or similar systems.
    • Regulation: 55% favor “a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices,” 41% oppose.
  • YouGov (1, 2, 3) (2023) (online survey, 20810 responses) (see also YouGov's follow-up, AI and the End of Humanity)
    • 6% say AI is already more intelligent than people, 57% say AI is likely to eventually become smarter, 23% say this is not likely.
    • 69% support “a six-month pause on some kinds of AI development”; 13% oppose (though respondents answered after reading a pro-pause message).
    • AI causing human extinction: 46% concerned, 40% not.
    • “Artificial Intelligence (AI) is a technology that requires careful management”: 91% agree; 3% disagree (83% and 5% in 2018).
  • Data for Progress: Voters Are Concerned About ChatGPT and Support More Regulation of AI (2023) (online survey of likely voters, 1194 responses) (note that Data for Progress has an agenda and whether it releases a survey may depend on its results)
    • 56% agree with a pro-slowing message, 35% agree with an anti-slowing message.
    • “Creating a dedicated federal agency to regulate AI”: 62% support, 27% oppose (after reading a neutral description).
    • 66% agree with a pro-antitrust message, 23% agree with an anti-antitrust message.
    • “A law that requires companies to be transparent about the data they use to train their AI”: 79% support, 11% oppose (after reading a pro-transparency description).
  • Rethink Priorities: US public opinion of AI policy and risk (2023) (online survey, 2444 responses) (note that Rethink Priorities has an agenda and whether it releases a survey may depend on its results)
    • Pause on AI research: 51% support, 25% oppose (after reading pro- and anti- messages).
    • Regulation: 70% yes, 21% no (“Do you think AI should be regulated by a federal agency (similarly to how the FDA regulates the approval of drugs and medical devices)?”).
    • Worry: “In your daily life, how much do you worry about the negative effects AI could have on your life or on society more broadly?” 1% “Nearly all the time,” 7% “A lot,” 21% “A fair amount,” 44% “Only a little bit,” 27% “Not at all.”
    • Extinction: “We estimate that only 9% of the population think extinction from AI to be moderately likely or more over the next 10 years. This increases to 22% for the next 50 years.” See link for details.
    • “How likely do you think it is that AI will eventually become more intelligent than people?” 10% “It already is more intelligent than people,” 15% “Extremely likely,” 19% “Highly likely,” 22% “Moderately likely,” 15% “Only slightly likely,” 15% “Not at all likely.”
    • 49% AI will do more good than harm, vs 31% more harm than good.
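
As a rough guide to how precisely these percentages are measured, here is a minimal sketch of the worst-case 95% margin of error implied by some of the sample sizes above. It assumes simple random sampling; real surveys weight responses, so effective margins are somewhat wider.

  from math import sqrt

  # Worst-case (p = 0.5) 95% margin of error under simple random sampling.
  samples = {"Monmouth 2023": 805, "Morning Consult 2017": 2200,
             "GovAI 2018": 2000, "YouGov 2023": 20810}
  for name, n in samples.items():
      moe = 1.96 * sqrt(0.25 / n)  # half-width of the 95% confidence interval
      print(f"{name}: n = {n}, ±{100 * moe:.1f} points")

Even the smallest sample here implies a margin of only about ±3.5 points, so the large disagreements discussed below are unlikely to be pure sampling error.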

Ipsos (1, 2, 3, 4, 5, 6) and YouGov (1, 2) have not yet been added to the above list.

Wisconsin, MITRE-Harris, Monmouth, YouGov, GovAI 2023, Data for Progress, and Rethink Priorities have mostly not yet been integrated into the rest of this page.

Of these surveys, the most relevant to future AI capabilities and outcomes are Northeastern/Gallup, GovAI 2018, YouGov, and the first two (of five) sections of Stevens/MC.

Interpreting surveys

Similar questions in different surveys involve slightly different populations, survey methods, contexts, and (especially) question phrasings, so differences between answers to similar questions may arise from these confounders rather than from real differences in opinion. Monmouth repeated some questions from its 2015 survey in its 2023 survey; this better enables comparing responses over time.
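
To gauge whether such a change over time exceeds sampling noise, here is a minimal two-proportion z-test sketch in Python, comparing Monmouth's “worried about an existential threat” responses (44% in 2015, 55% in 2023). It assumes simple random sampling and an illustrative 2015 sample size of 1,000 (only the 2023 figure of 805 is listed above); real polls use weighting, so true margins are somewhat wider.

  from math import sqrt

  def two_prop_z(p1, n1, p2, n2):
      """z-statistic for the difference between two independent proportions."""
      p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion
      se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))   # standard error
      return (p2 - p1) / se

  # 44% worried in 2015 (n = 1,000 is an assumed placeholder), 55% in 2023 (n = 805).
  z = two_prop_z(0.44, 1000, 0.55, 805)
  print(f"z = {z:.2f}")  # prints z = 4.65; |z| > 1.96 exceeds 95% sampling noise

On these assumptions, the 11-point shift is far larger than sampling error, which is what makes repeated questions more informative than comparisons across different surveys.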

The different surveys show substantial differences between responses to similar questions, in ways that suggest responses are sensitive to minor differences in context or phrasing. For example, Brookings (2018b) elicits much longer timelines than GovAI 2018; Northeastern/Gallup, Brookings (2018a), GovAI 2018, and Stevens/MC disagree on the effect of AI on jobs; many surveys disagree on positivity/excitement/support versus worry/concern; and Morning Consult finds that 50% believe AI is humanity's greatest existential threat, while GovAI 2018 finds that respondents rank AI around last on a list of 15 potential global risks. When prompted, many respondents suggest that AI is an urgent issue (e.g., half say AI is humanity's greatest existential threat, per Morning Consult; the median respondent assigns better-than-even odds to high-level machine intelligence within about a decade, per GovAI 2018), but most other responses suggest that they are not thinking of AI as so transformative (e.g., respondents see surveillance and privacy as much greater priorities than value alignment and critical AI systems failure, per GovAI 2018). So at least some respondents say AI will be human-level or have major consequences when prompted to think about that question, even though this does not seem to inform their other responses.

See also Baobao Zhang's Public opinion lessons for AI regulation (2019), which interprets survey data on applications of AI.

Some interesting results

At least when prompted to consider it, most respondents think AI will reach human level, and most think AI will have important consequences. This is largely inconsistent with the proposition that most people would dismiss the possibility of AI having humanlike capabilities or profound consequences in the next few decades.

AI is not yet a public-sphere issue, but many respondents say that AI is likely to be very capable and have very important consequences. There seems to be much less public discourse on AI than there might be in the future; future discourse could emphasize a variety of possible actors, issues, and framings.

Stevens/MC respondents say it is more important for “AI developers” to learn about “the uses, limitations, and ethical considerations of AI” than any other group listed, including government regulators and business leaders. This is some evidence that Americans care about who develops AI (though the “uses” and “limitations” parts of the question confound this reading).

Open questions

An important aspect of public opinion mostly neglected by these surveys is the applications and issues that come to mind when people hear “artificial intelligence” or “machine learning.” (For example, perhaps Americans who hear “artificial intelligence” mostly just think about robots and self-driving cars.) Plausible candidates include robots, self-driving cars, automation, facial recognition, “data” & surveillance & privacy, “algorithms” & bias, social media recommender systems, autonomous weapons, and cyberattacks. See also GovAI 2018's section on AI governance challenges for what respondents say about given issues. Separately from specific AI applications, Americans may care about who creates or controls AI systems; the future of work; and whether AI systems have consciousness, common sense, “real” intelligence, etc.

Demographic analysis

All of the listed surveys have some demographic breakdowns. We did not investigate demographic differences carefully; the only obvious trend across surveys is that college-educated respondents are somewhat more optimistic and less concerned about AI.

Other surveys

YouGov's AI and the End of Humanity (2023) gave respondents a list of 9 possible threats. AI ranked 7th in terms of respondents' credence (44% think it's at least somewhat likely that it “would cause the end of the human race on Earth”) and 5th in terms of respondents' concern (46% concerned that it “will cause the end of the human race on Earth”). Rethink Priorities's US public opinion of AI policy and risk (2023) asked a similar question, giving respondents a list of 5 possible threats plus “Some other cause.” AI ranked last, with just 4% choosing it as most likely to cause human extinction.

Alexia Georgiadis's The Effectiveness of AI Existential Risk Communication to the American and Dutch Public (2023) evaluates the effect of various media interventions on AI risk awareness. See also two pieces by Otto Barten: AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results and Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure (both 2023). Note that Barten's Existential Risk Observatory is an advocacy organization.

YouGov has a simple tracking poll called robot intelligence.

Some surveys are nominally about AI but are not focused on respondents' beliefs and attitudes on AI, including Brookings (2018c), Northeastern/Gallup's Facing the Future (2019), and Pew's A majority of Americans have heard of ChatGPT, but few have tried it themselves (2023). There are also various surveys about automation which are mostly irrelevant to AI, such as Pew's Automation in Everyday Life (2017).

Some surveys are not focused on AI but include one or more relevant questions, including Quinnipiac 2023.

On the history of AI in the mass media, see Fast and Horvitz 2017, Ouchchy et al. 2020, Chuan et al. 2019, and Zhai et al. 2020.

We are uncertain about the quality of SYZYGY 2017, Blumberg Capital 2019, and Jones-Skiena 2020–2022.

Author: Zach Stein-Perlman
