Surveys of US public opinion on AI

Published 30 November 2022

We are aware of seven high-quality surveys of the American public's beliefs and attitudes on the future capabilities and outcomes of AI. Methodologies, questions, and responses vary.

Details

List of surveys

  • Morning Consult (2017) (online survey, 2200 responses)
    • Safety: 41% AI is safe, 37% AI is unsafe
    • Economy: 28% AI will help the economy, 36% AI will hurt the economy
    • Support: 51% support AI research, 32% oppose AI research
    • Existential threat: 50% agree that AI is humanity's greatest existential threat, 31% disagree
    • Regulation: there should be national regulations on AI (71% agree, 14% disagree) and there should be international regulations on AI (67% agree, 16% disagree).
  • Northeastern/Gallup: Optimism and Anxiety (2017, published 2018) (mail survey, 3297 responses)
    • 76% agree that “AI will fundamentally change the way people work and live in the next 10 years”; of those, 77% expect the change to be mostly positive.
    • AI has a positive impact on life currently (79%) and in the next 10 years (74%).
    • Increased use of AI “will eliminate more jobs than it creates” (73%).
  • Brookings (2018a) (online survey, 1535 responses)
    • Optimism: 12% very optimistic about AI, 29% somewhat optimistic, 27% not very optimistic.
    • Worry: 12% very worried about AI, 27% somewhat worried, 34% not very worried.
    • Jobs: 12% AI will create jobs, 13% no effect on jobs, 38% reduce jobs.
    • Life: 34% AI will make lives easier, 13% harder.
    • Threat: 32% AI is a threat to humans, 24% no threat to humans.
  • Brookings (2018b) (online survey, 2021 responses)
    • Asked how likely robots are to take over most human activities within the next 30 years: 19% say very likely, 33% somewhat likely, 23% not very likely, and 25% are not sure.
  • GovAI: American Attitudes and Trends (2018, published 2019) (online survey, 2387 responses matched down to 2000)
    • Developing AI: 41% support; 22% oppose (after reading a short explanation).
    • Surveillance/privacy and cyberattacks are seen as likely to be problematic; autonomous weapons are also seen as important but less likely to be problematic; value alignment and critical AI systems failure are not seen as priorities.
    • Trust: respondents trust university researchers and the US military to develop AI more than they trust the rest of the US government and technology companies (especially Meta).
    • Automation and AI will create more jobs than they will eliminate: 27% agree; 49% disagree.
    • “The median respondent predicts that there is a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as when machines are able to perform almost all tasks that are economically relevant today better than the median human (today) at each task.”
    • Developing high-level machine intelligence: 31% support; 27% oppose.
    • Outcomes of high-level machine intelligence: 5% extremely good, 21% on balance good, 21% neutral, 22% on balance bad, 12% extremely bad (including human extinction), 18% don't know.
  • Stevens/MC (2021)
    • “Nearly half of adults (48%) believe the perceived positives of greater AI adoption in everyday life outweigh the perceived negatives, while 29% believe the opposite.”
    • Respondents are most concerned about privacy (74%), irresponsible use (72%), and jobs (71%), and least concerned about gender bias (39%) and racial bias (47%); additionally, many are concerned about AI “becoming uncontrollable” (67%).
    • Misuse and loss of privacy are seen as more likely risks, but majorities also believe that “AI will control too much of everyday life” (63%), “AI will become smarter than humans” (52%), and “humans won’t be able to control AI” (51%).
    • Respondents are more skeptical of “Increased economic prosperity” as a positive outcome of AI than of the other eleven positive outcomes listed.
  • Pew: AI and Human Enhancement (2021, published 2022) (online survey, 10260 responses)
    • “The increased use of artificial intelligence computer programs in daily life makes them feel”: 18% more excited than concerned, 45% equally excited and concerned, 37% more concerned than excited.
    • Mixed attitudes on technologies described as applications of AI, such as social media algorithms and gene editing.

Of these surveys, the most relevant to future AI capabilities and outcomes are Northeastern/Gallup, GovAI, and the first two of Stevens/MC's five sections.

Interpreting surveys

Similar questions in different surveys differ slightly in population, survey method, context, and especially question phrasing, so differences between answers to similar questions may reflect these confounders rather than genuine differences in opinion.
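
One way to see that such gaps are unlikely to be mere sampling noise: for samples of the sizes listed above, the 95% margin of error on a single percentage is only a few points. Below is a minimal sketch, assuming simple random sampling and the worst-case proportion p = 0.5 (the actual surveys use weighting or matching, which typically widens these intervals somewhat); it is illustrative only.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Sample sizes taken from the survey list above.
surveys = {
    "Morning Consult (2017)": 2200,
    "Northeastern/Gallup (2018)": 3297,
    "Brookings (2018a)": 1535,
    "Brookings (2018b)": 2021,
    "GovAI (2019)": 2000,
    "Pew (2022)": 10260,
}

for name, n in surveys.items():
    print(f"{name}: ±{margin_of_error(n):.1f} points")

# Prints roughly ±1.0 (Pew) up to ±2.5 (Brookings 2018a) percentage points,
# far smaller than the cross-survey gaps discussed below.
```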

The surveys show substantial differences between responses to similar questions, in ways that suggest responses are sensitive to minor differences in context or phrasing. For example, Brookings (2018b) elicits much longer timelines than GovAI; Northeastern/Gallup, Brookings (2018a), GovAI, and Stevens/MC disagree on the effect of AI on jobs; many surveys disagree on positivity/excitement/support versus worry/concern; and Morning Consult finds that 50% believe AI is humanity's greatest existential threat, while GovAI finds that respondents rank AI around last on a list of 15 potential global risks. Moreover, when prompted, many respondents treat AI as an urgent issue (e.g., half agree that AI is humanity's greatest existential threat in Morning Consult, and the median GovAI respondent assigns a 54% chance to high-level machine intelligence by 2028), but their other responses suggest they are not thinking of AI as so transformative (e.g., GovAI respondents see surveillance and privacy as much greater priorities than value alignment and critical AI systems failure). So at least some respondents say AI will be human-level or have major consequences when prompted to consider that question directly, even though this does not seem to inform their other answers.

See also Baobao Zhang's Public opinion lessons for AI regulation (2019), which interprets survey data on applications of AI.

Some interesting results

  • At least when prompted to consider the question, most respondents think AI will reach human level, and most think AI will have important consequences. This is largely inconsistent with the proposition that most people dismiss the possibility of AI having humanlike capabilities or profound consequences in the next few decades.
  • AI is not yet a public-sphere issue, but many respondents say that AI is likely to be very capable and have very important consequences. There is much less public discourse on AI than there might be in the future, and future discourse could emphasize a variety of possible actors, issues, and framings.
  • Stevens/MC respondents say it is more important for “AI developers” to learn about “the uses, limitations, and ethical considerations of AI” than any other group listed, including government regulators and business leaders. This is evidence that Americans care a lot about who develops AI (though the “uses” and “limitations” parts somewhat confound this interpretation).

Open questions

An important aspect of public opinion mostly neglected by these surveys is the applications and issues that come to mind when people hear “artificial intelligence” or “machine learning.” (For example, perhaps Americans who hear “artificial intelligence” mostly just think about robots and self-driving cars.) Plausible candidates include robots, self-driving cars, automation, facial recognition, “data” & surveillance & privacy, “algorithms” & bias, social media recommender systems, autonomous weapons, and cyberattacks. See also GovAI's section on AI governance challenges for what respondents say about given issues. Separately from specific AI applications, Americans may care about who creates or controls AI systems; the future of work; and whether AI systems have consciousness, common sense, “real” intelligence, etc.

Demographic analysis

All of the listed surveys have some demographic breakdowns. We did not investigate demographic differences carefully; the only obvious trend across surveys is that college-educated respondents are somewhat more optimistic and less concerned about AI.

Other surveys

Some surveys are nominally about AI but are not focused on respondents' beliefs and attitudes on AI, including Brookings (2018c) and Northeastern/Gallup's Facing the Future (2019). There are also various surveys about automation, such as Pew's Automation in Everyday Life (2017), that are largely irrelevant to AI.

We are uncertain about the quality of SYZYGY 2017, Blumberg Capital 2019, and Jones–Skiena 2020–2022.
