Published 30 November 2022
We are aware of seven high-quality surveys of the American public's beliefs and attitudes on the future capabilities and outcomes of AI. Methodologies, questions, and responses vary.
Of these surveys, the most relevant to future AI capabilities and outcomes are Northeastern/Gallup, GovAI, and the first two (of five) sections of Stevens/MC.
Similar questions across surveys involve slightly different populations, survey methods, contexts, and, especially, question phrasings. So differences between answers to similar questions may arise from these confounders rather than from real differences in opinion.
Responses to similar questions differ substantially across surveys, in ways that sometimes suggest that responses are sensitive to minor differences in context or phrasing. For example, Brookings (2018b) elicits much longer timelines than GovAI; Northeastern, Brookings (2018a), GovAI, and Stevens/MC disagree on the effect of AI on jobs; surveys disagree on levels of positivity, excitement, and support versus worry and concern; and MC finds that 50% of respondents believe AI is humanity's greatest threat, while GovAI finds that respondents rank AI around last on a list of 15 potential global risks. Moreover, when prompted, many respondents suggest that AI is an urgent issue (e.g., half say AI is humanity's greatest existential threat (MC), and most say high-level machine intelligence within 10 years is more likely than not (GovAI)), but most of their other responses suggest that they do not think of AI as so transformative (e.g., GovAI respondents rate surveillance and privacy as much greater priorities than value alignment and critical AI-system failure). So at least some respondents seem to say AI will be human-level or have major consequences when prompted to consider that question, even though this does not carry over to their other responses.
See also Baobao Zhang's Public opinion lessons for AI regulation (2019), which interprets survey data on applications of AI.
At least when prompted to consider it, most respondents think AI will reach human level, and most think AI will have important consequences. This is largely inconsistent with the proposition that most people would dismiss the possibility of AI having humanlike capabilities or profound consequences in the next few decades. AI is not yet a public-sphere issue, and there is much less public discourse on AI than there may be in the future; future discourse could emphasize a variety of possible actors, issues, and framings. Stevens/MC respondents say it is more important for “AI developers” to learn about “the uses, limitations, and ethical considerations of AI” than any other group listed, including government regulators and business leaders. This is evidence that Americans care a lot about who develops AI (though the finding is somewhat confounded by the “uses” and “limitations” parts of the question).
An important aspect of public opinion mostly neglected by these surveys is the applications and issues that come to mind when people hear “artificial intelligence” or “machine learning.” (For example, perhaps Americans who hear “artificial intelligence” mostly just think about robots and self-driving cars.) Plausible candidates include robots, self-driving cars, automation, facial recognition, “data” & surveillance & privacy, “algorithms” & bias, social media recommender systems, autonomous weapons, and cyberattacks. See also GovAI's section on AI governance challenges for what respondents say about given issues. Separately from specific AI applications, Americans may care about who creates or controls AI systems; the future of work; and whether AI systems have consciousness, common sense, “real” intelligence, etc.
All of the listed surveys have some demographic breakdowns. We did not investigate demographic differences carefully; the only obvious trend across surveys is that college-educated respondents are somewhat more optimistic and less concerned about AI.
Some surveys are nominally about AI but are not focused on respondents' beliefs and attitudes on AI, including Brookings (2018c) and Northeastern/Gallup's Facing the Future (2019). There are also various surveys about automation which are mostly irrelevant to AI, such as Pew's Automation in Everyday Life (2017).
We are uncertain about the quality of SYZYGY 2017, Blumberg Capital 2019, and Jones-Skiena 2020–2022.