====== Surveys of US public opinion on AI ======
  
//Published 30 November 2022; last updated 9 January 2024//
  
We are aware of 45 high-quality surveys of the American public's beliefs and attitudes on the future capabilities and outcomes of AI or policy responses. Methodologies, questions, and responses vary.
  
===== Details =====
    * Respondents are more skeptical of "Increased economic prosperity" as a positive outcome of AI than the other eleven positive outcomes listed.
  * [[https://www.pewresearch.org/internet/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns/|Pew: AI and Human Enhancement]] (2021, published 2022) (online survey, 10260 responses)
    * "The increased use of artificial intelligence computer programs in daily life makes them feel": 18% excited, 37% concerned, 45% equally excited and concerned.
    * Mixed attitudes on technologies described as applications of AI, such as social media algorithms and gene editing.
  * [[https://www.mitre.org/sites/default/files/2023-02/PR-23-0454-MITRE-Harris-Poll-Survey-on-AI-Trends_0.pdf|MITRE/Harris]] (2022, published 2023) (online survey, 2050 responses)
    * Unemployment: 52% AI will increase unemployment, 11% decrease, 21% neither.
    * 55% "governments should try to prevent human jobs from being taken over by AIs or robots," 29% should not.
    * Given 7 potential dangers over the next 50 years, AI is 6th on worry (56% worried) and 7th on "risk that it could lead to a breakdown in human civilization" (28% think there is a real risk).
    * Given 7 potential risks from advanced AI, increasing unemployment is seen as the most important.
    * 11% we should accelerate AI development, 33% slow it, 39% continue at around the same pace.
    * Potential policy: "Banning new research into AI": 24% good idea, 48% bad idea.
    * Potential policy: "Increasing government funding of AI research": 31% good idea, 41% bad idea.
    * Potential policy: "Creating a new government regulatory agency similar to the Food and Drug Administration (FDA) to regulate the use of new AI models": 50% good idea, 23% bad idea.
    * Potential policy: "Introducing a new tax on the use of AI models": 33% good idea, 35% bad idea.
    * Potential policy: "Increasing the use of AI in the school curriculum": 31% good idea, 43% bad idea.
    * Risk that an advanced AI causes humanity to go extinct in the next hundred years: median 1% (given six options, of which 1% was third-lowest).
    * For respondents who said less than 1% on the above, the most common reason was "human civilization is more likely to be destroyed by other factors (eg climate change, nuclear war)" (53%).
  * [[https://www.filesforprogress.org/datasets/2023/5/dfp_chatgpt_ai_regulation_tabs.pdf|Data for Progress: Voters Are Concerned About ChatGPT and Support More Regulation of AI]] (Mar 24, 2023) (online survey of likely voters, 1194 responses) (note that Data for Progress may have an agenda and whether it releases a survey may depend on its results)
    * 56% agree with a pro-slowing message, 35% agree with an anti-slowing message.
    * 66% agree with a pro-antitrust message, 23% agree with an anti-antitrust message.
    * "A law that requires companies to be transparent about the data they use to train their AI": 79% support, 11% oppose (after reading a pro-transparency description).
  * YouGov ([[https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/1|1]], [[https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/2|2]], [[https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/3|3]]) (Apr 3, 2023) (online survey, 20810 responses)
    * 6% say AI is more intelligent than people, 57% AI is likely to eventually become smarter, 23% not likely.
    * 69% support "a six-month pause on some kinds of AI development"; 13% oppose (but they were asked after reading a pro-pause message).
    * AI causing human extinction: 46% concerned, 40% not.
  * [[https://today.yougov.com/topics/technology/articles-reports/2023/04/14/ai-nuclear-weapons-world-war-humanity-poll|AI and the End of Humanity]] (Apr 11, 2023) (online survey of citizens, 1000 responses) (this is a follow-up to the previous survey with some modified questions)
    * AI causing human extinction: 46% concerned, 47% not.
    * A six-month pause on some kinds of AI development: 58% support, 23% oppose.
  * [[https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk|Rethink Priorities: US public opinion of AI policy and risk]] (Apr 14, 2023) (online survey, 2444 responses) (note that Rethink Priorities may have an agenda and whether it releases a survey may depend on its results)
    * Pause on AI research: 51% support, 25% oppose (after reading pro- and anti- messages).
     * "How much confidence do you have in the federal government’s ability to properly regulate artificial intelligence technology?": 10% a great deal, 29% some, 37% not much, 22% none at all.     * "How much confidence do you have in the federal government’s ability to properly regulate artificial intelligence technology?": 10% a great deal, 29% some, 37% not much, 22% none at all.
   * [[https://www.ipsos.com/en-us/we-are-worried-about-irresponsible-uses-ai|Ipsos]] (Apr 26, 2023) (online survey, 1120 responses)   * [[https://www.ipsos.com/en-us/we-are-worried-about-irresponsible-uses-ai|Ipsos]] (Apr 26, 2023) (online survey, 1120 responses)
    * Role the government should have in the oversight of AI: 38% a major role, 49% minor, 13% no role.
    * Government actions: Guidelines that would require people to be notified when they are interacting with an AI system: 81% support, 10% oppose.
    * Government actions: Requiring companies to disclose information about their AI systems, such as data sources, training processes, and algorithmic decision-making methods: 77% support, 12% oppose.
    * Government actions: Requiring AI developers to obtain licenses or certifications: 74% support, 12% oppose.
    * Government actions: Establishment of a task force to study the ethical use of AI: 71% support, 16% oppose.
    * Government actions: Establishment of an oversight body specifically dedicated to AI: 69% support, 18% oppose.
    * Government actions: Regulating what data AI can use to train itself: 66% support, 18% oppose.
    * Government actions: New definitions of copyright that govern who "owns" works created by AI: 64% support, 16% oppose.
    * Government actions: Funding public education and awareness campaigns about AI: 62% support, 23% oppose.
    * Government actions: Funding research into AI and its uses: 62% support, 23% oppose.
    * Government actions: Providing tax breaks and other incentives for companies who use AI responsibly: 40% support, 41% oppose.
    * "The potential benefits of AI, such as increased efficiency and productivity, outweigh the potential job loss": 43% agree, 42% disagree.
    * "AI will create new jobs and opportunities to make up for the jobs that are lost": 39% agree, 39% disagree.
    * "The government should take action to prevent the potential loss of jobs due to AI": 63% agree, 26% disagree.
    * [[https://www.ipsos.com/sites/default/files/ct/news/documents/2023-04/Topline%20Ipsos%20Consumer%20Tracker%20Wave%2074%20JS%20Proof.pdf#page=18|A question on risks from AI]].
  * [[https://www.ipsos.com/en-us/reutersipsos-issues-survey-may-2023|Reuters/Ipsos]] (May 15, 2023) (online survey, 4415 responses)
    * "Just under seven in ten Americans say they are concerned about the increased use of artificial intelligence (AI) (68%). About two in three Americans say that AI will have unpredictable consequences that we ultimately won’t be able to control (67%), and half (52%) say AI is bad for humanity. Three in five Americans believe the uncontrollable effects of AI could be so harmful that it would risk humankind’s future (61%), a similar percentage say that companies replacing workers with AI should pay a penalty to offset the increased unemployment (62%)."
    * 54% AI poses a danger to humanity vs 31% AI will benefit humanity.
  * [[https://static.foxnews.com/foxnews.com/content/uploads/2023/05/Fox_May-19-22-2023_National_Topline_May-26-Release.pdf|Fox News]] (May 22, 2023) (phone survey of registered voters, 1001 responses)
    * Concern: 25% extremely, 31% very, 34% not very, 8% not at all.
    * Common open-ended reactions to AI are "Afraid / Scared / Dangerous" (16%), "Do not like it / General negative / Bad idea" (11%), and "Concern / Hesitancy / Distrust / Caution" (8%).
    * "Thinking about the next few years, how much do you think artificial intelligence will change the way we live in the United States": 43% a lot, 43% some, 9% not much, 3% not at all.
  * [[https://today.yougov.com/topics/politics/articles-reports/2023/05/25/americans-are-divided-artificial-intelligence-poll|The Economist / YouGov]] (May 23, 2023) (online survey, 1500 responses)
    * Regulation: 35% AI should be heavily regulated by government, 37% somewhat regulated, 8% not regulated.
    * Worry: 1% nearly all the time, 7% a lot, 24% a fair amount, 44% only a little bit, 24% not at all (vs 1%, 7%, 21%, 44%, and 27%, respectively, in April).
  * [[https://docs.cdn.yougov.com/1440vr47wk/crosstabs_Views%20on%20AI.pdf|YouGov: Views on AI]] (Jun 5, 2023) (online survey, 2000 responses)
    * Effects on society: 29% positive, 35% negative, 25% neither.
    * 12% AI will increase jobs available in the US, 54% decrease, 14% no effect.
    * "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war": 25% strongly agree, 30% somewhat agree, 12% somewhat disagree, 6% strongly disagree.
    * A six-month pause on some kinds of AI development: 30% strongly support, 26% somewhat support, 13% somewhat oppose, 7% strongly oppose.
  * [[https://www.ipsos.com/en-us/few-americans-trust-companies-developing-ai-systems-do-so-responsibly|Ipsos]] (Jun 18, 2023) (online survey, 1023 responses)
    * 15% "trust the groups and companies developing AI systems to do so responsibly," 83% don't trust.
    * 27% "trust the federal government to regulate AI," 72% don't trust.
  * [[https://harvardharrispoll.com/wp-content/uploads/2023/07/HHP_July2023_KeyResults.pdf#page=59|Harvard CAPS / Harris]] (Jul 20, 2023) (online survey of registered voters, 2068 responses)
    * 70% AI will bring a new revolution in technology; 30% it will not make much difference.
    * 74% AI can be dangerous; 26% fears are overblown.
    * Out of five possible risks of AI, respondents see spreading misinformation (79%) and creating massive fraud (75%) as the most real.
    * Out of five possible benefits of AI, respondents see robots that help people (69%) as the most real.
    * 54% fearful, 46% optimistic (after reading about potential risks and benefits).
    * 37% AI will add jobs, 63% destroy jobs.
    * Regulation: 79% we need more, 21% less.
  * AI Policy Institute / YouGov ([[https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/|1]], [[https://theaipi.org/poll-shows-voters-want-rules-on-deep-fakes-international-standards-and-other-ai-safeguards/|2]]) (Jul 21, 2023) (online survey of voters, 1001 responses) (note that the AI Policy Institute is a pro-caution advocacy organization)
    * 21% excited, 62% concerned, 16% neutral.
    * A federal agency regulating the use of artificial intelligence: 56% support, 14% oppose.
    * Given 7 possible risks from AI, respondents are most concerned about "AI being used to hack your personal data" (81%).
    * 82% "We should go slowly and deliberately," 8% "We should speed up development" (after reading brief arguments).
    * 60% "AI will undermine meaning in our lives"; 19% enhance meaning (after reading brief arguments).
    * 62% "AI will make people dumber"; 17% smarter (after reading brief arguments).
    * "It would be a good thing if AI progress was stopped or significantly slowed": 62% agree, 26% disagree.
    * "I feel afraid of artificial intelligence": 55% agree, 39% disagree.
    * "Tech company executives can’t be trusted to self-regulate the AI industry": 82% agree, 13% disagree.
    * "AI could accidentally cause a catastrophic event": 28% extremely likely, 25% very, 33% somewhat, 7% not at all.
    * "Threat to the existence of the human race": 24% extremely concerned, 19% very, 33% somewhat, 18% not at all.
    * 77% the next 2 years of AI progress will be faster, 6% slower.
    * AGI: 54% within 5 years, 24% more than 5 years.
    * 72% "We should slow down the development and deployment of artificial intelligence," 12% "We should more quickly develop and deploy artificial intelligence."
    * "A legally enforced pause on advanced artificial intelligence research": 49% support, 25% oppose.
    * "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war": 70% agree, 11% disagree.
    * Policy proposal: "any advanced AI model should be required to demonstrate safety before they are released": 65% support, 11% oppose (after reading brief arguments).
 +    * Policy proposal: "any organization producing advanced AI models must require a license, [] all advanced models must be evaluated for safety, and [] all models must be audited by independent experts": 64% support, 13% oppose (after reading brief arguments). 
    * Policy proposal: regulate supply chain for specialized chips: 56% support, 18% oppose (after reading brief arguments).
    * Policy proposal: "require all AI generated images to contain proof that they were generated by a computer": 76% support, 9% oppose (after reading brief arguments).
    * Policy proposal: international regulation of military AI: 60% support, 17% oppose (after reading brief arguments).
    * 58% prefer thorough regulation vs 15% no regulation (after reading brief arguments).
    * 41% prefer international regulation vs 24% national regulation (after reading brief arguments).
    * 57% prefer government regulation vs 18% industry self-regulation (after reading brief arguments).
  * [[https://www.ipsos.com/en-us/most-americans-want-tech-companies-commit-ai-safeguards|Ipsos]] (Jul 24, 2023) (online survey, 1000 responses)
    * "AI companies committing to internal and external security testing before their release": 77% support, 14% oppose.
    * White House voluntary commitments: "Following these commitments to responsibly develop AI makes a positive future with AI more likely": 69% agree, 19% disagree.
  * [[https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/|Pew: Growing public concern about the role of artificial intelligence in daily life]] (Aug 6, 2023) (online survey, 11201 responses)
    * 52% concerned vs 10% excited (vs 38% and 15% respectively in 2022).
  * [[https://www.axios.com/2023/09/11/poll-ai-elections-axios-morning-consult|Axios / Morning Consult]] (Aug 13, 2023) (online survey, 2203 responses)
    * Enthusiasm: 16% very, 25% somewhat, 28% not too, 30% not at all.
    * Concern: 31% very, 35% somewhat, 14% not too, 8% not at all.
    * 34% humans are smarter than AI, 22% AI is smarter than humans, 16% humans and AI are equally smart.
    * "Do you believe there is a threshold in the evolution of AI after which humans cannot take back control of the artificial intelligence they've created?": 21% "yes, definitely"; 44% "yes, probably"; 22% "no, probably not"; 13% "no, definitely not." Of those who say yes, median time is "next five years," the second-shortest of five options.
    * Optimism about where society will land with AI: 26% optimistic, 37% pessimistic, 37% neither.
  * [[https://docs.cdn.yougov.com/cx8wt27o2r/AI%20Technologies_poll_results.pdf|YouGov]] (Aug 20, 2023) (online survey, 1000 responses)
    * Types of AI that first come to mind are robots (20%), text generation tools (13%), chatbots (10%), and virtual personal assistants (9%).
    * Effects of AI on society: 20% positive, 27% equally positive and negative, 37% negative.
  * [[https://theaipi.org/poll-shows-voters-oppose-open-sourcing-ai-models-support-regulatory-representation-on-boards-and-say-ai-risks-outweigh-benefits/|AI Policy Institute / YouGov Blue]] (Sep 6, 2023) (online survey of voters, 1118 responses) (note that the AI Policy Institute is a pro-caution advocacy organization)
    * 23% "we should open source powerful AI models," 47% we should not (after reading brief arguments).
    * 71% prefer caution to reduce risks, 29% prefer avoiding too much caution to see benefits.
  * [[https://docs.cdn.yougov.com/531jxljmmg/Concerns%20about%20AI_poll_results.pdf|YouGov]] (Aug 27, 2023) (online survey of citizens, 1000 responses)
    * Effects of AI on society: 15% positive, 40% negative, 29% equally positive and negative.
    * Biggest concern is "The spread of misleading video and audio deep fakes" (85% concerned).
    * AI should be much more regulated (52%), be somewhat more regulated (26%), have no change in regulation (6%), be somewhat less regulated (1%), be much less regulated (2%).
    * A six-month pause on some kinds of AI development: 36% strongly support, 22% somewhat support, 11% somewhat oppose, 8% strongly oppose (similar to April and June).
  * [[https://twitter.com/DataProgress/status/1701973967303565753|Data for Progress]] (Sep 9, 2023) (online survey of likely voters, 1191 responses) (note that Data for Progress may have an agenda and whether it releases a survey may depend on its results)
    * 12% AI will increase jobs in the US, 60% decrease, 13% no effect.
    * "Creating a dedicated federal agency to regulate AI": 62% support, 27% oppose (after reading a neutral description) (identical to their results from March).
  * [[https://news.gallup.com/opinion/gallup/510635/three-four-americans-believe-reduce-jobs.aspx|Gallup]] (Sep 13, 2023) (online survey, 5458 responses)
  * [[https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll|Artificial Intelligence Policy Institute]] (Sep, 2023) (online poll, 1118 responses)
  * [[https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll|American Psychological Association]] (Sep, 2023) (online poll, 2515 responses)
  * [[https://www.sentienceinstitute.org/aims-survey-2023|Sentience Institute]] (Sep, 2023) (1169 responses)
  * [[https://www.gov.uk/government/publications/international-survey-of-public-opinion-on-ai-safety|Deltapoll]] (Oct 27, 2023) (online survey, 1126 US responses)
  * [[https://static1.squarespace.com/static/631d02b2dfa9482a32db47ec/t/6556228ccd929249f767a65c/1700143757657/Participatory+AI+Risk+Prioritization+%7C+CIP.pdf|Collective Intelligence Project]] (Oct, 2023) (online survey, 1000 responses)
  * [[https://theaipi.org/poll-biden-ai-executive-order-10-30/|Artificial Intelligence Policy Institute]] (Oct, 2023) (1132 US registered voters)
  
//The rest of this page up to "Demographic analysis" is quite out of date.//
==== Open questions ====
  
An important aspect of public opinion mostly neglected by these surveys is the applications and issues that come to mind when people hear "artificial intelligence" or "machine learning." (For example, [[https://web.archive.org/web/20230207112738/https://blumbergcapital.com/ai-in-2019/|perhaps]] Americans who hear "artificial intelligence" mostly just think about robots and self-driving cars.) Plausible candidates include robots, self-driving cars, automation, facial recognition, "data" & surveillance & privacy, "algorithms" & bias, social media recommender systems, autonomous weapons, and cyberattacks. See also GovAI 2018's section on [[https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/public-opinion-on-ai-governance.html#subsecgovchallenges13|AI governance challenges]] for what respondents say about given issues. Separately from specific AI applications, Americans may care about who creates or controls AI systems; the future of work; and whether AI systems have consciousness, common sense, "real" intelligence, etc.
  
==== Demographic analysis ====
==== Other surveys ====
  
YouGov's [[https://today.yougov.com/topics/technology/articles-reports/2023/04/14/ai-nuclear-weapons-world-war-humanity-poll|AI and the End of Humanity]] (2023) gave respondents a list of 9 possible threats. AI ranked 7th in terms of respondents' credence (44% think it's at least somewhat likely that it "would cause the end of the human race on Earth") and 5th in terms of respondents' concern (46% concerned that it "will cause the end of the human race on Earth"). Rethink Priorities's [[https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk|US public opinion of AI policy and risk]] (2023) asked a similar question, giving respondents a list of 5 possible threats plus "Some other cause." AI ranked last, with just 4% choosing it as most likely to cause human extinction. [[https://www.publicfirst.co.uk/wp-content/uploads/2023/03/Public-First-Poll-on-Artificial-Intellignce-USA.pdf|Public First]] asked about 7 potential dangers over the next 50 years; AI was 6th on worry (56% worried) and 7th on "risk that it could lead to a breakdown in human civilization" (28% think there is a real risk). [[https://static.foxnews.com/foxnews.com/content/uploads/2023/05/Fox_May-19-22-2023_National_Topline_May-26-Release.pdf|Fox News]] asked about concern about 16 issues; AI was 13th (56% concerned).
  
Alexia Georgiadis's [[https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk|The Effectiveness of AI Existential Risk Communication to the American and Dutch Public]] (2023) evaluates the effect of various media interventions on AI risk awareness. See also Otto Barten's [[https://forum.effectivealtruism.org/posts/YweBjDwgdco669H72/ai-x-risk-in-the-news-how-effective-are-recent-media-item|AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results]] and [[https://forum.effectivealtruism.org/posts/EoqeJCBiuJbMTKfPZ/unveiling-the-american-public-opinion-on-ai-moratorium-and|Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure]] (2023). Note that the Existential Risk Observatory is a pro-caution advocacy organization.
  
  
Some surveys are nominally about AI but are not focused on respondents' beliefs and attitudes on AI, including [[https://www.brookings.edu/blog/techtank/2018/08/29/brookings-survey-finds-divided-views-on-artificial-intelligence-for-warfare-but-support-rises-if-adversaries-are-developing-it/|Brookings]] (2018c), Northeastern/Gallup's [[https://www.northeastern.edu/gallup/pdf/Northeastern_Gallup_AI_2019.pdf|Facing the Future]] (2019), and Pew's [[https://www.pewresearch.org/internet/2023/04/20/ai-in-hiring-and-evaluating-workers-what-americans-think/|AI in Hiring and Evaluating Workers]] and [[https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/|A majority of Americans have heard of ChatGPT, but few have tried it themselves]] (2023). There are also various surveys about automation which are mostly irrelevant to AI, such as Pew's [[https://www.pewinternet.org/wp-content/uploads/sites/9/2017/10/PI_2017.10.04_Automation_FINAL.pdf|Automation in Everyday Life]] (2017) and Gallup's [[https://news.gallup.com/poll/510551/workers-fear-technology-making-jobs-obsolete.aspx|More U.S. Workers Fear Technology Making Their Jobs Obsolete]] (2023).
  
On the history of AI in the mass media, see [[https://ojs.aaai.org/index.php/AAAI/article/view/10635|Fast and Horvitz 2017]], [[https://link.springer.com/article/10.1007/s00146-020-00965-5|Ouchchy et al. 2020]], [[https://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_162.pdf|Chuan et al. 2019]], and [[http://39.103.203.133/pubs/2021/03/11/e4acd4b6-2d1c-4f0d-bba0-ddec00606d76.pdf|Zhai et al. 2020]].
  
We are uncertain about the quality of [[https://assets.website-files.com/59c269cb7333f20001b0e7c4/59db4483aaa78100013fa85a_Sex_lies_and_AI-SYZYGY-Digital_Insight_Report_2017_US.pdf|SYZYGY 2017]], [[https://web.archive.org/web/20230207112738/https://blumbergcapital.com/ai-in-2019/|Blumberg Capital 2019]], [[https://jasonjones.ninja/jones-skiena-public-opinion-of-ai/|Jones-Skiena 2020--2022]], and Campaign for AI Safety ([[https://www.campaignforaisafety.org/public-opinion/|2023a]], [[https://www.campaignforaisafety.org/usa-ai-x-risk-perception-tracker/|2023b]]) (note that Campaign for AI Safety is a pro-caution advocacy organization).
  
//Author: Zach Stein-Perlman//