/*
EDITOR NOTES (publicly accessible)

-Harlan: this page needs work. Need to add details to some of the items at the end of the list, and update the discussion part after the list
*/
====== Surveys of US public opinion on AI ======
  
//Published 30 November 2022; last updated 9 January 2024//
  
We are aware of 38 high-quality surveys of the American public's beliefs and attitudes on the future capabilities and outcomes of AI or policy responses. Methodologies, questions, and responses vary.
  
===== Details =====
    * Respondents are more skeptical of "Increased economic prosperity" as a positive outcome of AI than the other eleven positive outcomes listed.
  * [[https://www.pewresearch.org/internet/2022/03/17/ai-and-human-enhancement-americans-openness-is-tempered-by-a-range-of-concerns/|Pew: AI and Human Enhancement]] (2021, published 2022) (online survey, 10260 responses)
-    * "The increased use of artificial intelligence computer programs in daily life makes them feel": 18% more excited than concerned, 45% equally excited and concerned, 37% more concerned than excited.+    * "The increased use of artificial intelligence computer programs in daily life makes them feel": 18% excited, 37% concerned, 45% equally excited and concerned.
    * Mixed attitudes on technologies described as applications of AI, such as social media algorithms and gene editing.
  * [[https://www.mitre.org/sites/default/files/2023-02/PR-23-0454-MITRE-Harris-Poll-Survey-on-AI-Trends_0.pdf|MITRE/Harris]] (2022, published 2023) (online survey, 2050 responses)
    * Unemployment: 52% AI will increase unemployment, 11% decrease, 21% neither.
    * 55% "governments should try to prevent human jobs from being taken over by AIs or robots," 29% should not.
    * Given 7 potential dangers over the next 50 years, AI is 6th on worry (56% worried) and 7th on "risk that it could lead to a breakdown in human civilization" (28% think there is a real risk).
    * Given 7 potential risks from advanced AI, increasing unemployment is seen as the most important.
    * 11% we should accelerate AI development, 33% slow, 39% continue around the same pace.
    * Potential policy: "Banning new research into AI": 24% good idea, 48% bad idea.
    * Potential policy: "Increasing government funding of AI research": 31% good idea, 41% bad idea.
    * Potential policy: "Creating a new government regulatory agency similar to the Food and Drug Administration (FDA) to regulate the use of new AI models": 50% good idea, 23% bad idea.
    * Potential policy: "Introducing a new tax on the use of AI models": 33% good idea, 35% bad idea.
    * Potential policy: "Increasing the use of AI in the school curriculum": 31% good idea, 43% bad idea.
    * Risk that an advanced AI causes humanity to go extinct in the next hundred years: median 1% (given six options, of which 1% was third-lowest).
    * For respondents who said less than 1% on the above, the most common reason was "human civilization is more likely to be destroyed by other factors (eg climate change, nuclear war)" (53%).
  * [[https://www.filesforprogress.org/datasets/2023/5/dfp_chatgpt_ai_regulation_tabs.pdf|Data for Progress: Voters Are Concerned About ChatGPT and Support More Regulation of AI]] (Mar 24, 2023) (online survey of likely voters, 1194 responses) (note that Data for Progress may have an agenda and whether it releases a survey may depend on its results)
    * 56% agree with a pro-slowing message, 35% agree with an anti-slowing message.
    * 66% agree with a pro-antitrust message, 23% agree with an anti-antitrust message.
    * "A law that requires companies to be transparent about the data they use to train their AI": 79% support, 11% oppose (after reading a pro-transparency description).
  * YouGov ([[https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/1|1]], [[https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/2|2]], [[https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/3|3]]) (Apr 3, 2023) (online survey, 20810 responses)
    * 6% say AI is more intelligent than people, 57% AI is likely to eventually become smarter, 23% not likely.
    * 69% support "a six-month pause on some kinds of AI development"; 13% oppose (but they were asked after reading a pro-pause message).
    * AI causing human extinction: 46% concerned, 40% not.
  * [[https://today.yougov.com/topics/technology/articles-reports/2023/04/14/ai-nuclear-weapons-world-war-humanity-poll|YouGov: AI and the End of Humanity]] (Apr 11, 2023) (online survey of citizens, 1000 responses) (a follow-up to the previous survey with some modified questions)
    * AI causing human extinction: 46% concerned, 47% not.
    * A six-month pause on some kinds of AI development: 58% support, 23% oppose.
  * [[https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk|Rethink Priorities: US public opinion of AI policy and risk]] (Apr 14, 2023) (online survey, 2444 responses) (note that Rethink Priorities may have an agenda and whether it releases a survey may depend on its results)
    * Pause on AI research: 51% support, 25% oppose (after reading pro- and anti- messages).
     * 15% "trust the groups and companies developing AI systems to do so responsibly," 83% don't trust.     * 15% "trust the groups and companies developing AI systems to do so responsibly," 83% don't trust.
     * 27% "trust the federal government to regulate AI," 72% don't trust.     * 27% "trust the federal government to regulate AI," 72% don't trust.
  * [[https://harvardharrispoll.com/wp-content/uploads/2023/07/HHP_July2023_KeyResults.pdf#page=59|Harvard CAPS / Harris]] (Jul 20, 2023) (online survey of registered voters, 2068 responses)
    * 70% AI will bring a new revolution in technology; 30% it will not make much difference.
    * 74% AI can be dangerous; 26% fears are overblown.
    * 37% AI will add jobs, 63% destroy jobs.
    * Regulation: 79% we need more, 21% less.
  * AI Policy Institute / YouGov ([[https://theaipi.org/poll-shows-overwhelming-concern-about-risks-from-ai-as-new-institute-launches-to-understand-public-opinion-and-advocate-for-responsible-ai-policies/|1]], [[https://theaipi.org/poll-shows-voters-want-rules-on-deep-fakes-international-standards-and-other-ai-safeguards/|2]]) (Jul 21, 2023) (online survey of voters, 1001 responses) (note that the AI Policy Institute is a pro-caution advocacy organization)
    * 21% excited, 62% concerned, 16% neutral.
    * A federal agency regulating the use of artificial intelligence: 56% support, 14% oppose.
    * White House voluntary commitments: "Following these commitments to responsibly develop AI makes a positive future with AI more likely": 69% agree, 19% disagree.
  * [[https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life/|Pew: Growing public concern about the role of artificial intelligence in daily life]] (Aug 6, 2023) (online survey, 11201 responses)
    * 52% more concerned than excited vs 10% more excited than concerned (vs 38% and 15% respectively in 2022).
  * [[https://www.axios.com/2023/09/11/poll-ai-elections-axios-morning-consult|Axios / Morning Consult]] (Aug 13, 2023) (online survey, 2203 responses)
    * Enthusiasm: 16% very, 25% somewhat, 28% not too, 30% not at all.
    * Concern: 31% very, 35% somewhat, 14% not too, 8% not at all.
    * 34% humans are smarter than AI, 22% AI is smarter than humans, 16% humans and AI are equally smart.
    * "Do you believe there is a threshold in the evolution of AI after which humans cannot take back control of the artificial intelligence they've created?": 21% "yes, definitely"; 44% "yes, probably"; 22% "no, probably not"; 13% "no, definitely not." Of those who say yes, the median time is "next five years," the second-shortest of five options.
    * Optimism about where society will land with AI: 26% optimistic, 37% pessimistic, 37% neither.
  * [[https://docs.cdn.yougov.com/cx8wt27o2r/AI%20Technologies_poll_results.pdf|YouGov]] (Aug 20, 2023) (online survey, 1000 responses)
    * Types of AI that first come to mind are robots (20%), text generation tools (13%), chatbots (10%), and virtual personal assistants (9%).
    * Effects of AI on society: 20% positive, 27% equally positive and negative, 37% negative.
  * [[https://theaipi.org/poll-shows-voters-oppose-open-sourcing-ai-models-support-regulatory-representation-on-boards-and-say-ai-risks-outweigh-benefits/|AI Policy Institute / YouGov Blue]] (Sep 6, 2023) (online survey of voters, 1118 responses) (note that the AI Policy Institute is a pro-caution advocacy organization)
    * 23% "we should open source powerful AI models," 47% we should not (after reading brief arguments).
    * 71% prefer caution to reduce risks, 29% prefer avoiding too much caution to see benefits.
  * [[https://docs.cdn.yougov.com/531jxljmmg/Concerns%20about%20AI_poll_results.pdf|YouGov]] (Aug 27, 2023) (online survey of citizens, 1000 responses)
    * Effects of AI on society: 15% positive, 40% negative, 29% equally positive and negative.
    * Biggest concern is "The spread of misleading video and audio deep fakes" (85% concerned).
    * AI should be much more regulated (52%), be somewhat more regulated (26%), have no change in regulation (6%), be somewhat less regulated (1%), be much less regulated (2%).
    * A six-month pause on some kinds of AI development: 36% strongly support, 22% somewhat support, 11% somewhat oppose, 8% strongly oppose (similar to April and June).
  * [[https://twitter.com/DataProgress/status/1701973967303565753|Data for Progress]] (Sep 9, 2023) (online survey of likely voters, 1191 responses) (note that Data for Progress may have an agenda and whether it releases a survey may depend on its results)
    * 12% AI will increase jobs in the US, 60% decrease, 13% no effect.
    * "Creating a dedicated federal agency to regulate AI": 62% support, 27% oppose (after reading a neutral description) (identical to their results from March).
  * [[https://news.gallup.com/opinion/gallup/510635/three-four-americans-believe-reduce-jobs.aspx|Gallup]] (Sep 13, 2023) (online survey, 5458 responses)
  * [[https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll|Artificial Intelligence Policy Institute]] (Sep 2023) (online poll, 1118 responses)
  * [[https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll|American Psychological Association]] (Sep 2023) (online poll, 2515 responses)
  * [[https://www.sentienceinstitute.org/aims-survey-2023|Sentience Institute]] (Sep 2023) (1169 responses)
  * [[https://www.gov.uk/government/publications/international-survey-of-public-opinion-on-ai-safety|Deltapoll]] (Oct 27, 2023) (online survey, 1126 US responses)
  * [[https://static1.squarespace.com/static/631d02b2dfa9482a32db47ec/t/6556228ccd929249f767a65c/1700143757657/Participatory+AI+Risk+Prioritization+%7C+CIP.pdf|Collective Intelligence Project]] (Oct 2023) (online survey, 1000 responses)
  * [[https://theaipi.org/poll-biden-ai-executive-order-10-30/|Artificial Intelligence Policy Institute]] (Oct 2023) (1132 US registered voters)
  
//The rest of this page up to "Demographic analysis" is quite out of date.//
==== Open questions ====
  
An important aspect of public opinion mostly neglected by these surveys is the applications and issues that come to mind when people hear "artificial intelligence" or "machine learning." (For example, [[https://web.archive.org/web/20230207112738/https://blumbergcapital.com/ai-in-2019/|perhaps]] Americans who hear "artificial intelligence" mostly just think about robots and self-driving cars.) Plausible candidates include robots, self-driving cars, automation, facial recognition, "data" & surveillance & privacy, "algorithms" & bias, social media recommender systems, autonomous weapons, and cyberattacks. See also GovAI 2018's section on [[https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/public-opinion-on-ai-governance.html#subsecgovchallenges13|AI governance challenges]] for what respondents say about given issues. Separately from specific AI applications, Americans may care about who creates or controls AI systems; the future of work; and whether AI systems have consciousness, common sense, "real" intelligence, etc.
  
==== Demographic analysis ====
==== Other surveys ====
  
YouGov's [[https://today.yougov.com/topics/technology/articles-reports/2023/04/14/ai-nuclear-weapons-world-war-humanity-poll|AI and the End of Humanity]] (2023) gave respondents a list of 9 possible threats. AI ranked 7th in terms of respondents' credence (44% think it's at least somewhat likely that it "would cause the end of the human race on Earth") and 5th in terms of respondents' concern (46% concerned that it "will cause the end of the human race on Earth"). Rethink Priorities's [[https://forum.effectivealtruism.org/posts/ConFiY9cRmg37fs2p/us-public-opinion-of-ai-policy-and-risk|US public opinion of AI policy and risk]] (2023) asked a similar question, giving respondents a list of 5 possible threats plus "Some other cause." AI ranked last, with just 4% choosing it as most likely to cause human extinction. [[https://www.publicfirst.co.uk/wp-content/uploads/2023/03/Public-First-Poll-on-Artificial-Intellignce-USA.pdf|Public First]] asked about 7 potential dangers over the next 50 years; AI was 6th on worry (56% worried) and 7th on "risk that it could lead to a breakdown in human civilization" (28% think there is a real risk). [[https://static.foxnews.com/foxnews.com/content/uploads/2023/05/Fox_May-19-22-2023_National_Topline_May-26-Release.pdf|Fox News]] asked about concern regarding 16 issues; AI was 13th (56% concerned).
  
Alexia Georgiadis's [[https://forum.effectivealtruism.org/posts/fqXLT7NHZGsLmjH4o/paper-summary-the-effectiveness-of-ai-existential-risk|The Effectiveness of AI Existential Risk Communication to the American and Dutch Public]] (2023) evaluates the effect of various media interventions on AI risk awareness. See also Otto Barten's [[https://forum.effectivealtruism.org/posts/YweBjDwgdco669H72/ai-x-risk-in-the-news-how-effective-are-recent-media-item|AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results]] and [[https://forum.effectivealtruism.org/posts/EoqeJCBiuJbMTKfPZ/unveiling-the-american-public-opinion-on-ai-moratorium-and|Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure]] (2023). Note that the Existential Risk Observatory is a pro-caution advocacy organization.
  
  
Some surveys are nominally about AI but are not focused on respondents' beliefs and attitudes on AI, including [[https://www.brookings.edu/blog/techtank/2018/08/29/brookings-survey-finds-divided-views-on-artificial-intelligence-for-warfare-but-support-rises-if-adversaries-are-developing-it/|Brookings]] (2018c), Northeastern/Gallup's [[https://www.northeastern.edu/gallup/pdf/Northeastern_Gallup_AI_2019.pdf|Facing the Future]] (2019), and Pew's [[https://www.pewresearch.org/internet/2023/04/20/ai-in-hiring-and-evaluating-workers-what-americans-think/|AI in Hiring and Evaluating Workers]] and [[https://www.pewresearch.org/short-reads/2023/05/24/a-majority-of-americans-have-heard-of-chatgpt-but-few-have-tried-it-themselves/|A majority of Americans have heard of ChatGPT, but few have tried it themselves]] (2023). There are also various surveys about automation which are mostly irrelevant to AI, such as Pew's [[https://www.pewinternet.org/wp-content/uploads/sites/9/2017/10/PI_2017.10.04_Automation_FINAL.pdf|Automation in Everyday Life]] (2017) and Gallup's [[https://news.gallup.com/poll/510551/workers-fear-technology-making-jobs-obsolete.aspx|More U.S. Workers Fear Technology Making Their Jobs Obsolete]] (2023).
  
On the history of AI in the mass media, see [[https://ojs.aaai.org/index.php/AAAI/article/view/10635|Fast and Horvitz 2017]], [[https://link.springer.com/article/10.1007/s00146-020-00965-5|Ouchchy et al. 2020]], [[https://www.aies-conference.com/2019/wp-content/papers/main/AIES-19_paper_162.pdf|Chuan et al. 2019]], and [[http://39.103.203.133/pubs/2021/03/11/e4acd4b6-2d1c-4f0d-bba0-ddec00606d76.pdf|Zhai et al. 2020]].
  
We are uncertain about the quality of [[https://assets.website-files.com/59c269cb7333f20001b0e7c4/59db4483aaa78100013fa85a_Sex_lies_and_AI-SYZYGY-Digital_Insight_Report_2017_US.pdf|SYZYGY 2017]], [[https://web.archive.org/web/20230207112738/https://blumbergcapital.com/ai-in-2019/|Blumberg Capital 2019]], [[https://jasonjones.ninja/jones-skiena-public-opinion-of-ai/|Jones-Skiena 2020--2022]], and Campaign for AI Safety ([[https://www.campaignforaisafety.org/public-opinion/|2023a]], [[https://www.campaignforaisafety.org/usa-ai-x-risk-perception-tracker/|2023b]]) (note that Campaign for AI Safety is a pro-caution advocacy organization).
  
//Author: Zach Stein-Perlman//