
List of possible risks from AI

Created 26 September, 2023. Last updated 26 September, 2023.

Many people are concerned about AI for reasons other than existential risk. Here we list some of these other concerns.1)

Possible Risks

Misinformation

  • Harder to know what is real.2)
  • Deepfakes.3)
  • Widespread manipulation & scams.4)
  • Personalized phishing attacks.5)
  • Misleading voters, or undermining democracy in other ways.6)

Bias

  • AI systems exhibit biases, including for attributes which are legally protected against discrimination.7)
  • Unfair decisions in the justice system,8) hiring,9) etc.
  • A feedback loop which increases bias over time or encourages it to persist: a system is trained on biased data, then deployed, producing more disparate impact, so future systems trained on the resulting data are even more biased.10)

Employment

  • Mass unemployment.11)
  • Loss of the sense of worth or dignity people derive from work.12)
  • Lossy automation where valuable parts of a process get forgotten.
  • Workers have less say in economic decisions.
  • Economic inequality increases.13)

Science

  • Humans would no longer be at the forefront of knowledge.
  • All the smart people get sucked into working on AI, impoverishing our culture in other ways.
  • If making predictions becomes much easier relative to understanding the world, then science may shift its focus away from understanding.

Political Consolidation

  • Strengthening authoritarian control.14)
  • Extremely persuasive propaganda causes people to support terrible governments.15)
  • Predicting where opposition will arise or who opposes the government leads to an extremely efficient police state.16)
  • Facial recognition makes identifying and prosecuting protesters much easier.17)
  • Political inequality increases.
  • Big Tech gains more influence through propaganda or lobbying.

Widespread Access to Dangerous Things

  • Democratization of weapons designs.
  • Making bioterrorism easier.18)
  • Widespread hacking19) & cyber thefts.

Miscellaneous

  • Environmental cost of the electricity and water use of data centers.20)
  • Possibility of AI-run infrastructure being vulnerable to adversarial attacks.21)
  • Widespread socialization with chatbots makes building relationships between people harder.22)
  • Failure of institutions that do not keep up with changes.
  • Loss of the meaning derived from being useful, if AI does everything better than humans.

Existential Risk (x-risk)

(Main article: Is AI an existential risk to humanity?)

  • Many thinkers believe advanced artificial intelligence (AI) poses a large threat to humanity's long term survival or flourishing.

Suffering Risk (s-risk)

  • A powerful AI which is aligned opposite to human values enslaves and tortures humans.23)
  • AI accelerates competition, leading to a Malthusian trap where everything of value is discarded to be more competitive.24)
  • Mind uploading appears attractive from the outside, but actually has negative value.

Moral Concern for AI

  • Mass servitude.25)
  • Normalization of slavery.26)
  • S-risk for the AI.

Overreaction

  • We give up on growth.
  • We become uncompetitive with yet-unseen alien civilizations.
  • A warning shot causes a Butlerian jihad, which leads to the collapse of civilization.
  • Getting locked into a suboptimal partial transhumanist future, or just a plain human-locked future.

Notes

1)
Many of these concerns were raised in response to a tweet by Katja Grace: https://twitter.com/KatjaGrace/status/1693873324747862370.
2)
Bartels. How to Tell If a Photo Is an AI-Generated Fake. Scientific American. (2023) https://www.scientificamerican.com/article/how-to-tell-if-a-photo-is-an-ai-generated-fake/.
3)
What are deepfakes - and how can you spot them? The Guardian. (2020) https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.
4)
Evans & Novak. Scammers use AI to mimic voices of loved ones in distress. CBS. (2023) https://www.cbsnews.com/news/scammers-ai-mimic-voices-loved-ones-in-distress/.
6)
Klepper & Swensen. AI-generated disinformation poses threat of misleading voters in 2024 election. PBS. (2023) https://www.pbs.org/newshour/politics/ai-generated-disinformation-poses-threat-of-misleading-voters-in-2024-election.
7)
Manyika, Silberg, & Presten. What Do We Do About the Biases in AI? Harvard Business Review. (2019) https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
8)
Dixon. Artificial Intelligence: Benefits and Unknown Risks. American Bar Association. (2021) https://www.americanbar.org/groups/judicial/publications/judges_journal/2021/winter/artificial-intelligence-benefits-and-unknown-risks/.
9)
Lawton. AI hiring bias: Everything you need to know. TechTarget. (2022) https://www.techtarget.com/searchhrsoftware/tip/AI-hiring-bias-Everything-you-need-to-know.
10)
Casacuberta. Bias in a Feedback Loop: Fuelling Algorithmic Injustice. CCCB Lab. (2018) https://lab.cccb.org/en/bias-in-a-feedback-loop-fuelling-algorithmic-injustice/.
11)
Vainilavičius. AI anxiety: the daunting prospect of mass unemployment. Cybernews. (2023) https://cybernews.com/editorial/ai-anxiety-grips-uncertain-future/.
12)
Dignity at work and the AI revolution. Trade Union Congress. (2021) https://www.tuc.org.uk/research-analysis/reports/dignity-work-and-ai-revolution.
13)
Lu. AI will increase inequality and raise tough questions about humanity, economists warn. The Conversation. (2023) https://theconversation.com/ai-will-increase-inequality-and-raise-tough-questions-about-humanity-economists-warn-203056.
14)
Shabaz. The Rise of Digital Authoritarianism. Freedom House. (2018) https://freedomhouse.org/report/freedom-net/2018/rise-digital-authoritarianism.
15)
Rizzuto. AI Propaganda Will Be Effective and Easily Accessible. Tech Policy Press. (2023) https://techpolicy.press/ai-propaganda-will-be-effective-and-easily-accessible/.
16)
Mozur, Xiao, & Liu. ‘An Invisible Cage’: How China Is Policing the Future. New York Times. (2022) https://www.nytimes.com/2022/06/25/technology/china-surveillance-police.html.
17)
Dizikes. How an “AI-tocracy” emerges. MIT News. (2023) https://news.mit.edu/2023/how-ai-tocracy-emerges-0713.
19)
Palisade Research. (Accessed September 20, 2023) https://palisaderesearch.org/.
20)
Dolby. Artificial Intelligence Can Make Companies Greener, but It Also Guzzles Energy. Wall Street Journal. (2023) https://www.wsj.com/articles/artificial-intelligence-can-make-companies-greener-but-it-also-guzzles-energy-7c7b678.
21)
Comiter. Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It. Belfer Center for Science and International Affairs, Harvard Kennedy School. (2019) https://www.belfercenter.org/publication/AttackingAI.
22)
Collins. Could AI do more harm than good to relationships? Deseret News. (2023) https://www.deseret.com/2023/9/6/23841752/ai-artificial-intelligence-chatgpt-relationships-real-life.
23)
Baumann. Focus areas of worst-case AI safety. Reducing Risks of Future Suffering. (2017) https://s-risks.org/focus-areas-of-worst-case-ai-safety/.
24)
Pethokoukis. A Nobel Laureate Economist Explains How AI Could Bring Back the Age of Malthus. American Enterprise Institute. (2018) https://www.aei.org/economics/a-nobel-laureate-economist-explains-how-ai-could-bring-back-the-age-of-malthus/.
25)
Milinkovic. The Moral and Legal Status of Artificial Intelligence (Present Dilemmas and Future Challenges). Sciendo. (2021) https://sciendo.com/article/10.2478/law-2021-0004.
26)
Dowdeswell & Goltz. The ethical concerns over enslaving AI. The Academic. (2023) https://theacademic.com/ethical-concerns-over-enslaving-ai/.
arguments_for_ai_risk/list_of_possible_risks_from_ai.txt · Last modified: 2023/09/26 17:00 by jeffreyheninger