List of sources arguing for existential risk from AI

Published 06 August, 2022

This page is incomplete and under active work; it may be updated soon.

This is a bibliography of pieces arguing that AI poses an existential risk.


Adamczewski, Tom. “A Shift in Arguments for AI Risk.” Fragile Credences (blog). Accessed October 20, 2020.

Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete Problems in AI Safety.” arXiv:1606.06565 [cs], July 25, 2016.

Bensinger, Rob, Eliezer Yudkowsky, Richard Ngo, Nate Soares, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.” Accessed August 6, 2022.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.

Carlsmith, Joseph. “Is Power-Seeking AI an Existential Risk? [Draft].” Open Philanthropy Project, April 2021.

Christian, Brian. The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company, 2021.

Christiano, Paul. “What Failure Looks Like.” AI Alignment Forum (blog), March 17, 2019.

Dai, Wei. “Comment on Disentangling Arguments for the Importance of AI Safety – LessWrong.” Accessed December 9, 2021.

Hubinger, Evan, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. “Risks from Learned Optimization in Advanced Machine Learning Systems,” June 5, 2019.

Ngo, Richard. “Thinking Complete: Disentangling Arguments for the Importance of AI Safety.” Thinking Complete (blog), January 21, 2019. (Also LessWrong and the Alignment Forum, with relevant comment threads.)

Ngo, Richard. “AGI Safety from First Principles,” September 28, 2020.

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Illustrated Edition. New York: Hachette Books, 2020.

Piper, Kelsey. “The Case for Taking AI Seriously as a Threat to Humanity.” Vox, December 21, 2018.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.

Turner, Alexander Matt, Logan Smith, Rohin Shah, Andrew Critch, and Prasad Tadepalli. “Optimal Policies Tend to Seek Power.” arXiv:1912.01683 [cs], December 3, 2021.

Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković. New York: Oxford University Press, 2008.

Yudkowsky, Eliezer, Rob Bensinger, and So8res. “2022 MIRI Alignment Discussion – LessWrong.” Accessed August 6, 2022.

Yudkowsky, Eliezer, and Robin Hanson. “The Hanson-Yudkowsky AI-Foom Debate – LessWrong.” Accessed August 6, 2022.

Garfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.” arXiv:1703.10987 [physics], March 31, 2017.

arguments_for_ai_risk/list_of_sources_arguing_for_existential_risk_from_ai.txt · Last modified: 2023/06/08 21:39 by rickkorzekwa