
Differences

This shows you the differences between two versions of the page.


arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai [2022/09/21 07:37]
127.0.0.1 external edit
arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai [2023/06/08 21:32]
rickkorzekwa: Changed Nate's screen name to his full name
Line 30: Line 30:
  
 <HTML>
-<p><strong>Bensinger, Rob, Eliezer Yudkowsky, Richard Ngo, So8res, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.”</strong> Accessed August 6, 2022. <a href="https://www.lesswrong.com/s/n945eovrA3oDueqtq">https://www.lesswrong.com/s/n945eovrA3oDueqtq</a>.</p>
+<p><strong>Bensinger, Rob, Eliezer Yudkowsky, Richard Ngo, Nate Soares, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.”</strong> Accessed August 6, 2022. <a href="https://www.lesswrong.com/s/n945eovrA3oDueqtq">https://www.lesswrong.com/s/n945eovrA3oDueqtq</a>.</p>
 </HTML>
  
Line 56: Line 56:
 <HTML>
 <p><strong>Dai, Wei. “Comment on Disentangling Arguments for the Importance of AI Safety – LessWrong.”</strong> Accessed December 9, 2021. <a href="https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety">https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety</a>.</p>
-</HTML> 
- 
- 
-<HTML> 
-<p><strong>Garfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.”</strong> <em>ArXiv:1703.10987 [Physics]</em>, March 31, 2017. <a href="http://arxiv.org/abs/1703.10987">http://arxiv.org/abs/1703.10987</a>.</p> 
 </HTML>
  
Line 112: Line 107:
 <p><strong>Yudkowsky, Eliezer, and Robin Hanson. “The Hanson-Yudkowsky AI-Foom Debate – LessWrong.”</strong> Accessed August 6, 2022. <a href="https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate">https://www.lesswrong.com/tag/the-hanson-yudkowsky-ai-foom-debate</a>.</p>
 </HTML>
 +
 +
 +==== Related (parodies, implicit arguments, counter-counter arguments) ====
 +
 +<HTML>
 +<p><strong>Garfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.”</strong> <em>ArXiv:1703.10987 [Physics]</em>, March 31, 2017. <a href="http://arxiv.org/abs/1703.10987">http://arxiv.org/abs/1703.10987</a>.</p>
 +</HTML>
 +
  
  