  
<HTML>
<p><strong>Bensinger, Rob, Eliezer Yudkowsky, Richard Ngo, Nate Soares, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.”</strong> Accessed August 6, 2022. <a href="https://www.lesswrong.com/s/n945eovrA3oDueqtq">https://www.lesswrong.com/s/n945eovrA3oDueqtq</a>.</p>
</HTML>
  
<HTML>
<p><strong>Dai, Wei. “Comment on Disentangling Arguments for the Importance of AI Safety – LessWrong.”</strong> Accessed December 9, 2021. <a href="https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety">https://www.lesswrong.com/posts/JbcWQCxKWn3y49bNB/disentangling-arguments-for-the-importance-of-ai-safety</a>.</p>
</HTML>
  
  
  
==== Related (parodies, implicit arguments, counter-counter arguments) ====
  
<HTML>
<p><strong>Garfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.”</strong> <em>ArXiv:1703.10987 [Physics]</em>, March 31, 2017. <a href="http://arxiv.org/abs/1703.10987">http://arxiv.org/abs/1703.10987</a>.</p>
</HTML>
  
  
===== See also =====

  * [[arguments_for_ai_risk:list_of_sources_arguing_against_existential_risk_from_ai|List of sources arguing against existential risk from AI]]
  * [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start|Is AI an existential risk to humanity?]]


//Primary author: Katja Grace//
  
  