Differences

This shows you the differences between two versions of the page.

arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai [2022/12/13 05:04]
katjagrace
arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai [2023/06/08 21:39] (current)
rickkorzekwa Updated links to point to Wiki instead of main site
Line 30: Line 30:
 
 <HTML>
-<p><strong>Bensinger, Rob, Eliezer Yudkowsky, Richard Ngo, So8res, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.”</strong> Accessed August 6, 2022. <a href="https://www.lesswrong.com/s/n945eovrA3oDueqtq">https://www.lesswrong.com/s/n945eovrA3oDueqtq</a>.</p>
+<p><strong>Bensinger, Rob, Eliezer Yudkowsky, Richard Ngo, Nate Soares, Holden Karnofsky, Ajeya Cotra, Carl Shulman, and Rohin Shah. “2021 MIRI Conversations – LessWrong.”</strong> Accessed August 6, 2022. <a href="https://www.lesswrong.com/s/n945eovrA3oDueqtq">https://www.lesswrong.com/s/n945eovrA3oDueqtq</a>.</p>
 </HTML>
 
Line 109: Line 109:
  
  
-===== See also =====
+==== Related (parodies, implicit arguments, counter-counter arguments) ====
 
 <HTML>
-<ul>
-<li><div class="li">
-<a href="/doku.php?id=arguments_for_ai_risk:list_of_sources_arguing_against_existential_risk_from_ai">List of sources arguing against existential risk from AI</a>
-</div></li>
-<li><div class="li">
-<a href="https://aiimpacts.org/does-ai-pose-an-existential-risk/">Is AI an existential threat to humanity?</a>
-</div></li>
-</ul>
+<p><strong>Garfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.”</strong> <em>ArXiv:1703.10987 [Physics]</em>, March 31, 2017. <a href="http://arxiv.org/abs/1703.10987">http://arxiv.org/abs/1703.10987</a>.</p>
 </HTML>
  
-==== Related non-arguments ==== 
  
-<HTML> 
-<p><strong>Garfinkel, Ben, Miles Brundage, Daniel Filan, Carrick Flynn, Jelena Luketina, Michael Page, Anders Sandberg, Andrew Snyder-Beattie, and Max Tegmark. “On the Impossibility of Supersized Machines.”</strong> <em>ArXiv:1703.10987 [Physics]</em>, March 31, 2017. <a href="http://arxiv.org/abs/1703.10987">http://arxiv.org/abs/1703.10987</a>.</p> 
-</HTML> 
  
 +===== See also =====
 +
 +
 +  * [[arguments_for_ai_risk:list_of_sources_arguing_against_existential_risk_from_ai|List of sources arguing against existential risk from AI]]
 +  * [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start|Is AI an existential risk to humanity?]]
 +
 +
 +//Primary author: Katja Grace//
  
-<HTML> 
-<p><em>Primary author: Katja Grace</em></p> 
-</HTML> 
  
  
arguments_for_ai_risk/list_of_sources_arguing_for_existential_risk_from_ai.1670907869.txt.gz · Last modified: 2022/12/13 05:04 by katjagrace