====== Interviews on plausibility of AI safety by default ======

// Published 2 April 2020; last updated 15 September 2020 //

This is a list of interviews on the plausibility of AI safety by default.

===== Background =====

AI Impacts conducted interviews with several thinkers on AI safety in 2019 as part of a project exploring arguments for expecting advanced AI to be safe by default. The interviews also covered other AI safety topics, such as timelines to advanced AI, the likelihood of current techniques leading to AGI, and currently promising AI safety interventions.
===== List =====

  * [[conversation_notes:conversation_with_ernie_davis|Conversation with Ernie Davis]]
  * [[conversation_notes:conversation_with_rohin_shah|Conversation with Rohin Shah]]
  * [[conversation_notes:conversation_with_paul_christiano|Conversation with Paul Christiano]]
  * [[conversation_notes:conversation_with_adam_gleave|Conversation with Adam Gleave]]
  * [[conversation_notes:conversation_with_robin_hanson|Conversation with Robin Hanson]]
  