====== Kruel AI Interviews ======

// Published 29 December, 2014; last updated 10 December, 2020 //

Alexander Kruel [[http://wiki.lesswrong.com/wiki/Interview_series_on_risks_from_AI|interviewed]] 37 experts in areas related to AI, starting in 2011 and probably ending in 2012. Among those who answered the question in a fully quantitative way, the median estimates for human-level AI (assuming business as usual) were 2025, 2035, and 2070 for 10%, 50%, and 90% probabilities respectively. Most respondents appear to have found human extinction as a result of human-level AI implausible.

===== Details =====

==== AI timelines question ====

Kruel asked each interviewee something similar to “Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?” Twenty respondents gave full quantitative answers. For those, the median estimates were 2025, 2035, and 2070 for 10%, 50%, and 90% respectively, according to [[https://docs.google.com/spreadsheet/ccc?key=0AvoX2xCTgYnWdFlCajk5a0d0bG5Ld1hYUEQzaS1aQWc&usp=sharing#gid=0|this spreadsheet]] (belonging to Luke Muehlhauser).
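
As a rough illustration of how such summary figures are derived, here is a minimal Python sketch that takes the median separately at each probability level across respondents. The numbers below are hypothetical placeholders, not the actual survey answers:

<code python>
from statistics import median

# Hypothetical placeholder answers (not the real survey data): each row is one
# respondent's estimated year for a 10%, 50%, and 90% chance of human-level AI.
responses = [
    (2020, 2030, 2060),
    (2030, 2040, 2080),
    (2025, 2035, 2070),
]

# Aggregate by taking the median within each column (probability level).
p10, p50, p90 = (median(col) for col in zip(*responses))
print(p10, p50, p90)  # -> 2025 2035 2070 for these placeholder rows
</code>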

==== AI risk question ====

Kruel asked each interviewee something like:

> What probability do you assign to the possibility of human extinction as a result of badly done AI?
>
> Explanatory remark to Q2:
> P(human extinction | badly done AI) = ?
> (Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)

An arbitrary selection of (abridged) responses; parts that answer the question relatively directly are shown in bold:

  * Brandon Rohrer: **<1%**
  * Tim Finin: **.001**
  * Pat Hayes: **Zero**. The whole idea is ludicrous.
  * Pei Wang: I don’t think it makes much sense to talk about “probability” here, except to drop all of its mathematical meaning…
  * J. Storrs Hall: …**unlikely but not inconceivable.** If it happens…it will be because the AI was part of a doomsday device probably built by some military for “mutual assured destruction”, and some other military tried to call their bluff. …
  * Paul Cohen: From where I sit today, **near zero**….
  * William Uther: …Personally, I don’t think ‘Terminator’ style machines run amok is a very likely scenario….
  * Kevin Korb: …**we have every prospect** of building an AI that behaves reasonably vis-a-vis humans, should we be able to build one at all… The ability of humans to speed up their own extinction will, I expect, not be matched any time soon by machine, again not in my lifetime.
  * Michael G. Dyer: Loss of human dominance is a foregone conclusion (100% for loss of dominance)… As to extinction, we will only not go extinct if our robot masters decide to keep some of us around…
  * Peter Gacs: …**near 1%**…

==== Interviewees ====

The MIRI dataset (to be linked soon) contains all of the ‘full’ predictions mentioned above, and seven more from the Kruel interviews that had sufficient detail for its purposes. Of those 27 participants, we class 10 as AGI researchers, 13 as other AI researchers, 1 as a futurist, and 3 as none of the above.
  