Kruel AI Interviews

Published 29 December, 2014; last updated 10 December, 2020

Alexander Kruel interviewed 37 experts in areas related to AI, starting in 2011 and probably ending in 2012. Of those answering the question in a full quantitative way, median estimates for human-level AI (assuming business as usual) were 2025, 2035 and 2070 for 10%, 50% and 90% probabilities respectively. It appears that most respondents found human extinction as a result of human-level AI implausible.

Details

AI timelines question

Kruel asked each interviewee something similar to “Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?” Twenty respondents gave full quantitative answers. For those, the median estimates were 2025, 2035 and 2070 for 10%, 50% and 90% respectively, according to this spreadsheet (belonging to Luke Muehlhauser).
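
For illustration only, here is a minimal sketch in Python of how such medians could be computed from the full quantitative answers. The respondent figures below are hypothetical placeholders, not the actual interview data:

  # Each full quantitative answer gives years for 10%, 50% and 90%
  # probability of human-level AI; the values here are made up.
  from statistics import median

  responses = [
      {"p10": 2020, "p50": 2030, "p90": 2060},
      {"p10": 2030, "p50": 2045, "p90": 2100},
      {"p10": 2025, "p50": 2035, "p90": 2070},
  ]

  # Median estimate for each probability level across respondents.
  for key in ("p10", "p50", "p90"):
      print(key, median(r[key] for r in responses))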

AI risk question

Kruel asked each interviewee something like:

‘What probability do you assign to the possibility of human extinction as a result of badly done AI?

Explanatory remark to Q2:
P(human extinction | badly done AI) = ?
(Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)’

An arbitrary selection of (abridged) responses:

  • Brandon Rohrer: <1%
  • Tim Finin: .001
  • Pat Hayes: Zero. The whole idea is ludicrous.
  • Pei Wang: I don’t think it makes much sense to talk about “probability” here, except to drop all of its mathematical meaning…
  • J. Storrs Hall: …unlikely but not inconceivable. If it happens…it will be because the AI was part of a doomsday device probably built by some military for “mutual assured destruction”, and some other military tried to call their bluff. …
  • Paul Cohen: From where I sit today, near zero….
  • William Uther: …Personally, I don’t think ‘Terminator’ style machines run amok is a very likely scenario….
  • Kevin Korb: …we have every prospect of building an AI that behaves reasonably vis-a-vis humans, should we be able to build one at all… The ability of humans to speed up their own extinction will, I expect, not be matched any time soon by machine, again not in my lifetime
  • Michael G. Dyer: Loss of human dominance is a foregone conclusion (100% for loss of dominance)…As to extinction, we will only not go extinct if our robot masters decide to keep some of us around…
  • Peter Gacs: …near 1%

Interviewees

The MIRI dataset (to be linked soon) contains all of the ‘full’ predictions mentioned above, and seven more from the Kruel interviews that had sufficient detail for its purposes. Of those 27 participants, we class 10 as AGI researchers, 13 as other AI researchers, 1 as a futurist, and 3 as none of the above.
