Published 29 December, 2014; last updated 10 December, 2020
Alexander Kruel interviewed 37 experts in areas related to AI, starting in 2011 and probably ending in 2012. Of those who answered the timeline question in a fully quantitative way, the median estimates for human-level AI (assuming business as usual) were 2025, 2035 and 2070 for 10%, 50% and 90% probabilities respectively. Most respondents appear to have found human extinction as a result of human-level AI implausible.
Kruel asked each interviewee something similar to “Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans (or better, perhaps unevenly) at science, mathematics, engineering and programming?” Twenty respondents gave fully quantitative answers; for those, the median estimates were 2025, 2035 and 2070 for 10%, 50% and 90% respectively, according to this spreadsheet (belonging to Luke Muehlhauser).
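To make the aggregation concrete, here is a minimal sketch (not drawn from the actual spreadsheet) of how per-probability medians of this kind can be computed: each respondent contributes a (10%, 50%, 90%) year triple, and the median is taken over each column separately. The response values below are illustrative placeholders only, not the survey data.

```python
from statistics import median

# Illustrative placeholder triples, NOT the actual interview data:
# each entry is one respondent's (10%, 50%, 90%) year estimate.
responses = [
    (2025, 2040, 2080),
    (2018, 2030, 2060),
    (2030, 2035, 2100),
]

# The summary medians are taken column-wise: the median 10% year,
# the median 50% year, and the median 90% year, independently.
for label, years in zip(("10%", "50%", "90%"), zip(*responses)):
    print(f"{label} median year: {median(years)}")
```

Note that, because each column is summarized separately, the three reported years need not come from any single respondent.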
Kruel also asked each interviewee something like:
“What probability do you assign to the possibility of human extinction as a result of badly done AI?
Explanatory remark to Q2:
P(human extinction | badly done AI) = ?
(Where ‘badly done’ = AGI capable of self-modification that is not provably non-dangerous.)”
An arbitrary selection of (abridged) responses; parts that answer the question relatively directly are shown in bold:
The MIRI dataset (to be linked soon) contains all of the ‘full’ predictions mentioned above, and seven more from the Kruel interviews that had sufficient detail for its purposes. Of those 27 participants, we class 10 as AGI researchers, 13 as other AI researchers, 1 as a futurist, and 3 as none of the above.