Published 29 December, 2014; last updated 10 December, 2020
The Future of Humanity Institute administered a survey in 2011 at their Winter Intelligence AGI impacts conference. Participants’ median estimate for a 50% chance of human-level AI was 2050.
The survey included the question: “Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer ‘never’ if you believe such a milestone will never be reached.”
The first quartile / second quartile / third quartile responses to each of these three questions were as follows:
10% chance: 2015 / 2028 / 2030
50% chance: 2040 / 2050 / 2080
90% chance: 2100 / 2150 / 2250
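As a minimal sketch of how quartile summaries like these are derived from raw answers, the snippet below computes the first, second, and third quartiles of a set of year responses. The data and the use of Python's statistics.quantiles are purely illustrative assumptions, not the actual survey dataset or the administrators' method.

```python
# Minimal sketch (hypothetical data): computing quartile summaries of the kind
# reported above from raw year responses. Values are illustrative only.
from statistics import quantiles

# Hypothetical answers to "by what year would you assign a 50% chance ...?"
# 'Never' responses would need separate handling and are excluded here.
responses_50pct = [2035, 2040, 2045, 2050, 2050, 2060, 2075, 2080, 2100]

q1, q2, q3 = quantiles(responses_50pct, n=4)  # first, second (median), third quartile
print(f"50% chance of human-level AI: Q1={q1:.0f}, median={q2:.0f}, Q3={q3:.0f}")
```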
Survey participants probably expect AI sooner than comparably expert groups, by virtue of being selected from participants at the Winter Intelligence conference. The conference is described as focusing on “artificial intelligence and the impacts it will have on the world,” a topic of disproportionate interest to researchers who already believe that AI will substantially impact the world soon. The response rate to the survey was 41% (35 respondents), which leaves room for additional response bias.
When asked “Prior to this conference, how much have you thought about these issues?” the respondents were roughly evenly divided between “Significant interest,” “Minor research focus / sustained study,” and “Major research focus.”
When asked to describe their field, 22% of the 35 respondents indicated an area that the survey administrators categorized as “AI and Robotics,” 22% indicated a field categorized as “computer science and engineering,” and the remainder indicated a variety of fields with less direct relevance to AI progress (excepting perhaps cognitive science and neuroscience, whose prevalence the authors do not report). The administrators of the survey write:
“There were no significant (as per ANOVA) inter-group differences in regards to who would develop AI, the outcomes, type of AI, expertise, or likelihood of Watson winning. Merging the AI and computer science group and the philosophy and general academia group did not change anything: participant views did not link strongly to their formal background.” (p. 10).
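For readers unfamiliar with the test the authors mention, the sketch below shows the general shape of a one-way ANOVA comparison across respondent groups, using scipy.stats.f_oneway. The group labels and forecast years are hypothetical assumptions for illustration; they are not the survey's data or the administrators' exact analysis.

```python
# Minimal sketch (hypothetical data): a one-way ANOVA of the kind the survey
# authors describe, comparing forecast years across self-reported fields.
from scipy.stats import f_oneway

ai_robotics = [2040, 2045, 2050, 2060]   # illustrative forecasts only
comp_sci    = [2038, 2050, 2055, 2070]
other       = [2045, 2050, 2080, 2100]

f_stat, p_value = f_oneway(ai_robotics, comp_sci, other)
# A large p-value would correspond to the authors' finding of no significant
# inter-group differences.
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```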