List of Analyses of Time to Human-Level AI

Published 22 January, 2015; last updated 2 November, 2022

This is a list of most of the substantial analyses of AI timelines that we know of. It also covers most of the arguments and opinions of which we are aware.


The list below contains substantial publicly available analyses of when human-level AI will appear. To qualify for the list, an item must provide both a claim about when human-level artificial intelligence (or a similar technology) will exist and substantial reasoning to support it. ‘Substantial’ is subjective, but a fairly low bar, with some emphasis on detail, novelty, and expertise. We exclude arguments that AI is impossible, though they are technically about AI timelines.


  • Good, Some future social repercussions of computers (1970) predicts 1993 give or take a decade, based roughly on the availability of sufficiently cheap, fast, and well-organized electronic components, or on a good understanding of the nature of language, and on the number of neurons in the brain.
  • Moravec, Today’s Computers, Intelligent Machines and Our Future (1978) projects that ten years later hardware equivalent to a human brain would be cheaply available, and that if software development ‘kept pace’ then machines able to think as well as a human would begin to appear then.
  • Solomonoff, The Time Scale of Artificial Intelligence: Reflections on Social Effects (1985) estimates one to fifty years to a general theory of intelligence, then ten or fifteen years to a machine with general problem solving capacity near that of a human, in some technical professions.
  • Waltz, The Prospect for Building Truly Intelligent Machines (1988) predicts human-level hardware in 2017 and says the development of human-level AI might take another twenty years.
  • Vinge, The Coming Technological Singularity: How to Survive in the post-Human Era (1993) argues for less than thirty years from 1993, largely based on hardware extrapolation.
  • Eder, Re: The Singularity (1993) argues for 2035 based on two lines of reasoning: hardware extrapolation to computation equivalent to the human brain, and hyperbolic human population growth pointing to a singularity at that time.
  • Yudkowsky, Staring Into the Singularity 1.2.5 (1996) presents a calculation suggesting a singularity will occur in 2021, based on hardware extrapolation and a simple model of recursive hardware improvement.
  • Bostrom, How Long Before Superintelligence? (1997) argues that it is plausible to expect superintelligence in the first third of the 21st century. In 2008 he added that he did not think the probability of this was more than half.
  • Bostrom, When Machines Outsmart Humans (2000) argues that we should take seriously the prospect of human-level AI before 2050, based on hardware trends and the feasibility of uploading or of software based on understanding the brain.
  • Kurzweil, The Singularity is Near (pdf) (2005) predicts 2029, based mostly on hardware extrapolation and the belief that the understanding necessary for software is growing exponentially. He also made a bet with Mitchell Kapor, which he explains alongside the bet and here. Kapor also explains his reasoning alongside the bet, though it is nonspecific about timing, to the extent that it isn't clear whether he thinks human-level AI will ever occur; this is why he isn't included in this list.
  • Peter Voss, Increased Intelligence, Improved Life (video) (2007) predicts less than ten years and probably less than five, based on the perception that other researchers pursue unnecessarily difficult routes, and that shortcuts probably exist.
  • Moravec, The Rise of the Robots (2009) predicts AI rivalling human intelligence well before 2050, based on progress in hardware, estimating how much hardware is equivalent to a human brain, and comparison with animals whose brains appear to be equivalent to present-day computers. Moravec made similar predictions in the 1988 book Mind Children.
  • Legg, Tick, Tock, Tick Tock Bing (2009) predicts 2028 in expectation, based on details of progress and what remains to be done in neuroscience and AI. He reaffirmed this prediction in 2012.
  • Allen, The Singularity Isn’t Near (2011) criticizes Kurzweil’s prediction of a singularity around 2045, based mostly on disagreeing with Kurzweil on rates of brain science and AI progress.
  • Hutter, Can Intelligence Explode (2012) works from a prediction of not much later than the 2030s, based on hardware extrapolation and the belief that software will not lag far behind.
  • Chalmers, The Singularity: A Philosophical Analysis (2010) guesses that human-level AI is more likely than not this century. He points to several early estimates, but expresses skepticism about hardware extrapolation, based on the apparent algorithmic difficulty of AI. He argues that AI should be feasible within centuries (conservatively) based on the possibility of brain emulation, and the past success of evolution.
  • Fallenstein and Mennen, Predicting AGI: What can we say when we know so little? (2013) suggest using a Pareto distribution to model the time until we get a clear sign that human-level AI is imminent. Based on an estimate of 60 years since the beginning of the field, they get a median estimate of about 60 years, depending on the exact distribution.
  • Drum, Welcome, Robot Overlords. Please Don’t Fire Us? (2013) argues for around 2040, based on hardware extrapolation.
  • Muehlhauser, When will AI be Created? (2013) argues for uncertainty, based on surveys being unreliable, hardware trends being insufficient without software, and software being potentially jumpy.
  • Bostrom, Superintelligence (2014) concludes that ‘…it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century and that it has a non-trivial chance of being developed considerably sooner or much later…’, based on expert surveys and interviews.
  • Sutton, Creating Human Level AI: How and When? (2015) places a 50% chance on human-level AI by 2040, based largely on hardware extrapolation and the view that software has an even chance of following within a decade of sufficient hardware.
  • Cotra, 2020 Draft Report on Biological Anchors (2020) predicts a median of 2050 for when someone will be able to develop transformative AI by extrapolating trends in hardware, spending, and algorithmic progress and using biology-inspired estimates of the effective compute required to train transformative AI. See also discussion on LessWrong and Cotra’s 2022 update.
  • Kokotajlo, Fun with +12 OOMs of Compute (2021) argues that TAI will probably appear before 2040, because it could probably be made with training compute of 10^29 floating-point operations and there will probably be training runs that big by 2040.
  • Davidson, Semi-informative priors over AI timelines (2021) predicts a 20% chance of AGI by 2100, just using facts like how long humanity has been working on AGI, how long it has taken to solve other problems in AGI's reference class, and inputs to AI.
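A few of the quantitative arguments above are simple enough to reproduce. The recursive-improvement model behind Yudkowsky's 1996 calculation can be illustrated as a toy geometric series; the two-year initial doubling time comes from the essay, but the code is only an illustration, not a reconstruction of his full calculation:

```python
# Toy version of the recursive-improvement model in "Staring Into the
# Singularity": computing speed doubles every two years, and each doubling
# halves the time the next one takes (because faster machines do the
# engineering). The doubling times 2, 1, 0.5, ... form a geometric series
# with a finite sum, so the model's speed diverges at a fixed date.

def years_to_divergence(first_doubling_years: float = 2.0) -> float:
    # d + d/2 + d/4 + ... = 2 * d
    return 2 * first_doubling_years

print(years_to_divergence())  # 4.0 years from the start of recursive improvement
```

Any positive first doubling time gives a finite horizon; the existence of a fixed divergence date, rather than any particular growth rate, is the model's punchline.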
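Fallenstein and Mennen's Pareto model admits a closed form for the conditional median. A minimal sketch, assuming a shape parameter of 1 (the paper considers a range of parameterizations, so this is one illustrative case):

```python
# Sketch of the Pareto-based timing model described by Fallenstein and
# Mennen. If the time-to-milestone T is Pareto-distributed and we have
# already waited t years without seeing the milestone, the conditional
# distribution of T given T > t is again Pareto with scale t:
#   P(T > t + x | T > t) = (t / (t + x)) ** alpha.
# The shape parameter alpha = 1 below is an assumption for illustration.

def median_remaining_years(years_elapsed: float, alpha: float = 1.0) -> float:
    """Median additional wait, given years_elapsed with no milestone yet.

    Solves (t / (t + x)) ** alpha = 0.5 for x.
    """
    return years_elapsed * (2 ** (1 / alpha) - 1)

print(median_remaining_years(60))  # 60.0, matching the ~60-year median above
```

With alpha = 1 the median remaining time always equals the time already elapsed, which is why the 60-year history of the field maps to a roughly 60-year median estimate.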
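The extrapolation step in Kokotajlo's argument (projecting when training runs reach a target compute level) can be sketched as a log-linear trend; the starting compute and growth rate below are illustrative assumptions, not figures from the post:

```python
import math

# Log-linear extrapolation of the largest training runs, as a sketch of the
# kind of projection in "Fun with +12 OOMs of Compute". The 2021 starting
# point (1e24 FLOP) and growth rate (0.5 orders of magnitude per year) are
# hypothetical inputs chosen for illustration.

def year_target_reached(start_year: float, start_flop: float,
                        target_flop: float, oom_per_year: float) -> float:
    """Year a log-linear compute trend crosses target_flop."""
    return start_year + math.log10(target_flop / start_flop) / oom_per_year

print(year_target_reached(2021, 1e24, 1e29, 0.5))  # about 2031
```

Under these assumed inputs the 10^29 FLOP threshold is crossed well before 2040; slower assumed growth pushes the crossing later, which is the sensitivity the argument turns on.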
ai_timelines/list_of_analyses_of_time_to_human-level_ai.txt · Last modified: 2022/12/27 10:16 by zachsteinperlman