====== List of Analyses of Time to Human-Level AI ======
  
// Published 22 January, 2015; last updated 2 November 2022 //


This is a list of most of the substantial analyses of AI timelines that we know of. It also covers most of the arguments and opinions of which we are aware.
  
  
  
  
  
The list below contains substantial publicly available analyses of when human-level AI will appear. To qualify for the list, an item must provide both a claim about when human-level artificial intelligence (or a similar technology) will exist, and substantial reasoning to support it. ‘Substantial’ is subjective, but a fairly low bar with some emphasis on detail, novelty, and expertise. We exclude arguments that AI is impossible, though they are technically about AI timelines.
  
  
  
  
  * [[http://www.tandfonline.com/doi/abs/10.1080/00207237008709398?journalCode=genv20|Good, Some future social repercussions of computers]] (1970) predicts 1993 give or take a decade, based roughly on the availability of sufficiently cheap, fast, and well-organized electronic components, or on a good understanding of the nature of language, and on the number of neurons in the brain.
  * [[https://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1978/analog.1978.html|Moravec, Today’s Computers, Intelligent Machines and Our Future]] (1978) projects that ten years later hardware equivalent to a human brain would be cheaply available, and that if software development ‘kept pace’ then machines able to think as well as a human would begin to appear then.
  * [[http://iospress.metapress.com/content/h505v60q46562260/|Solomonoff, The Time Scale of Artificial Intelligence: Reflections on Social Effects]] (1985) estimates one to fifty years to a general theory of intelligence, then ten or fifteen years to a machine with general problem solving capacity near that of a human, in some technical professions.
  * [[http://www.jstor.org/discover/10.2307/20025144?sid=21105674354083&uid=4&uid=2|Waltz, The Prospect for Building Truly Intelligent Machines]] (1988) predicts human-level hardware in 2017 and says the development of human-level AI might take another twenty years.
  * [[http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html|Vinge, The Coming Technological Singularity: How to Survive in the post-Human Era]] (1993) argues for less than thirty years from 1993, largely based on hardware extrapolation.
  * [[http://www.aleph.se/Trans/Global/Singularity/singul.txt|Eder, Re: The Singularity]] (1993) argues for 2035 based on two lines of reasoning: hardware extrapolation to computation equivalent to the human brain, and hyperbolic human population growth pointing to a singularity at that time.
  * [[https://web.archive.org/web/20200524051751/http://yudkowsky.net/obsolete/singularity.html|Yudkowsky, Staring Into the Singularity 1.2.5]] (1996) presents a calculation suggesting a singularity will occur in 2021, based on hardware extrapolation and a simple model of recursive hardware improvement.
  * [[http://www.nickbostrom.com/superintelligence.html|Bostrom, How Long Before Superintelligence?]] (1997) argues that it is plausible to expect superintelligence in the first third of the 21st century. In 2008 he added that he did not think the probability of this was more than half.
  * [[http://www.nickbostrom.com/2050/outsmart.html|Bostrom, When Machines Outsmart Humans]] (2000) argues that we should take seriously the prospect of human-level AI before 2050, based on hardware trends and the feasibility of uploading or of software based on understanding the brain.
  * [[ai_timelines:kurzweil_the_singularity_is_near|Kurzweil, The Singularity is Near]] [[http://hfg-resources.googlecode.com/files/SingularityIsNear.pdf|(pdf)]] (2005) predicts 2029, based mostly on hardware extrapolation and the belief that the understanding necessary for software is growing exponentially. He also made a [[http://longbets.org/1/|bet]] with Mitchell Kapor, which he explains alongside the bet and [[https://web.archive.org/web/20110720061136/http://www.kurzweilai.net/a-wager-on-the-turing-test-why-i-think-i-will-win|here]]. Kapor also explains his reasoning alongside the bet, though it is nonspecific about timing to the extent that it isn’t clear whether he thinks AI will ever occur, which is why he isn’t included in this list.
  * [[http://archive.today/s45ly|Peter Voss, Increased Intelligence, Improved Life]] [[http://vimeo.com/33959613|(video)]] (2007) predicts less than ten years and probably less than five, based on the perception that other researchers pursue unnecessarily difficult routes, and that shortcuts probably exist.
  * [[http://www.scientificamerican.com/article/rise-of-the-robots/|Moravec, The Rise of the Robots]] (2009) predicts AI rivalling human intelligence well before 2050, based on progress in hardware, estimates of how much hardware is equivalent to a human brain, and comparison with animals whose brains appear to be equivalent to present-day computers. Moravec made similar predictions in the 1988 book //Mind Children//.
  * [[http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/|Legg, Tick, Tock, Tick Tock Bing]] (2009) predicts 2028 in expectation, based on details of progress and what remains to be done in neuroscience and AI. He reaffirmed this prediction in [[http://www.vetta.org/2011/12/goodbye-2011-hello-2012/|2012]].
  * [[ai_timelines:allen_the_singularity_isnt_near|Allen, The Singularity Isn’t Near]] (2011) criticizes Kurzweil’s [[ai_timelines:kurzweil_the_singularity_is_near|prediction]] of a singularity around 2045, based mostly on disagreeing with Kurzweil about rates of progress in brain science and AI.
  * [[http://www.hutter1.net/publ/singularity.pdf|Hutter, Can Intelligence Explode?]] (2012) works with a prediction of not much later than the 2030s, based on hardware extrapolation and the belief that software will not lag far behind.
  * [[http://consc.net/papers/singularity.pdf|Chalmers, The Singularity: A Philosophical Analysis]] (2010) guesses that human-level AI is more likely than not this century. He points to several early estimates, but expresses skepticism about hardware extrapolation, based on the apparent algorithmic difficulty of AI. He argues that AI should be feasible within centuries (conservatively), based on the possibility of brain emulation and the past success of evolution.
  * [[http://intelligence.org/files/PredictingAGI.pdf|Fallenstein and Mennen, Predicting AGI: What can we say when we know so little?]] (2013) suggest using a Pareto distribution to model the time until we get a clear sign that human-level AI is imminent. They get a median estimate of about 60 years, depending on the exact distribution (based on an estimate of 60 years since the beginning of the field); a minimal sketch of this style of calculation appears after this list.
  * [[http://www.motherjones.com/media/2013/05/robots-artificial-intelligence-jobs-automation|Drum, Welcome, Robot Overlords. Please Don’t Fire Us?]] (2013) argues for around 2040, based on hardware extrapolation.
  * [[http://intelligence.org/2013/05/15/when-will-ai-be-created/|Muehlhauser, When will AI be Created?]] (2013) argues for uncertainty, based on surveys being unreliable, hardware trends being insufficient without software, and software being potentially jumpy.
  * [[http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies|Bostrom, Superintelligence]] (2014) concludes that ‘…it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century and that it has a non-trivial chance of being developed considerably sooner or much later…’, based on expert surveys and interviews, such as [[http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/|these]].
  * [[http://futureoflife.org/PDF/rich_sutton.pdf|Sutton, Creating Human Level AI: How and When?]] (2015) places a 50% chance on human-level AI by 2040, based largely on hardware extrapolation and the view that software has a 1/2 chance of following within a decade of sufficient hardware.
  * [[https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP|Cotra, 2020 Draft Report on Biological Anchors]] (2020) predicts a median of 2050 for when someone will be able to develop transformative AI, by extrapolating trends in hardware, spending, and algorithmic progress and using biology-inspired estimates of the effective compute required to train transformative AI. See also [[https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines|discussion on LessWrong]] and [[https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines|Cotra’s 2022 update]].
  * [[https://www.lesswrong.com/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute|Kokotajlo, Fun with +12 OOMs of Compute]] (2021) argues that transformative AI will probably appear before 2040, because it could probably be made with training compute of 10^29 floating-point operations and there will probably be training runs that large by 2040 (the second sketch after this list illustrates the shape of this extrapolation).
  * [[https://www.openphilanthropy.org/research/semi-informative-priors-over-ai-timelines/|Davidson, Semi-informative priors over AI timelines]] (2021) predicts a 20% chance of AGI by 2100, using only facts like how long humanity has been working on AGI, how long it has taken to solve other problems in AGI's reference class, and inputs to AI.
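
As a rough illustration of the Fallenstein and Mennen style of estimate above: if the total time from the founding of the AI field until a clear sign of imminent human-level AI is modeled as Pareto-distributed, then conditioning on the roughly 60 years already elapsed (their figure) gives a simple closed form for the median remaining time. The sketch below is not their code; the shape parameter ''alpha'' is an assumption used only for illustration.

<code python>
# Minimal sketch (not Fallenstein & Mennen's code) of a Pareto-style timing estimate.
# Model: P(total time T > t) = (years_elapsed / t) ** alpha for t >= years_elapsed,
# i.e. we condition on no clear sign of imminent human-level AI appearing in the
# first years_elapsed years. The shape parameter alpha is an illustrative assumption.

def median_years_remaining(years_elapsed: float, alpha: float = 1.0) -> float:
    """Median additional waiting time under the Pareto survival function above."""
    # The conditional median total time t_m solves (years_elapsed / t_m) ** alpha = 1/2,
    # so t_m = years_elapsed * 2 ** (1 / alpha); subtract the time already elapsed.
    return years_elapsed * (2 ** (1.0 / alpha) - 1.0)

if __name__ == "__main__":
    # With 60 years elapsed and alpha = 1, the median remaining time is 60 years,
    # matching the roughly-60-year median quoted above; other alphas shift it.
    for alpha in (0.5, 1.0, 2.0):
        print(f"alpha = {alpha}: median of about {median_years_remaining(60.0, alpha):.0f} more years")
</code>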
-<a href="http://www.nickbostrom.com/2050/outsmart.html">Bostrom, When Machines Outsmart Humans</a>(2000) argues that we should take seriously the prospect of human-level AI before 2050, based on hardware trends and feasibility of uploading or software based on understanding the brain. +
-                </div></li> +
-<li><div class="li"> +
-<a href="/doku.php?id=ai_timelines:kurzweil_the_singularity_is_near" title="Kurzweil, The Singularity is Near">Kurzweil, The Singularity is Near</a><sup> </sup><a href="http://hfg-resources.googlecode.com/files/SingularityIsNear.pdf">(pdf)</a> (2005) predicts 2029, based mostly on hardware extrapolation and the belief that understanding necessary for software is growing exponentially. He also made a <a href="http://longbets.org/1/">bet</a> with Mitchell Kapor, which he explains along with the bet and <a href="https://web.archive.org/web/20110720061136/http://www.kurzweilai.net/a-wager-on-the-turing-test-why-i-think-i-will-win">here</a>. Mitchell also explains his reasoning alongside the bet, though it nonspecific about timing to the extent that it isn’t clear whether he thinks AI will ever occur, which is why he isn’t included in this list. +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://archive.today/s45ly">Peter Voss, Increased Intelligence, Improved Life</a> <a href="http://vimeo.com/33959613">(video)</a> (2007) predicts less than ten years and probably less than five, based on the perception that other researchers pursue unnecessarily difficult routes, and that shortcuts probably exist. +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://www.scientificamerican.com/article/rise-of-the-robots/">Moravec, The Rise of the Robots</a> (2009) predicts AI rivalling human intelligence well before 2050, based on progress in hardware, estimating how much hardware is equivalent to a human brain, and comparison with animals whose brains appear to be equivalent to present-day computers. Moravec made similar predictions in the 1988 book <em>Mind Children</em>+
-                </div></li> +
-<li><div class="li"> +
-<a href="http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/">Legg, Tick, Tock, Tick Tock Bing</a> (2009) predicts 2028 in expectation, based on details of progress and what remains to be done in neuroscience and AI. He agreed with this prediction in <a href="http://www.vetta.org/2011/12/goodbye-2011-hello-2012/">2012</a>+
-                </div></li> +
-<li><div class="li"> +
-<a href="/doku.php?id=ai_timelines:allen_the_singularity_isnt_near" title="Allen, The Singularity Isn’t Near">Allen, The Singularity Isn’t Near</a><sup> </sup>(2011) criticizes Kurzweil’s <a href="/doku.php?id=ai_timelines:kurzweil_the_singularity_is_near" title="Kurzweil, The Singularity is Near">prediction</a> of a singularity around 2045, based mostly on disagreeing with Kurzweil on rates of brain science and AI progress. +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://www.hutter1.net/publ/singularity.pdf">Hutter, Can Intelligence Explode</a> (2012) uses a prediction of not much later than the 2030s, based on hardware extrapolation, and the belief that software will not lag far behind. +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://consc.net/papers/singularity.pdf">Chalmers (2010)</a> guesses that human-level AI is more likely than not this century. He points to several early estimates, but expresses skepticism about hardware extrapolation, based on the apparent algorithmic difficulty of AI. He argues that AI should be feasible within centuries (conservatively) based on the possibility of brain emulation, and the past success of evolution. +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://intelligence.org/files/PredictingAGI.pdf">Fallenstein and Mennen</a> (2013) suggest using a Pareto distribution to model time until we get a clear sign that human-level AI is imminent. They get a median estimate of about 60 years, depending on the exact distribution (based on an estimate of 60 years since the beginning of the field). +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://www.motherjones.com/media/2013/05/robots-artificial-intelligence-jobs-automation">Drum, Welcome, Robot Overlords. Please Don’t Fire Us?</a> (2013) argues for around 2040, based on hardware extrapolation. +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://intelligence.org/2013/05/15/when-will-ai-be-created/">Muehlhauser, When will AI be Created?</a> (2013) argues for uncertainty, based on surveys being unreliable, hardware trends being insufficient without software, and software being potentially jumpy. +
-                </div></li> +
-<li><div class="li"> +
-<a href="http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies">Bostrom, Superintelligence</a> (2014) concludes that ‘…it may be reasonable to believe that human-level machine intelligence has a fairly sizeable chance of being developed by mid-century and that it has a non-trivial chance of being developed considerably sooner or much later…’, based on expert surveys and interviews, such as <a href="http://aiimpacts.wpengine.com/muller-and-bostrom-ai-progress-poll/" title="Müller and Bostrom AI Progress Poll">these</a>+
-                </div></li> +
-<li><div class="li"> +
-<a href="http://futureoflife.org/PDF/rich_sutton.pdf">Sutton, Creating Human Level AI: How and When?</a> (2015) places a 50% chance on human-level AI by 2040, based largely on hardware extrapolation and the view that software has a 1/2 chance of following within a decade of sufficient hardware. +
-                </div></li> +
-</ul> +
-</HTML>+
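
The Kokotajlo and Cotra entries above both involve extrapolating training compute forward. The sketch below shows the general shape of that step under assumptions that are ours rather than the authors': a starting point of roughly 3×10^23 floating-point operations for the largest training run of 2020, and a constant doubling time for frontier training compute.

<code python>
# Rough illustration (our assumptions, not Kokotajlo's or Cotra's numbers) of the
# extrapolation step: starting from an assumed ~3e23-FLOP training run in 2020 and
# an assumed constant doubling time for frontier training compute, find the year
# in which a 1e29-FLOP training run would first occur.
import math

def year_reaching(target_flop: float,
                  start_flop: float = 3e23,      # assumption: roughly GPT-3 scale, 2020
                  start_year: float = 2020.0,
                  doubling_time_years: float = 1.0) -> float:  # assumption
    """Year at which frontier training compute reaches target_flop,
    assuming exponential growth from (start_year, start_flop)."""
    doublings_needed = math.log2(target_flop / start_flop)
    return start_year + doublings_needed * doubling_time_years

if __name__ == "__main__":
    # Assumed doubling times between six months and two years put a 1e29-FLOP run
    # somewhere between roughly 2029 and 2057; the faster assumptions land before 2040.
    for dt in (0.5, 1.0, 2.0):
        print(f"doubling time {dt} yr: around {year_reaching(1e29, doubling_time_years=dt):.0f}")
</code>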
  
  
  