Published 12 August, 2019; last updated 07 October, 2019
This is a list of the published arguments we know of that current methods in artificial intelligence will not lead to human-level AI.
We take ‘current methods’ to mean techniques for engineering artificial intelligence that are already known, involving no “qualitatively new ideas”.1 We have not precisely defined ‘current methods’; many of the works we cite refer to currently dominant methods such as machine learning (especially deep learning) and reinforcement learning.
By human-level AI, we mean AI with a level of performance comparable to humans. We have in mind the operationalization of ‘high-level machine intelligence’ from our 2016 expert survey on progress in AI: “Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers.”2
Because we are considering intelligent performance, we have deliberately excluded arguments that AI might lack certain ‘internal’ features, even if it manifests human-level performance.3, 4 We assume, concurring with Chalmers (2010), that “If there are systems that produce apparently [human-level intelligent] outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact.”5
We read well-known criticisms of current AI approaches of which we were already aware. Using these as a starting point, we searched for further sources and solicited recommendations from colleagues familiar with artificial intelligence.
We include arguments that sound plausible to us, or that we believe other researchers take seriously. Beyond that, we take no stance on the relative strengths and weaknesses of these arguments.
We cite works that plausibly support pessimism about current methods, regardless of whether the works in question (or their authors) actually claim that current methods will not lead to human-level artificial intelligence.
We do not include arguments that serve primarily as undercutting defeaters of positive arguments that current methods will lead to human-level intelligence. For example, we do not include arguments that recent progress in machine learning has been overstated.
These arguments might overlap in various ways, depending on how one understands them. For example, some of the challenges for current methods might be special instances of more general challenges.
These arguments are ‘inside view’ in that they look at the specifics of current methods.
Some researchers claim that there are capacities that are required for human-level intelligence but are difficult or impossible to engineer with current methods.8 Commonly cited capacities include:
These arguments are ‘outside view’ in that they look at “a class of cases chosen to be similar in relevant respects”21 to current artificial intelligence research, without looking at the specifics of current methods.
Robert Long and Asya Bergal contributed research and writing.
Featured image from www.extremetech.com.