Published 23 January, 2014; last updated 07 August, 2022
‘Human-level AI’ refers to AI that can do roughly everything a human can do. Several variants of this concept are worth distinguishing.
Considerations in specifying ‘human-level AI’ more precisely:
The 2016 and 2022 Expert Surveys on Progress in AI use the following definition:
Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers. Ignore aspects of tasks for which being a human is intrinsically advantageous, e.g. being accepted as a jury member. Think feasibility, not adoption.
A ‘superhuman’ system is meaningfully more capable than a human-level system. In practice, the first system to reach human level is likely to already be superhuman in many respects.
In common usage, ‘human-level’ AI can mean either AI which can reproduce a human at any cost and speed, or AI which can replace a human (i.e. is as cheap as a human, and can be used in the same situations). Both are relevant for different issues. For instance, the ‘at any cost’ meaning is important when considering how people will respond to human-level artificial intelligence, or whether a human-level artificial intelligence will use illicit means to acquire resources and cause destruction. Human-level at human cost is the relevant concept when thinking about AI replacing humans in the labor market, the economy growing very fast, or legitimate AI development ramping up into an intelligence explosion.
Today few AI applications are more than an order of magnitude more expensive to run than a human, which suggests that an arbitrarily expensive human-level AI would come down in price to the cost of a human fairly quickly. However, some applications are more expensive, and even if an early human-level AI were only a few orders of magnitude more expensive than a human per unit of work, it might also be much slower. It is therefore hard to make useful inferences about the potential time delay between an arbitrarily expensive human-level AI and an AI which could replace a human, even if we assume hardware continues to fall in price at a steady rate.
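To see why the delay is so sensitive to these assumptions, here is a minimal back-of-the-envelope sketch in Python. The halving time and the initial cost multiples are hypothetical, illustrative numbers, not figures from this page or from any survey.

```python
import math

def years_until_human_cost(cost_multiple, halving_time_years):
    """Years until an AI that currently costs `cost_multiple` times as much as a
    human worker reaches human cost, if hardware price-performance halves every
    `halving_time_years` years (a stylized, assumed trend)."""
    return math.log2(cost_multiple) * halving_time_years

# Illustrative only: a 2-year halving time and hypothetical starting cost gaps.
for multiple in (10, 1_000, 100_000):
    years = years_until_human_cost(multiple, halving_time_years=2)
    print(f"{multiple:>7}x human cost -> roughly {years:.0f} years to parity")
```

Under these stylized assumptions the delay ranges from under a decade (a 10x cost gap) to several decades (a 100,000x gap), which is why the paragraph above declines to draw a firm conclusion about the gap between ‘human-level at any cost’ and ‘human-level at human cost’.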
As explained at the Superintelligence Reading Group:
Another thing to be aware of is the diversity of mental skills. If by ‘human-level’ we mean a machine that is at least as good as a human at each of these skills, then in practice the first ‘human-level’ machine will be much better than a human on many of those skills. It may not seem ‘human-level’ so much as ‘very super-human’.
We could instead think of human-level as closer to ‘competitive with a human’ – where the machine has some super-human talents and lacks some skills humans have. This is not usually used, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than a human: in this sense a microscope might be ‘super-human’. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically ‘human-level’.