Second species argument for AI x-risk

This page is incomplete, under active work, and may be updated soon.

The second species argument for AI x-risk is an argument that advanced artificial intelligence poses an existential risk to humanity, by analogy with the existential risk humans have posed to other species.

Details

Argument

Summary:

  1. Human dominance over other animal species in controlling the world (including driving them extinct) is primarily due to our superior cognitive abilities.
    1. Humans occupy a uniquely dominant role in the natural world: in any conflict between humans and other animals over something such as land use, human desires will tend to prevail.
    2. The reason for this situation is that humans are more cognitively capable.
  2. Therefore, if another 'species' appears with cognitive abilities superior to those of humans, humans will lose control over the future, and the future will lose most of its value.
  3. AI will replace humans as the 'species' with the most advanced cognitive abilities (see the sketch following this list).
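
The summary above is a short deductive chain: premises 1 and 2 together supply a conditional, premise 3 supplies its antecedent, and the conclusion follows by modus ponens. Below is a minimal sketch of that structure in Lean; the proposition names are illustrative choices, not standard terminology.

  -- Illustrative proposition names for the claims in the summary.
  variable (AISuperior HumansLoseControl FutureLosesValue : Prop)

  -- Premises 1–2 read together as a conditional, premise 3 as an
  -- assertion; the conclusion is a single application of modus ponens.
  example
      (h12 : AISuperior → HumansLoseControl ∧ FutureLosesValue)
      (h3  : AISuperior) :
      HumansLoseControl ∧ FutureLosesValue :=
    h12 h3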

Counterarguments

  • Intelligence in animals does not appear to relate straightforwardly to dominance. For instance, elephants are much more intelligent than moths, yet it is not clear that elephants have dominated moths in any sense.
  • The notion of 'cognitive abilities' is very broad, and it is not clear in which of these abilities a species must excel in order to be dominant. A calculator is cognitively superior to all humans at doing sums, but calculators are not dominant.
  • Human dominance over other species is plausibly not due to the cognitive abilities of individual humans, but rather to the human ability to communicate and store information through culture and artifacts.

Discussion elsewhere

Joe Carlsmith (2024):

The most succinct argument for AI risk, in my opinion, is the “second species” argument. Basically, it goes like this… Premise 1: AGIs would be like a second advanced species on earth, more powerful than humans… Conclusion: That’s scary… To be clear: this is very far from airtight logic. But I like the intuition pump. Often, if I only have two sentences to explain AI risk, I say this sort of species stuff. “Chimpanzees should be careful about inventing humans.”

Richard Ngo (2020):

But AIs will eventually become more capable than us at the types of tasks by which we maintain and exert that control. If they don’t want to obey us, then humanity might become only Earth's second most powerful “species”, and lose the ability to create a valuable and worthwhile future.

Stuart Russell (2019) 1):

the gorilla problem—specifically, the problem of whether humans can maintain their supremacy and autonomy in a world that includes machines with substantially greater intelligence.

Nick Bostrom (2015):

why haven't the chimpanzees flicked the off switch to humanity, or the Neanderthals? They certainly had reasons. We have an off switch, for example, right here. (Choking) The reason is that we are an intelligent adversary; we can anticipate threats and plan around them. But so could a superintelligent agent, and it would be much better at that than we are.

Sam Altman (2015):

The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.

Contributors

Primary author: Katja Grace

Other authors: Nathan Young, Josh Hart

Suggested citation:

Grace, K., Young, N., Hart, J. (2024), Second species argument for AI x-risk, AI Impacts Wiki, https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/second_species_argument_for_ai_xrisk

Notes

1)
Human Compatible: Artificial Intelligence and the Problem of Control