List of arguments that AI poses an existential risk

This page is under active work and may currently be incoherent or inaccurate.

This is a list of arguments that future progress in artificial intelligence may bring about the extinction of humankind or drastically limit human influence over the long-run future.

Arguments

This list contains arguments rather than evidence from opinion (e.g. surveys of expert views).

Risk from competent malign agents

(Main article: Argument for AI X-risk from competent malign agents)

Summary:

  1. Some advanced AI systems will very likely be built to pursue 'goals'.
  2. The aggregate goals of these systems may tend to be bad.
  3. Such systems will likely have the power to achieve their goals even against the will of humans.
  4. Thus, there is some chance that the future will proceed in opposition to long-run human welfare, because these advanced AI systems will succeed in their (bad) goals.

Key counter-arguments:

  • Human agents currently manage to coordinate even though they have very different goals, some of which are bad.

Second species argument

(Main article: Second species argument for AI x-risk)

Summary:

  1. Humans' dominance over other animal species in controlling the world (including driving some extinct) is primarily due to our superior cognitive abilities.
  2. Therefore, if another 'species' appears with cognitive abilities superior to those of humans, humans will lose control over the future, and the future will lose most of its value.
  3. AI will replace humans as the 'species' with the most advanced cognitive abilities.

Key counter-arguments:

  • Intelligence in animals does not appear to relate straightforwardly to dominance (insects are found everywhere but are not very intelligent).
  • The notion of 'cognitive abilities' is very broad, and it is not clear which of them a species must excel at in order to be dominant. A calculator is cognitively superior to all humans at doing sums, but calculators are not dominant.

Endorsed by:

Loss of control via inferiority

Summary:

  1. AI systems will ultimately be much more competent than humans.
  2. Thus most decisions will probably be allocated to AI systems, because they will make more competent decisions.
  3. Also, AI systems will often be able to take decisions not intentionally allocated to them, because their superior decision-making will allow them to manipulate situations markedly better than humans can.
  4. If AI systems make most decisions, humans will lose control of the future.
  5. If humans do not control the future, there is a high chance the future will be bad according to human values.

Versions:

Versions of this argument may appeal to different forms of AI superiority:

  1. quality of thought
  2. speed
  3. number
  4. copyability
  5. coordination
  6. transparency
  7. non-susceptibility to permanent death
  8. other AI advantages

Key counter-arguments:

  • Humans do not generally seem to become disempowered by possessing software that is far superior to them, even if it makes many 'decisions' in the process of carrying out their will.

Loss of control via speed

Summary:

  1. Advancing AI may tend to produce very rapid changes.
  2. Faster change reduces the ability of groups of humans to maintain safety, e.g. by reviewing and understanding new developments, responding to problems as they appear, adjusting course, preparing, and negotiating.
  3. The pace of events could become so fast as to leave room for only negligible human safety efforts.
  4. Human efforts to maintain safety may be important for avoiding arbitrarily bad situations.

Versions:

This argument may work with several versions of speed:

  1. AI systems will likely act much faster than the human activity they replace: this is a form of the 'Loss of control via inferiority' argument above.
  2. New AI systems will be developed much faster than similarly impactful technologies have been previously.
  3. AI systems will produce new non-AI technologies (e.g. weapons) much faster than similarly impactful non-AI technologies have previously been produced.
  4. Technological changes will lead to changes in society much faster than they have previously.

Key counter-arguments:

  • The burden of proof could be high for an implausible event such as the destruction of humanity (as opposed to smaller scales of catastrophe).
  • This argument also seems to support concern about a wide range of technologies; it is unclear whether it predicts which ones are worth worrying about.

Vulnerable world triggered by AI

Summary:

  1. Technologies may be possible that grant extreme destructive capabilities to small groups without granting other people defensive capabilities.
  2. Some people and collectives would like to destroy humanity, or would risk that for other aims.
  3. Advanced AI may be such a technology, or may produce such technologies.

Key counter-arguments:

  • These considerations appear to raise the chance of such a scenario, but not massively.

AI empowers lone actors on catastrophic projects

Summary:

  1. Until now, large projects have required the labor of many people.
  2. This disadvantages projects that most people would not work on, and more strongly disadvantages projects that others would work to end if they knew of them, e.g. highly destructive or already illegal projects.
  3. Projects to end humanity fall into both of these categories, and have thus been disadvantaged until now.
  4. AI will allow large projects to proceed with minimal human labor, and thus with the cooperation of very few people.
  5. Thus AI will make projects to destroy the world easier, raising the chance of one succeeding.

Key counter-arguments:

  • This argument posits an effect, but says nothing about its strength.

Normal people's utopias are catastrophic to one another

Summary:

  1. People who broadly agree on good outcomes within the current world may, given much more power, want outcomes that one another would consider catastrophic. E.g. a utilitarian and a Christian might both work to reduce poverty now, but with much more control, the utilitarian might replace humans with efficient pleasure-producing systems that have no knowledge of the real world, the Christian might dedicate most resources to glorifying God, and each might consider the other's future a radical loss.
  2. AI may empower some humans or human groups to bring about futures closer to what they desire.
  3. By 1, this may be catastrophic according to the values of most other humans.

Powerful technologies raise the chance of catastrophic accidents

(Main article: Argument for AI x-risk from potential for accidents and misuse)

Summary:

  1. Advanced AI could yield powerful destructive capabilities, such as new weapons, new computer viruses, and new routes to interfering with other actors.
  2. The prevalence of very powerful technologies raises the risk of cataclysmic accidents.

AI may produce or accelerate destructive multi-agent dynamics

(Main article: Argument for AI x-risk from destructive competition)

Summary:

  1. Competition can produce outcomes undesirable to all parties, through selection pressure favoring whatever behavior survives well, regardless of whether anyone wants it.
  2. AI may increase the intensity of relevant competitions.

Large impacts suggest large risks

(Main article: Argument for AI x-risk from large impacts)

Summary:

  1. AI development will have very large impacts, relative to the scale of human society.
  2. Large impacts generally raise the chance of large risks.

Proponents of forms of this argument:

Powerful systems we don't understand suggest large risks

Summary:

  1. So far, humans have developed technology largely through understanding the relevant mechanisms.
  2. AI systems developed in 2024 are created by repeatedly modifying randomly initialized systems in the direction of desired behaviors, rather than being built by hand, so the mechanisms the resulting systems use are not understood by their human developers (see the sketch after this list).
  3. Systems whose mechanisms are not understood are more likely to produce undesired consequences than well-understood systems.
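
To illustrate point 2, here is a minimal, hypothetical sketch in toy Python (not any real lab's training code) of building a system by nudging random parameters toward a desired behavior. Nothing in the loop requires the developer to understand how the final parameters produce that behavior:

    import random

    target = 42.0                      # the desired behavior: output 42
    param = random.uniform(-100, 100)  # start from a random 'system'

    def error(p):
        # Score how far the system's behavior is from the desired behavior.
        return (p - target) ** 2

    step = 0.01
    for _ in range(100_000):
        # Nudge the parameter in whichever direction reduces the error.
        if error(param + step) < error(param):
            param += step
        elif error(param - step) < error(param):
            param -= step

    print(f"learned parameter: {param:.2f}")  # ends near 42, never set directly

Real systems adjust billions of parameters via gradient descent rather than one parameter via trial and error, but the structure is the same: behavior is scored and adjusted, and no step requires understanding the mechanism the trained system ends up using.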

Key counter-arguments:

  • This is an argument that risks from the technology are unusually high, but it says nothing about the scale of the effects, so it does not imply that risks to humanity as a whole are non-negligible.

See also

Notes
