
Argument for AI x-risk from loss of control through speed

This page is incomplete, under active work, and may be updated soon.

The argument for AI x-risk from loss of control through speed holds that advanced artificial intelligence poses an existential risk to humanity because it may accelerate events beyond the speed at which humans can exert meaningful control.

Details

Argument

Car stopping distances include both a reaction distance and a braking distance: even perfect brakes aren't enough if the situation doesn't give the human time to react. 1)
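To make the analogy concrete (an illustrative sketch, not from the original page, assuming standard constant-deceleration kinematics), total stopping distance decomposes as

  d_stop = v · t_r + v² / (2a)

where v is the initial speed, t_r the driver's reaction time, and a the braking deceleration. For example, at v = 30 m/s with t_r = 1.5 s and a = 8 m/s², the reaction distance is 45 m and the braking distance 56.25 m. Better brakes shrink only the second term; the reaction term grows with speed however good the brakes are, which is the property the argument attributes to fast-moving AI-driven events.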

Summary:

  1. Advancing AI may tend to produce very rapid changes in available AI technology, in other technologies, and in society
  2. Faster changes reduce humans' ability to exert meaningful control over events, because humans need time to make non-random choices. Such control includes the normal mechanisms for maintaining safety, such as noticing problems when they appear and responding before they grow large, adjusting course, foreseeing problems before they arise and preparing for them, and understanding a situation well before choosing a response.
  3. The pace of relevant events could become so fast as to allow for only negligible human involvement
  4. If humans are not continually involved in choosing the future, the future is likely to be bad by human lights

Different versions of the argument

This argument may work with several versions of speed:

  1. AI systems will likely act much faster than the human activity they replace: this is a form of the 'Argument from loss of control via superiority'
  2. New AI systems will be developed much faster than similarly impactful technologies have been in the past
  3. AI systems will produce new non-AI technologies (e.g. weapons) much faster than similarly impactful non-AI technologies have previously been produced
  4. Technological changes will lead to changes in society much faster than in the past

Key counterarguments

  • It is unclear that AI technology will generally accelerate progress much more than other classes of technology
  • It is unclear that intentional steering plays a large role in civilization-level survival
  • AI technologies will likely speed up processes for detecting and responding to risks, as well as processes that bring about new risks
  • The burden of proof should be high for expecting an extremely rare event such as the destruction of humanity (as opposed to smaller scales of catastrophe)
  • This argument also seems to support concern about a wide range of technologies, and it is unclear whether it predicts which of them are actually worth worrying about.

Discussion of this argument elsewhere

Joe Carlsmith (2021)

By contrast, if there is very little calendar time between the first significant warning shots and the development of highly capable, strategically-aware agents, there will be less time for the evidence that warning shots provide to be reflected in the world’s AI-related research and decision-making. This is one of the worrying features of scenarios where frontier capabilities escalate very rapidly.

Contributors

Primary author: Katja Grace

Other authors: Nathan Young, Josh Hart

Suggested citation:

Grace, K., Young, N., and Hart, J. (2024), Argument for AI x-risk from loss of control through speed, AI Impacts Wiki, https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_loss_of_control_through_speed

Notes
