
Arguments for AI x-risk from catastrophic tools

This page is incomplete, under active work and may be updated soon.

Arguments for AI x-risk from catastrophic tools hold that advanced artificial intelligence will accelerate the development of other dangerous technologies, thus posing an existential risk to humanity.

Details

Argument

Summary:

  1. There appear to be non-AI technologies that would pose a risk to humanity if developed
  2. AI will markedly increase the speed of development of harmful non-AI technologies
  3. AI will markedly increase the breadth of access to harmful non-AI technologies
  4. Therefore AI development poses an existential risk to humanity

Versions

There are several different reasons to expect AI to differentially worsen the outcomes of technological progress, or differentially empower destructive users of technology.

Accelerated discovery of catastrophic technologies

(Main article: Argument for AI x-risk from acceleration of catastrophic technologies)

Summary:

  1. There are technologies we foresee posing a risk to humanity, as nuclear weapons have. For instance, molecular nanotechnology and engineered plagues.
  2. Given some concrete examples, there are likely other technologies that could pose a risk to humanity. For instance, powerful weapons, computer attacks, and human manipulation vectors.
  3. AI will likely hasten the production of all of those technologies.
  4. Thus AI will hasten risks to humanity.
  5. Risks to humanity are more threatening if faced sooner.

Selected counterarguments:

  • If AI contributes more general cognitive labor, it should also hasten the processes that mitigate such risks.

AI could trigger 'Vulnerable world'

(Main article: Argument for AI x-risk from vulnerable world)

Summary:

  1. Technologies may be possible that grant extreme destructive capabilities to small groups without granting other people defensive capabilities
  2. Some people and collectives would like to destroy humanity, or would risk that for other aims
  3. Advanced AI may be such a technology, or may produce such technologies

Selected counterarguments:

  • These arguments appear to raise the chance of such a scenario, but it is not clear how much
  • Many technologies contribute to accelerating technological progress; it's not clear AI is worse than others

AI differentially advances unpopular projects

(Main article: Argument for AI x-risk from differential advancement of unpopular projects)

Summary:

  1. Until now, large projects have required the labor of many people.
  2. This disadvantages projects most people would not work on, and more strongly disadvantages projects others would work to end, e.g. highly destructive or illegal projects.
  3. Projects to cause great destruction (e.g. genocide or destroying humanity) are in both of these categories, thus have been disadvantaged until now.
  4. AI will allow large projects to proceed with minimal human labor, and thus with the cooperation of very few people.
  5. Thus AI will make projects to destroy the world easier, raising the chance of one succeeding.

Selected counterarguments:

  • This argument identifies an effect, but says nothing about its strength

Powerful technologies raise the chance of catastrophic accidents

(Main article: Argument for AI x-risk from catastrophic accidents)

Summary:

  1. Advanced AI will accelerate the discovery of powerful technologies
  2. Prevalence of very powerful technologies raises the risk of catastrophic accidents

Discussions of this argument elsewhere

Dario Amodei (2023):

AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology.

Holden Karnofsky (2016):

One of the main ways in which AI could be transformative is by enabling/accelerating the development of one or more enormously powerful technologies. In the wrong hands, this could make for an enormously powerful tool of authoritarians, terrorists, or other power-seeking individuals or institutions. I think the potential damage in such a scenario is nearly limitless (if transformative AI causes enough acceleration of a powerful enough technology), and could include long-lasting or even permanent effects on the world as a whole.

Yoshua Bengio (2024):

Some experts have also expressed concern that general-purpose AI could be used to support the development and malicious use of weapons, such as biological weapons. There is no strong evidence that current general-purpose AI systems pose this risk. For example, although current general-purpose AI systems demonstrate growing capabilities related to biology, the limited studies available do not provide clear evidence that current systems can ‘uplift’ malicious actors to obtain biological pathogens more easily than could be done using the internet. However, future large-scale threats have scarcely been assessed and are hard to rule out.

Contributors

Primary author: Katja Grace

Other authors: Nathan Young, Josh Hart

Suggested citation:

Grace, K., Young, N., Hart, J., (2024), Argument for AI x-risk from catastrophic tools, AI Impacts Wiki, https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_catastrophic_tools
Last modified: 2024/08/09 01:11 by katjagrace