This page is incomplete, is under active work, and may be updated soon.
Arguments for AI x-risk from catastrophic tools say that advanced artificial intelligence will accelerate the development of other dangerous technologies, thereby posing an existential risk to humanity.
Summary:
There are several different reasons to expect AI to differentially worsen the outcomes of technological progress, or to differentially empower destructive users of technology.
(Main article: Argument for AI x-risk from acceleration of catastrophic technologies)
Summary:
Selected counterarguments:
(Main article: Argument for AI x-risk from vulnerable world)
Summary:
Selected counterarguments:
(Main article: Argument for AI x-risk from differential advancement of unpopular projects)
Summary:
Selected counterarguments:
(Main article: Argument for AI x-risk from catastrophic accidents)
Summary:
Dario Amodei (2023):
AI systems may facilitate extraordinary insights in broad swaths of many science and engineering disciplines. This will cause a revolution in technology and scientific discovery, but also greatly widen the set of people who can wreak havoc. In particular, I am concerned that AI systems could be misused on a grand scale in the domains of cybersecurity, nuclear technology, chemistry, and especially biology.
Holden Karnofsky (2016):
One of the main ways in which AI could be transformative is by enabling/accelerating the development of one or more enormously powerful technologies. In the wrong hands, this could make for an enormously powerful tool of authoritarians, terrorists, or other power-seeking individuals or institutions. I think the potential damage in such a scenario is nearly limitless (if transformative AI causes enough acceleration of a powerful enough technology), and could include long-lasting or even permanent effects on the world as a whole.
Yoshua Bengio (2024):
Some experts have also expressed concern that general-purpose AI could be used to support the development and malicious use of weapons, such as biological weapons. There is no strong evidence that current general-purpose AI systems pose this risk. For example, although current general-purpose AI systems demonstrate growing capabilities related to biology, the limited studies available do not provide clear evidence that current systems can ‘uplift’ malicious actors to obtain biological pathogens more easily than could be done using the internet. However, future large-scale threats have scarcely been assessed and are hard to rule out.
Primary author: Katja Grace
Other authors: Nathan Young, Josh Hart
Suggested citation:
Grace, K., Young, N., & Hart, J. (2024), Argument for AI x-risk from catastrophic tools, AI Impacts Wiki, https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_catastrophic_tools