Arguments for AI x-risk from destructive multi-agent dynamics
This page is incomplete, under active work, and may be updated soon.
Arguments for AI x-risk from destructive multi-agent dynamics claim that advanced artificial intelligence will produce or worsen situations in which interacting agents are compelled toward outcomes against their own interests, such as destructive arms races, and that through this it poses an existential risk to humanity.
Details
Examples of destructive multi-agent dynamics worsened by advanced AI
These scenarios involve changes in destructive multi-agent dynamics, not necessarily at a scale that puts humanity at risk:
Companies compete to sell products, but consumers do not pay enough attention to be very sensitive to exact prices, which gives companies leeway to treat their employees well, even when the employees would work anyway. AI tools might allow much better product comparison, so that having the very lowest price becomes more important for a company's survival, and many workers get a worse experience for quite small gains to consumers.
AI agents competing to do tasks very cheaply might mean that the only way to do those tasks competitively is to use AI, and to use AI systems that do not have consciousness or
Companies are already compelled by competition to ignore bad side effects of their processes (e.g. a company could voluntarily use less dangerous pesticides, but that would cost more and so might reduce its business, leaving nobody better off, since the market would then shift to a different seller who uses the cheap pesticides). If all companies have means to seek out more effective ways to make money at the expense of others, then competition might force all of them to use those means. In the pesticide example, there would be pressure to use AI to find pesticides that work better, but also pesticides whose harms are harder to trace back to the company.
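The structure of this dynamic can be made concrete with a toy payoff matrix. The numbers below are invented for illustration, not taken from any source: each company's profit-maximizing choice is the cheap, more harmful pesticide regardless of what the other does, so both end up there even though the all-safe outcome is better for everyone, including third parties.

```python
# Toy model of the pesticide example above. All numbers are invented
# for illustration. Each company chooses "safe" (costlier, less harmful)
# or "cheap" (cheaper, more harmful); choosing "cheap" against a "safe"
# rival wins market share.

# Profits as (first company's profit, second company's profit).
PROFITS = {
    ("safe", "safe"):   (5, 5),
    ("safe", "cheap"):  (1, 8),   # the cheap seller undercuts and takes customers
    ("cheap", "safe"):  (8, 1),
    ("cheap", "cheap"): (4, 4),   # both save on costs but split the market as before
}
HARM_TO_OTHERS = {"safe": 0, "cheap": 3}  # external harm caused per company

def best_response(rival_choice):
    """The first company's profit-maximizing choice, ignoring external
    harm, given the rival's choice (the game is symmetric)."""
    return max(("safe", "cheap"),
               key=lambda mine: PROFITS[(mine, rival_choice)][0])

for rival in ("safe", "cheap"):
    print(f"if the rival plays {rival!r}, the best response is {best_response(rival)!r}")

# "cheap" is the best response either way, so (cheap, cheap) is the
# equilibrium, even though total welfare (profits minus external harm)
# is highest when both play "safe".
for profile, profits in PROFITS.items():
    welfare = sum(profits) - sum(HARM_TO_OTHERS[c] for c in profile)
    print(profile, "total welfare:", welfare)
```

With these illustrative numbers, both companies playing "cheap" leaves each with a lower profit than mutual "safe" and imposes harm on third parties, matching the "nobody better off" outcome described above.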
Counterarguments
Even if advanced AI worsens destructive multi-agent dynamics on net, it is not obvious that this would happen at a scale that poses an existential risk; we do not know of a case where such dynamics have produced an outcome that bad.
Advanced AI could also mitigate destructive multi-agent dynamics, and it is not obvious that this effect would be smaller. Some examples of this:
The equilibria of a nuclear situation may change (for better or worse) if AI agents are available to one or both sides to respond to attacks.
More information about relative position in arms races can reduce racing, because a certain amount of uncertainty about who will win is needed for racing to be sufficiently incentivized.
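As a rough illustration of this last point, here is a minimal simulation with assumed, illustrative parameters, in which a team decides whether to "race" for a fixed boost to its capability at a fixed cost: with full information about the rival's position, racing is only worthwhile when the race is close, whereas with no information it is almost always worthwhile, since the rival might be just ahead.

```python
# Toy model of how information about relative position affects the
# incentive to race. All parameters are assumed for illustration.
import random

random.seed(0)
BOOST = 0.1    # capability gained by racing (e.g. by cutting corners)
COST = 0.03    # cost of racing, in the same units as the prize
PRIZE = 1.0    # value of winning
TRIALS = 100_000

def races_with_full_info(own, rival):
    # With full information, racing pays only if it flips the outcome:
    # the rival is ahead, but by less than the boost.
    return (own < rival <= own + BOOST) and PRIZE > COST

def races_with_no_info(own):
    # With no information (rival assumed uniform on [0, 1]), the gain in
    # win probability from racing is the chance the rival lands in the
    # window that the boost would cover.
    gain_in_win_probability = min(own + BOOST, 1.0) - own
    return gain_in_win_probability * PRIZE > COST

full_info, no_info = 0, 0
for _ in range(TRIALS):
    own, rival = random.random(), random.random()
    full_info += races_with_full_info(own, rival)
    no_info += races_with_no_info(own)

print(f"racing rate with full information about the rival: {full_info / TRIALS:.2f}")
print(f"racing rate with no information about the rival:   {no_info / TRIALS:.2f}")
```

With these assumed numbers, teams race in nearly every scenario when they cannot observe the rival's position, but only in the roughly one-in-ten scenarios where the race is actually close when they can.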
Discussions of this argument elsewhere
Robin Hanson (2001):
"This model seems to confirm the intuition that machine intelligence has Malthusian implications for population and wages."
Contributors
Primary author: Katja Grace
Other authors: Nathan Young, Josh Hart
Suggested citation: