Argument for AI x-risk from large impacts
Published 28 September, 2021; last updated 19 November, 2021
This page is incomplete, under active work, and may be updated soon.
The argument for AI x-risk from large impacts is an argument supporting the claim that advanced artificial intelligence poses an existential risk to humanity.
We understand the argument as follows:
1. The creation of advanced AI is likely to have a large impact on the world at the time, relative to other developments.
2. The creation of advanced AI is more likely to impact humanity’s long-term trajectory than other developments. (from 1)
3. Developments with large impacts on the future are more likely to be worth influencing, all else equal, than developments with smaller impacts.
4. One main way things might not be equal is that some developments may be easier to influence than others, but AI currently looks tractable to influence.
5. The creation of advanced AI is especially likely to be worth trying to influence, among other developments. (from 2, 3 and 4)
The following list of counterarguments and complications that we know of is neither exhaustive nor mutually exclusive:
This is a heuristic argument rather than a strict one, and it isn’t clear how reliable the heuristic is.
It isn’t clear what a development is. For instance, is global agriculture surviving for another year a development? Is ‘modern science’ a development? Why think of advanced AI as a development, rather than thinking of a particular algorithmic tweak as a development? If we lump more things together, the developments are bigger, so it isn’t clear what it means to say that one is especially big.
It isn’t clear what counterfactuals are being used. Is the claim that AI will have a large impact relative to a world without any AI? To a world that is similar to the current world forever? Why is the difference between AI and either of those (highly unlikely) scenarios relevant to the value of influencing the development and deployment of AI?
The heuristic connection between the scale of an action’s possible impact and the stakes of influencing it seems perhaps more useful for ruling out fundamentally unworthy issues than for comparing minor impacts on very large issues. For instance, one can use it to confidently tell that it isn’t worth spending half an hour deciding which flavor of jam to buy, whereas it seems much less useful for comparing ‘working on functional institutions’ to ‘working on AI’, where both are vaguely ‘big’ and your impact is likely to be ‘tiny’.
How easy different ‘developments’ are to influence is as big an issue as how much is at stake in influencing them, so it is better treated quantitatively, rather than as a possible qualitative defeater.
It isn’t clear that ‘influencing developments’ is a good way of viewing possibly high-value actions. For instance, it excludes many actions that are not in response to the actions of others.
We have seen discussion of this in the following places. The name is from Ngo.
Richard Ngo describes this as follows:
Argument from large impacts. Even if we’re very uncertain about what AGI development and deployment will look like, it seems likely that AGI will have a very large impact on the world in general, and that further investigation into how to direct that impact could prove very valuable.
Weak version: development of AGI will be at least as big an economic jump as the industrial revolution, and therefore affect the trajectory of the long-term future. See Ben Garfinkel’s talk at EA Global London 2018. Ben noted that to consider work on AI safety important, we also need to believe the additional claim that there are feasible ways to positively influence the long-term effects of AI development – something which may not have been true for the industrial revolution. (Personally my guess is that since AI development will happen more quickly than the industrial revolution, power will be more concentrated during the transition period, and so influencing its long-term effects will be more tractable.)
Strong version: development of AGI will make humans the second most intelligent species on the planet. Given that it was our intelligence which allowed us to control the world to the large extent that we do, we should expect that entities which are much more intelligent than us will end up controlling our future, unless there are reliable and feasible ways to prevent it. So far we have not discovered any.
(We treat the strong version as a separate argument, ‘Argument from most intelligent species’.)