====== Is AI an existential threat to humanity? ======
//This page is under active work and may be updated soon.//
The balance of evidence appears to suggest that AI poses a substantial existential risk, though none of the arguments that we know of appear to be conclusive.
===== Background =====
Many [[arguments_for_ai_risk:

For views of specific people working on AI, see [[arguments_for_ai_risk:
Note that arguments included here are not intended to be straightforwardly independent lines of evidence. They may instead represent different ways of conceptualizing and reasoning about the same underlying situation.
===== Argument =====

Advanced AI could conceivably threaten humanity's

There are various [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:

==== Scenario: malign AI agents control the future ====

//(Main article: [[arguments_for_ai_risk:
//
This scenario includes sub-scenarios where the above process happens fast or slow, or involves different kinds of agents, or different specific routes, etc.
Various

  * **AI developments will produce powerful agents with undesirable goals** \\ \\ //(Main article: [[arguments_for_ai_risk:

In light of [[https://

==== Scenario: bad human actors ====
Some people and collectives have goals whose fulfillment would be considered bad by most people. If advanced AI empowered those people disproportionately,
==== Scenario: new AI cognitive labor is misdirected, ====

//(Main article: [[arguments_for_ai_risk:

Advanced AI could yield powerful destructive capabilities such as new weapons or hazardous technologies. As well as being used maliciously (see previous section) or forcing well-meaning actors into situations where unfortunate risks are hard to avoid, as with nuclear weapons (see next section), these raise the risk of cataclysmic accidents, just by being used in error.
  - **Expert opinion expects non-negligible extinction risk**: in a large [[ai_timelines:
  - **AI will have large impacts**, which is heuristically indicative of risk. //(Main article: [[arguments_for_ai_risk:
===== Arguments for risk being higher, if it exists =====
These are arguments that, supposing there is some other reason to expect a risk at all, the risk may be larger or worse than expected.
====== See also ======
  * [[arguments_for_ai_risk:start|AI x-risk portal]]
  * [[arguments_for_ai_risk:
  * [[arguments_for_ai_risk:
  * [[will_superhuman_ai_be_created:start|Will superhuman AI be created?]]
  * [[arguments_for_ai_risk:
====== Notes ======