arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start [2023/03/09 04:54] katjagrace
arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start [2024/05/28 19:23] katjagrace (links adapted because of a move operation)
====== Is AI an existential risk to humanity? ======
//This page is under active work and may be updated soon.//
The balance of evidence appears to suggest that AI poses a substantial existential risk, though none of the arguments that we know of appear to be conclusive.
===== Background =====

Many [[arguments_for_ai_risk:

For views of specific people working on AI, see [[arguments_for_ai_risk:

Note that arguments included here are not intended to be straightforwardly independent lines of evidence. They may instead represent different ways of conceptualizing and reasoning about the same underlying situation.
===== Arguments =====

//(Main article: [[arguments_for_ai_risk://

[[arguments_for_ai_risk:

  - Some advanced AI systems will very likely be 'goal-directed'.
  - The aggregate goals of these systems may be bad. (There are reasons to think this.)
  - Such systems will likely have the power to achieve their goals even against the will of humans.
  - Thus, there is some chance that the future will proceed in opposition to long-run human welfare, because these advanced AI systems will succeed in their (bad) goals.

===== Other arguments =====
Further arguments for AI posing an existential risk to humanity:
==== Scenario: malign AI agents control the future ====

//(Main article: [[arguments_for_ai_risk://

Arguments that this scenario may occur include:
  * **AI agents will cause humans to 'lose control'**
  * **Argument for loss of control from extreme speed** \\ \\ **Summary**: Advancing AI will tend to produce very rapid changes, either because of feedback loops in the automation of automation processes, or because automation tends to be faster than the human activity it replaces. Faster change reduces human ability to steer a situation, e.g. reviewing and understanding it, responding to problems as they appear, and preparing. In the extreme, the pace of socially relevant events could become so fast as to exclude human participation. \\ \\ **Apparent status**: Heuristically suggestive; however, the burden of proof should arguably be high for an implausible event such as the destruction of humanity. This argument also seems to support concern about a wide range of technologies.
In light of these arguments, this scenario appears plausible.
==== Scenario: AI empowers bad human actors ====
Competition can produce outcomes undesirable to all parties, through selection pressure for the success of any behavior that survives well. AI may increase the intensity of relevant competitions.
==== General evidence ====
This is evidence for existential risk from AI which doesn't fit into the scenarios above.
  - AI performance may increase very fast due to inherent propensities to discontinuity
  - AI performance may increase very fast once AI contributes to AI progress, due to a feedback dynamic (an 'intelligence explosion')
There are various other [[arguments_for_ai_risk:

====== Conclusion ======

In light of [[arguments_for_ai_risk:
====== See also ======

  * [[arguments_for_ai_risk:
  * [[will_superhuman_ai_be_created:
  * [[arguments_for_ai_risk:

====== Notes ======