//Revisions compared: 2023/09/18 21:59 (harlanstewart) and 2024/05/27 18:49 (katjagrace).//
====== Is AI an existential risk to humanity? ======
//This page is under active work and may be updated soon.//
Note that arguments included here are not intended to be straightforwardly independent lines of evidence. They may instead represent different ways of conceptualizing and reasoning about the same underlying situation.
===== Argument =====

The [[arguments_for_ai_risk:
==== Scenario: malign AI agents control the future ====
  - AI performance may increase very fast due to inherent propensities to discontinuity
  - AI performance may increase very fast once AI contributes to AI progress, due to a feedback dynamic ('
+ | |||
+ | |||
+ | |||
+ | |||
There are various other [[arguments_for_ai_risk:
====== See also ======
  * [[arguments_for_ai_risk:
  * [[will_superhuman_ai_be_created:
  * [[arguments_for_ai_risk:
====== Notes ======