arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start [2023/05/30 01:03] katjagrace [Background]
arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start [2024/05/27 18:41] katjagrace [Scenarios and supporting arguments]
//This page is under active work and may be updated soon.//
The balance of evidence appears to suggest that AI poses a substantial existential risk, though none of the arguments that we know of appear to be conclusive.
===== Background =====
===== Scenarios and supporting arguments =====
Advanced AI could conceivably threaten humanity'
+ | |||
There are various [[arguments_for_ai_risk:
==== Scenario: malign AI agents control the future ====
  * [[arguments_for_ai_risk:
  * [[will_superhuman_ai_be_created:
  * [[arguments_for_ai_risk:
====== Notes ======