//This page is under active work and may be updated soon.//
The balance of evidence appears to suggest that AI poses a substantial existential risk, though none of the arguments that we know of appear to be conclusive.
===== Background =====
Many [[arguments_for_ai_risk:
For views of specific people working on AI, see [[arguments_for_ai_risk:
Note that arguments included here are not intended to be straightforwardly independent lines of evidence. They may instead represent different ways of conceptualizing and reasoning about the same underlying situation.
===== Argument =====
The [[arguments_for_ai_risk:

Advanced AI could conceivably threaten humanity'

There are various [[arguments_for_ai_risk:
==== Scenario: malign AI agents control the future ====
* [[arguments_for_ai_risk:
* [[will_superhuman_ai_be_created:
* [[arguments_for_ai_risk:
====== Notes ======