arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:start [2023/02/12 04:40] katjagrace
====== Will malign AI agents control the future? ======

//This page is under active work and may be updated soon.//
The balance of evidence suggests a substantial risk of malign AI agents controlling the future, though none of the arguments that we know of appears to be strongly compelling.
===== Scenario =====

This appears to be the most discussed [[arguments_for_ai_risk:
This scenario includes sub-scenarios where the above process happens fast or slow, or involves different kinds of agents, or different specific routes, etc.
===== Arguments =====
Arguments that this scenario will occur:
  * **AI developments will produce powerful agents with undesirable goals** \\ \\ //(Main article: [[arguments_for_ai_risk: