====== Will malign AI agents control the future? ======

//This page is under active work and may be updated soon.//

The balance of evidence suggests a substantial risk of malign AI agents controlling the future, though none of the arguments we know of appears to be strongly compelling.

===== Scenario =====

This appears to be the most discussed extinction scenario. In it:
  - AI systems are created which, a) have goals, and b) are each more capable than a human at many economically valuable tasks, including strategic decision making.

This scenario includes sub-scenarios where the above process happens fast or slow, or involves different kinds of agents, or different specific routes, etc.

===== Arguments =====

Arguments that this scenario will occur:
  * **AI developments will produce powerful agents with undesirable goals** \\ \\ //(Main article: [[arguments_for_ai_risk: