====== Will malign AI agents control the future? ======
//This page is under active work and may be updated soon.//

The balance of evidence suggests a substantial risk of malign AI agents controlling the future, though none of the arguments that we know of appears to be strongly compelling.

===== Details =====

This appears to be the most discussed extinction scenario. In it:
  - AI systems are created which, a) have goals, and b) are each more capable than a human at many economically valuable tasks, including strategic decision making.