====== Argument for AI x-risk from effective malign agents ======
//This page is incomplete, under active work and may be updated soon.//
If this argument is successful, then in conjunction with the view that [[will_superhuman_ai_be_created:
+ | |||
+ | ===== Primary author ===== | ||
+ | |||
+ | Katja Grace | ||
+ |