arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:will_advanced_ai_be_agentic:start [2023/09/20 18:22] (current) katjagrace
//This page is a stub. It is likely to be expanded upon soon.//
+ | |||
+ | Reasons to expect that some superhuman AI systems will be goal-directed include: | ||
+ | |||
+ | - **Some goal-directed behavior is likely to be [[arguments_for_ai_risk: | ||
+ | - **Goal-directed entities [[arguments_for_ai_risk: | ||
+ | - **‘[[agency: | ||
+ |