====== Will malign AI agents control the future? ======

The balance of evidence suggests a substantial risk of malign AI agents controlling the future, though none of the arguments that we know of appears to be strongly compelling.
  
===== Scenario =====
  
This appears to be the most discussed [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start#scenarios_and_supporting_arguments|AI extinction scenario]]. In it:
This scenario includes sub-scenarios where the above process happens fast or slow, or involves different kinds of agents, or different specific routes, etc.
  
===== Arguments =====

Arguments that this scenario will occur include:
  
  * **AI developments will produce powerful agents with undesirable goals** \\  \\ //(Main article: [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:start|Argument for AI X-risk from competent malign agents]])// \\  \\ **Summary**: At least some advanced AI systems will probably be 'goal-oriented', a powerful force in the world, and their goals will probably be bad by human lights. Powerful goal-oriented agents tend to achieve their goals. \\  \\ **Apparent status**: This seems to us the most suggestive argument, though not watertight. It is prima facie plausible, but destroying everything is a very implausible event, so the burden of proof is high.