====== Will malign AI agents control the future? ======
  
//This page is under active work and may be updated soon.//

The balance of evidence suggests a substantial risk of malign AI agents controlling the future, though none of the arguments that we know of appears to be strongly compelling.

===== Scenario =====

This appears to be the most discussed [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start#scenarios_and_supporting_arguments|AI extinction scenario]]. In it:
  
  - AI systems are created which, a) have goals, and b) are each more capable than a human at many economically valuable tasks, including strategic decision making.
This scenario includes sub-scenarios where the above process happens fast or slow, involves different kinds of agents, or proceeds by different specific routes, etc.
  
===== Arguments =====

Arguments that this scenario will occur include:
  
  * **AI developments will produce powerful agents with undesirable goals** \\  \\ //(Main article: [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:start|Argument for AI X-risk from competent malign agents]])// \\  \\ **Summary**: At least some advanced AI systems will probably be 'goal-oriented', a powerful force in the world, and their goals will probably be bad by human lights. Powerful goal-oriented agents tend to achieve their goals. \\  \\ **Apparent status**: This seems to us the most suggestive argument, though not watertight. It is prima facie plausible, but the destruction of everything is a very implausible event, so the burden of proof is high.
  
  * **AI will replace humans as most intelligent 'species'** \\  \\ //(Main article: [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_most_intelligent_species|Argument for AI x-risk from most intelligent species]])// \\  \\ **Summary**: Humans' dominance over other species in controlling the world is due primarily to our superior cognitive abilities. If another 'species' with better cognitive abilities appeared, we should expect humans to lose control over the future, and therefore for the future to lose its value. \\  \\ **Apparent status**: Somewhat suggestive, though it does not appear to be valid as stated, since intelligence in animals does not generally correlate with dominance. A valid version may be possible to construct.
  
  * **AI agents will cause humans to 'lose control'** \\  \\ **Summary**: AI will ultimately be much faster and more competent than humans, so it will either, a) have to make most decisions, because waiting for humans would be too costly, or b) be able to make decisions whenever it wants, since humans will be relatively powerless due to their intellectual inferiority. Losing control of the future isn't necessarily bad, but it is prima facie a very bad sign. \\  \\ **Apparent status**: Suggestive, but as stated it does not appear to be valid. For instance, humans do not generally seem to become disempowered by possessing software that is far superior to them.
Last modified: 2023/02/12 04:43 by katjagrace