====== Is AI an existential risk to humanity? ======
  
//This page is under active work and may be updated soon.//
  
The balance of evidence appears to suggest that AI poses a substantial existential risk, though none of the arguments that we know of appears to be conclusive.
  
===== Background =====
  
Many [[arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai|thinkers]] believe advanced [[clarifying_concepts:artificial_intelligence|artificial intelligence]] (AI) poses a large threat to humanity's long-term survival or flourishing. Here we review the evidence.

For views of specific people working on AI, see [[arguments_for_ai_risk:views_of_ai_developers_on_risk_from_ai|this page]].
  
Note that arguments included here are not intended to be straightforwardly independent lines of evidence. They may instead represent different ways of conceptualizing and reasoning about the same underlying situation.
==== Scenario: malign AI agents control the future ====
Various [[https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:start#arguments|arguments]] are made for this scenario. The most prominent appears to be:
  
  * **AI developments will produce powerful agents with undesirable goals** \\  \\ //(Main article: [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:start|Argument for AI X-risk from competent malign agents]])// \\  \\ **Summary**: At least some advanced AI systems will probably be 'goal-oriented', a powerful force in the world, and their goals will probably be bad by human lights. Powerful goal-oriented agents tend to achieve their goals. \\  \\ **Apparent status**: This seems to us the most suggestive argument, though not watertight. It is prima facie plausible, but the destruction of everything is a very implausible event, so the burden of proof is high.
  
In light of these [[https://wiki.aiimpacts.org/doku.php?id=arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:start#arguments|arguments]], this scenario seems plausible but not guaranteed.
==== Scenario: AI empowers bad human actors ====
  
Some people and collectives have goals whose fulfillment would be considered bad by most people. If advanced AI empowered those people disproportionately, this could be destructive. Such empowerment could happen by bad luck, or because the situation systematically advantages unpopular values.
  * [[arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai|List of sources arguing for existential risk from AI]]
  * [[will_superhuman_ai_be_created:start|Will superhuman AI be created?]]
  * [[arguments_for_ai_risk:list_of_possible_risks_from_ai|List of possible risks from AI]]
  
====== Notes ======
  