//This page is under active work and may be updated soon.//
  
The balance of evidence appears to suggest that AI poses a substantial existential risk, though none of the arguments we know of appears to be conclusive.
  
===== Background =====
Many [[arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai|thinkers]] believe advanced [[clarifying_concepts:artificial_intelligence|artificial intelligence]] (AI) poses a large threat to humanity's long-term survival or flourishing. Here we review the evidence.
  
For views of specific people working on AI, see [[arguments_for_ai_risk:views_of_ai_developers_on_risk_from_ai|this page]].
  
Note that arguments included here are not intended to be straightforwardly independent lines of evidence. They may instead represent different ways of conceptualizing and reasoning about the same underlying situation.
  * [[arguments_for_ai_risk:list_of_sources_arguing_for_existential_risk_from_ai|List of sources arguing for existential risk from AI]]
  * [[will_superhuman_ai_be_created:start|Will superhuman AI be created?]]
  * [[arguments_for_ai_risk:list_of_possible_risks_from_ai|List of possible risks from AI]]
  
====== Notes ======
  