====== Argument for AI x-risk from effective malign agents ======
  
//This page is incomplete, under active work and may be updated soon.//
  
If this argument is successful, then in conjunction with the view that [[will_superhuman_ai_be_created:start|superhuman AI will be developed]], it implies that humanity faces a large risk from artificial intelligence. This is evidence that it is a problem worthy of receiving resources, though this depends on the tractability of improving the situation (which depends in turn on the [[ai_timelines:start|timing of the problem]]), and on what other problems exist, none of which we have addressed here.

===== Primary author =====

Katja Grace

arguments_for_ai_risk/is_ai_an_existential_threat_to_humanity/will_malign_ai_agents_control_the_future/argument_for_ai_x-risk_from_competent_malign_agents/start.1676343286.txt.gz · Last modified: 2023/02/14 02:54 by katjagrace