<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.aiimpacts.org/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>AI Impacts Wiki: arguments_for_ai_risk</title>
    <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/"/>
    <id>https://wiki.aiimpacts.org/</id>
    <updated>2026-04-30T02:52:18+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://wiki.aiimpacts.org/feed.php" />
    <entry>
        <title>Incentives to create AI systems known to pose extinction risks</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/incentives_to_create_ai_systems_known_to_pose_extinction_risks?rev=1686260780&amp;do=diff"/>
        <published>2023-06-08T21:46:20+00:00</published>
        <updated>2023-06-08T21:46:20+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/incentives_to_create_ai_systems_known_to_pose_extinction_risks?rev=1686260780&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -49,9 +49,9 @@
  
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;ol&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;A person faces the choice of using an AI lawyer system for $100, or a human lawyer for $10,000. They believe that the AI lawyer system is poorly motivated and agentic, and that movement of resources to such systems is gradually disempowering humanity, which they care about. Nonetheless, their action only contributes a small amount to this problem, and they are not willing to raise tens of thousands of dollars to avoid that harm.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;A person faces the choice of using an AI lawyer system for \$100, or a human lawyer for \$10,000. They believe that the AI lawyer system is poorly motivated and agentic, and that movement of resources to such systems is gradually disempowering humanity, which they care about. Nonetheless, their action only contributes a small amount to this problem, and they are not willing to raise tens of thousands of dollars to avoid that harm.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;A person faces the choice of deploying the largest scale model to date, or trying to call off the project. They believe that at some scale, a model will become an existential threat to humanity. However they are very unsure at what scale, and estimate that the model in front of them only has a 1% chance of being the dangerous one. They value the future of humanity a lot, but not ten times more than their career, and calling off the project would be a huge hit, for only 1% of the future of humanity.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;/ol&amp;gt;
  &amp;lt;/HTML&amp;gt;
  

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Interviews on plausibility of AI safety by default</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/interviews_on_plausibility_of_ai_safety_by_default?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/interviews_on_plausibility_of_ai_safety_by_default?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -1 +1,42 @@
+ ====== Interviews on plausibility of AI safety by default ======
+ 
+ // Published 02 April, 2020; last updated 15 September, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This is a list of interviews on the plausibility of AI safety by default.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== Background =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;AI Impacts conducted interviews with several thinkers on AI safety in 2019 as part of a project exploring arguments for expecting advanced AI to be safe by default. The interviews also covered other AI safety topics, such as timelines to advanced AI, the likelihood of current techniques leading to AGI, and currently promising AI safety interventions.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== List =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;/doku.php?id=conversation_notes:conversation_with_ernie_davis&amp;quot;&amp;gt;Conversation with Ernie Davis&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;/doku.php?id=conversation_notes:conversation_with_rohin_shah&amp;quot;&amp;gt;Conversation with Rohin Shah&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;/doku.php?id=conversation_notes:conversation_with_paul_christiano&amp;quot;&amp;gt;Conversation with Paul Christiano&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;/doku.php?id=conversation_notes:conversation_with_adam_gleave&amp;quot;&amp;gt;Conversation with Adam Gleave&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;/doku.php?id=conversation_notes:conversation_with_robin_hanson&amp;quot;&amp;gt;Conversation with Robin Hanson&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>List of possible risks from AI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_possible_risks_from_ai?rev=1695747658&amp;do=diff"/>
        <published>2023-09-26T17:00:58+00:00</published>
        <updated>2023-09-26T17:00:58+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_possible_risks_from_ai?rev=1695747658&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -1,5 +1,5 @@
- ====== List of Possible Risks from AI ======
+ ====== List of possible risks from AI ======
  
  //Created 26 September, 2023. Last updated 26 September, 2023.//
  
  Many people are concerned about AI for reasons other than existential risk. Here we list some of these other concerns.((Many of these concerns were raised in response to a tweet by Katja Grace: [[https://twitter.com/KatjaGrace/status/1693873324747862370|https://twitter.com/KatjaGrace/status/1693873324747862370]].))

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>List of sources arguing against existential risk from AI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_sources_arguing_against_existential_risk_from_ai?rev=1686955194&amp;do=diff"/>
        <published>2023-06-16T22:39:54+00:00</published>
        <updated>2023-06-16T22:39:54+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_sources_arguing_against_existential_risk_from_ai?rev=1686955194&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -14,10 +14,9 @@
  &amp;lt;/HTML&amp;gt;
  
  
  
- ===== 
- List =====
+ ===== List =====
  
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Cegłowski, Maciej. “Superintelligence: The Idea That Eats Smart People.”&amp;lt;/strong&amp;gt; &amp;lt;em&amp;gt;Idle Words&amp;lt;/em&amp;gt; (blog). Accessed December 9, 2021. &amp;lt;a href=&amp;quot;https://idlewords.com/talks/superintelligence.htm&amp;quot;&amp;gt;https://idlewords.com/talks/superintelligence.htm&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>List of sources arguing for existential risk from AI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_sources_arguing_for_existential_risk_from_ai?rev=1686260360&amp;do=diff"/>
        <published>2023-06-08T21:39:20+00:00</published>
        <updated>2023-06-08T21:39:20+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_sources_arguing_for_existential_risk_from_ai?rev=1686260360&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -118,25 +118,16 @@
  
  ===== See also =====
  
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;ul&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;a href=&amp;quot;/doku.php?id=arguments_for_ai_risk:list_of_sources_arguing_against_existential_risk_from_ai&amp;quot;&amp;gt;List of sources arguing against existential risk from AI&amp;lt;/a&amp;gt;
- &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;a href=&amp;quot;https://aiimpacts.org/does-ai-pose-an-existential-risk/&amp;quot;&amp;gt;Is AI an existential threat to humanity?&amp;lt;/a&amp;gt;
- &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;/ul&amp;gt;
- &amp;lt;/HTML&amp;gt;
+   * [[arguments_for_ai_risk:list_of_sources_arguing_against_existential_risk_from_ai|List of sources arguing against existential risk from AI]]
+   * [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start|Is AI an existential risk to humanity?]]
  
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Primary author: Katja Grace&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
+ //Primary author: Katja Grace//
+ 
  
  
  ===== Notes =====
  
  
  

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Quantitative Estimates of AI Risk</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/quantitative_estimates_of_ai_risk?rev=1701454556&amp;do=diff"/>
        <published>2023-12-01T18:15:56+00:00</published>
        <updated>2023-12-01T18:15:56+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/quantitative_estimates_of_ai_risk?rev=1701454556&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -2,9 +2,9 @@
  /*
  COMMENT:
  Things to add to this:
  - https://optimists.ai/2023/11/28/ai-is-easy-to-control/
- - Katja&amp;#039;s EAG 2023 talk, where she says 19% chance
+ 
  */
  // This page is in an early draft. It is very incomplete and may contain errors. // 
  
  Some people who are working in AI Safety have published quantitative estimates for how likely they think it is that AI will pose an existential threat.

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Existential risk from AI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/start?rev=1721153127&amp;do=diff"/>
        <published>2024-07-16T18:05:27+00:00</published>
        <updated>2024-07-16T18:05:27+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/start?rev=1721153127&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -1,5 +1,5 @@
- ====== Existential risk from AI portal ======
+ ====== Existential risk from AI ======
  
  Pages on this topic include:
    * [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:start|Is AI an existential threat to humanity?]]
    * [[will_superhuman_ai_be_created:start|Will superhuman AI be created?]]

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Stuart Russell’s description of AI risk</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/stuart_russells_description_of_ai_risk?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/stuart_russells_description_of_ai_risk?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -1 +1,38 @@
+ ====== Stuart Russell’s description of AI risk ======
+ 
+ // Published 11 September, 2017; last updated 28 May, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Stuart Russell has argued that advanced AI poses a risk, because it will have the ability to make high quality decisions, yet may not share human values perfectly.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== Details =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Stuart Russell describes a risk from highly advanced AI &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/the-myth-of-ai#26015&amp;quot;&amp;gt;here&amp;lt;/a&amp;gt;. In short:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p style=&amp;quot;padding-left: 30px;&amp;quot;&amp;gt;The primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p style=&amp;quot;padding-left: 60px;&amp;quot;&amp;gt;1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p style=&amp;quot;padding-left: 60px;&amp;quot;&amp;gt;2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p style=&amp;quot;padding-left: 30px;&amp;quot;&amp;gt;A system that is optimizing a function of n variables, where the objective depends on a subset of size k&amp;amp;lt;n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.  This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Views of prominent AI developers on risk from AI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/views_of_ai_developers_on_risk_from_ai?rev=1721665469&amp;do=diff"/>
        <published>2024-07-22T16:24:29+00:00</published>
        <updated>2024-07-22T16:24:29+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/views_of_ai_developers_on_risk_from_ai?rev=1721665469&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category term="arguments_for_ai_risk" />
        <content>&lt;pre&gt;
@@ -1,7 +1,7 @@
  ====== Views of prominent AI developers on risk from AI ======
  
- //This page is in an early draft. It is incomplete and may contain errors.//
+ //This page is a draft, and out of date. It is incomplete and may contain errors.//
  
  People who have worked on creating artificial intelligence have a variety of views on risk from AI, both for the potential benefits and potential downsides.
  
  ===== Background =====

&lt;/pre&gt;</content>
    </entry>
</feed>
