<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.aiimpacts.org/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>AI Impacts Wiki responses_to_ai:affordances</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/"/>
    <id>https://wiki.aiimpacts.org/</id>
    <updated>2026-05-17T16:30:09+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://wiki.aiimpacts.org/feed.php" />
    <entry>
        <title>Affordances for AI labs</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/responses_to_ai/affordances/lab_affordances?rev=1690145666&amp;do=diff"/>
        <published>2023-07-23T20:54:26+00:00</published>
        <updated>2023-07-23T20:54:26+00:00</updated>
        <id>https://wiki.aiimpacts.org/responses_to_ai/affordances/lab_affordances?rev=1690145666&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="responses_to_ai:affordances" />
        <content>&lt;pre&gt;
@@ -6,9 +6,9 @@
  
  ===== List =====
  
    * Deploy an AI system
-   * Pursue capabilities
+   * Pursue AI capabilities
      * Pursue risky (and more or less alignable) systems
        * Pursue systems that enable risky (and more or less alignable) systems
        * Pursue weak AI that&amp;#039;s mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
            * This could enable or abate catastrophic risks besides unaligned AI
@@ -34,14 +34,14 @@
    * Affect public opinion, media, and politics
        * Publish research
        * Make demos or public statements
        * Release or deploy AI systems
-   * Improve their culture or [[https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/keiYkaeoLHoKK4LYA|operational adequacy]]
+   * Improve their culture or operations
        * Improve operational security
        * Affect attitudes of effective leadership
        * Affect attitudes of researchers
-       * Make a plan for alignment (e.g., [[https://openai.com/blog/our-approach-to-alignment-research/|OpenAI&amp;#039;s]); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant
-       * Make a plan for what to do with powerful AI (e.g., [[https://arbital.com/p/cev/|CEV]] or some specification of [[https://forum.effectivealtruism.org/topics/long-reflection|long reflection]]), share it, update and improve it, and coordinate with other actors if relevant
+       * Make a plan for alignment (e.g., [[https://openai.com/blog/our-approach-to-alignment-research/|OpenAI&amp;#039;s]]); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant
+       * Make plans for what to do with powerful AI (e.g., a process for producing powerful aligned AI given some type of advanced AI system, or a specification for parties interacting peacefully)
        * Improve their ability to make themselves (selectively) transparent
    * Try to better understand the future, the strategic landscape, risks, and possible actions
    * Acquire resources (money, hardware, talent, influence over states, status/prestige/trust, etc.)
    * Affect other actors&amp;#039; resources
@@ -49,5 +49,5 @@
    * Plan, execute, or participate in [[https://arbital.com/p/pivotal/|pivotal acts]] or [[https://www.lesswrong.com/posts/etNJcXCsKC6izQQZj/pivotal-outcomes-and-pivotal-processes|processes]]
    * Capture scarce resources
        * E.g., language data from language model users
  
- //Author: Zach Stein-Perlman//
+ //Primary author: Zach Stein-Perlman//

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -6,9 +6,9 @@
  
  ===== List =====
  
    * Deploy an AI system
-   * Pursue capabilities
+   * Pursue AI capabilities
        * Pursue risky (and more or less alignable systems) systems
        * Pursue systems that enable risky (and more or less alignable) systems
        * Pursue weak AI that&amp;#039;s mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
            * This could enable or abate catastrophic risks besides unaligned AI
@@ -34,14 +34,14 @@
    * Affect public opinion, media, and politics
        * Publish research
        * Make demos or public statements
        * Release or deploy AI systems
-   * Improve their culture or [[https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/keiYkaeoLHoKK4LYA|operational adequacy]]
+   * Improve their culture or operations
        * Improve operational security
        * Affect attitudes of effective leadership
        * Affect attitudes of researchers
-       * Make a plan for alignment (e.g., [[https://openai.com/blog/our-approach-to-alignment-research/|OpenAI&amp;#039;s]); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant
-       * Make a plan for what to do with powerful AI (e.g., [[https://arbital.com/p/cev/|CEV]] or some specification of [[https://forum.effectivealtruism.org/topics/long-reflection|long reflection]]), share it, update and improve it, and coordinate with other actors if relevant
+       * Make a plan for alignment (e.g., [[https://openai.com/blog/our-approach-to-alignment-research/|OpenAI&amp;#039;s]]); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant
+       * Make plans for what to do with powerful AI (e.g. a process for producing powerful aligned AI given some type of advanced AI system, or a specification for parties interacting peacefully)
        * Improve their ability to make themselves (selectively) transparent
    * Try to better understand the future, the strategic landscape, risks, and possible actions
    * Acquire resources (money, hardware, talent, influence over states, status/prestige/trust, etc.)
    * Affect other actors&amp;#039; resources
@@ -49,5 +49,5 @@
    * Plan, execute, or participate in [[https://arbital.com/p/pivotal/|pivotal acts]] or [[https://www.lesswrong.com/posts/etNJcXCsKC6izQQZj/pivotal-outcomes-and-pivotal-processes|processes]]
    * Capture scarce resources
        * E.g., language data from language model users
  
- //Author: Zach Stein-Perlman//
+ //Primary author: Zach Stein-Perlman//

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Affordances for states</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/responses_to_ai/affordances/state_affordances?rev=1683496407&amp;do=diff"/>
        <published>2023-05-07T21:53:27+00:00</published>
        <updated>2023-05-07T21:53:27+00:00</updated>
        <id>https://wiki.aiimpacts.org/responses_to_ai/affordances/state_affordances?rev=1683496407&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="responses_to_ai:affordances" />
        <content>&lt;pre&gt;
@@ -19,4 +19,6 @@
    * Negotiate with other actors, or affect other actors&amp;#039; incentives or meta-level beliefs
    * Make agreements with other actors (notably including contracts and treaties)
    * Establish standards, norms, or principles
    * Make unilateral declarations (as an international legal commitment)
+ 
+ //Author: Zach Stein-Perlman//

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -19,4 +19,6 @@
    * Negotiate with other actors, or affect other actors&amp;#039; incentives or meta-level beliefs
    * Make agreements with other actors (notably including contracts and treaties)
    * Establish standards, norms, or principles
    * Make unilateral declarations (as an international legal commitment)
+ 
+ //Author: Zach Stein-Perlman//

&lt;/pre&gt;</summary>
    </entry>
</feed>
