====== Lab affordances ======
//Published 25 January 2023//
  
This is a list of actions AI labs could take that may be strategically relevant (or consequences or characteristics of possible actions).
  
===== List =====
  
  * Deploy an AI system
      * Pursue weak AI that's mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
          * This could enable or abate catastrophic risks besides unaligned AI
  * Do alignment (and related) research (or: decrease the [[https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment|alignment tax]] by doing technical research)
      * Including interpretability and work on solving or avoiding alignment-adjacent problems like [[https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem|decision theory and strategic interaction]] and maybe [[http://acritch.com/arches/|delegation involving multiple humans or multiple AI systems]]
  * Advance global capabilities
  * Capture scarce resources
      * E.g., language data from language model users

//Author: Zach Stein-Perlman//