This is a list of actions AI labs could take that may be strategically relevant (or consequences or characteristics of possible actions).
  
===== List =====
  
  * Deploy an AI system
      * Pursue weak AI that's mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
          * This could enable or abate catastrophic risks besides unaligned AI
  * Do alignment (and related) research (or: decrease the [[https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment|alignment tax]] by doing technical research)
      * Including interpretability and work on solving or avoiding alignment-adjacent problems like [[https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem|decision theory and strategic interaction]] and maybe [[http://acritch.com/arches/|delegation involving multiple humans or multiple AI systems]]
  * Advance global capabilities
  * Capture scarce resources
      * E.g., language data from language model users

//Author: Zach Stein-Perlman//