
Differences

This shows you the differences between two versions of the page.


responses_to_ai:affordances:lab_affordances [2023/07/23 20:42]
katjagrace [Details]
responses_to_ai:affordances:lab_affordances [2023/07/23 20:54] (current)
katjagrace [List]
Line 8:

  * Deploy an AI system
-  * Pursue capabilities
+  * Pursue AI capabilities
      * Pursue risky (and more or less alignable) systems
      * Pursue systems that enable risky (and more or less alignable) systems
Line 36:

      * Make demos or public statements
      * Release or deploy AI systems
-  * Improve their culture or [[https://www.lesswrong.com/s/v55BhXbpJuaExkpcD/p/keiYkaeoLHoKK4LYA|operational adequacy]]
+  * Improve their culture or operations
      * Improve operational security
      * Affect attitudes of effective leadership
      * Affect attitudes of researchers
-      * Make a plan for alignment (e.g., [[https://openai.com/blog/our-approach-to-alignment-research/|OpenAI's]); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant
+      * Make a plan for alignment (e.g., [[https://openai.com/blog/our-approach-to-alignment-research/|OpenAI's]]); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant
-      * Make a plan for what to do with powerful AI (e.g., [[https://arbital.com/p/cev/|CEV]] or some specification of [[https://forum.effectivealtruism.org/topics/long-reflection|long reflection]]), share it, update and improve it, and coordinate with other actors if relevant
+      * Make plans for what to do with powerful AI (e.g., a process for producing powerful aligned AI given some type of advanced AI system, or a specification for parties interacting peacefully)
      * Improve their ability to make themselves (selectively) transparent
  * Try to better understand the future, the strategic landscape, risks, and possible actions
Line 51:

      * E.g., language data from language model users
  
-//Author: Zach Stein-Perlman//
+//Primary author: Zach Stein-Perlman//