This is a list of actions AI labs could take that may be strategically relevant (or consequences or characteristics of possible actions).
===== List =====
* Deploy an AI system
* Pursue capabilities
  * Pursue risky (and more or less alignable) systems
  * Pursue systems that enable risky (and more or less alignable) systems
  * Pursue weak AI that's mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
    * This could enable or abate catastrophic risks besides unaligned AI
* Do alignment (and related) research (or: decrease the [[https://|alignment tax]] by doing technical research)
  * Including interpretability and work on solving or avoiding alignment-adjacent problems like [[https://
* Advance global capabilities
* Make demos or public statements
* Release or deploy AI systems
* Improve their culture or operations
* Improve operational security
* Affect attitudes of effective leadership
* Affect attitudes of researchers
* Make a plan for alignment (e.g., [[https://
* Make plans for what to do with powerful AI (e.g. a process for producing powerful aligned AI given some type of advanced AI system, or a specification
* Improve their ability to make themselves (selectively) transparent
* Try to better understand the future, the strategic landscape, risks, and possible actions
* Capture scarce resources
  * E.g., language data from language model users
| + | |||
| + | //Primary author: Zach Stein-Perlman// | ||