responses_to_ai:affordances:lab_affordances [2023/07/23 20:42] katjagrace
This is a list of actions AI labs could take that may be strategically relevant (or consequences or characteristics of possible actions).

===== List =====

  * Deploy an AI system
  * Pursue weak AI that's mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
    * This could enable or abate catastrophic risks besides unaligned AI
  * Do alignment (and related) research (or: decrease the [[uncategorized:start|alignment tax]] by doing technical research)
    * Including interpretability and work on solving or avoiding alignment-adjacent problems like [[https://
  * Advance global capabilities
  * Capture scarce resources
    * E.g., language data from language model users

//Author: Zach Stein-Perlman//