//Published 25 January 2023//

This is a list of actions AI labs could take that may be strategically relevant.

===== List =====
  * Deploy an AI system
  * Pursue weak AI that's mostly orthogonal to progress on risky capabilities, for a specific (strategically significant) task or goal
    * This could enable or abate catastrophic risks besides unaligned AI
  * Do alignment (and related) research (or: decrease the [[https://|alignment tax]] by doing technical research)
    * Including interpretability and work on solving or avoiding alignment-adjacent problems like [[https://
  * Advance global capabilities
  * Capture scarce resources
    * E.g., language data from language model users
+ | |||
+ | //Author: Zach Stein-Perlman// |