//responses_to_ai:affordances:lab_affordances, last revised 2023/05/07 by zachsteinperlman//
  * Pursue weak AI that's mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
    * This could enable or abate catastrophic risks besides unaligned AI
  * Do alignment (and related) research (or: decrease the [[https://|alignment tax]] by doing technical research)
    * Including interpretability and work on solving or avoiding alignment-adjacent problems like [[https://
  * Advance global capabilities
  * Capture scarce resources
    * E.g., language data from language model users

//Author: Zach Stein-Perlman//