Affordances for AI labs
Published 25 January 2023
This is a list of actions AI labs could take that may be strategically relevant (along with some consequences or characteristics of possible actions).
List
Deploy an AI system
Pursue AI capabilities
Pursue risky (and more or less alignable) systems
Pursue systems that enable risky (and more or less alignable) systems
Pursue weak AI that's mostly orthogonal to progress in risky stuff for a specific (strategically significant) task or goal
Do alignment (and related) research (or: decrease the alignment tax by doing technical research)
Advance global capabilities
Advance alignment (or: decrease the alignment tax) in ways other than doing technical research
Attempt to align a particular system (or: try to pay the alignment tax)
Interact with other labs
Coordinate with other labs (notably including coordinating to avoid risky systems)
Make themselves transparent to each other
Make themselves transparent to an external auditor
Merge
Effectively commit to share upsides
Affect what other labs believe on the object level (about AI capabilities or risk in general, or regarding particular memes)
Negotiate with other labs, or affect other labs' incentives or meta-level beliefs
Affect public opinion, media, and politics
Improve their culture or operations
Improve operational security
Affect attitudes of effective leadership
Affect attitudes of researchers
Make a plan for alignment (e.g., OpenAI's); share it; update and improve it; and coordinate with capabilities researchers, alignment researchers, or other labs if relevant
Make plans for what to do with powerful AI (e.g. a process for producing powerful aligned AI given some type of advanced AI system, or a specification for parties interacting peacefully)
Improve their ability to make themselves (selectively) transparent
Try to better understand the future, the strategic landscape, risks, and possible actions
Acquire resources (money, hardware, talent, influence over states, status/prestige/trust, etc.)
Affect other actors' resources
Capture scarce resources
Primary author: Zach Stein-Perlman