Demonstrate AI risk (or provide evidence about it)
Negotiate with other labs, or affect other labs' incentives or meta-level beliefs
Affect public opinion, media, and politics
Publish research
Make demos or public statements
Release or deploy AI systems
Improve their culture or operations
Improve operational security
Affect attitudes of effective leadership
Affect attitudes of researchers
Make a plan for alignment (e.g., OpenAI's); share it; update and improve it; and, where relevant, coordinate with capabilities researchers, alignment researchers, or other labs
Make plans for what to do with powerful AI (e.g., a process for producing powerful aligned AI given some type of advanced AI system, or a specification for parties interacting peacefully)
Improve their ability to make themselves (selectively) transparent
Try to better understand the future, the strategic landscape, risks, and possible actions
Acquire resources (money, hardware, talent, influence over states, status/prestige/trust, etc.)
Affect other actors' resources
Affect the flow of talent between labs or between projects