Incentives to create AI systems known to pose extinction risks

1. A person faces the choice of using an AI lawyer system for $100, or a human lawyer for $10,000. They believe that the AI lawyer system is poorly motivated and agentic, and that the movement of resources to such systems is gradually disempowering humanity, which they care about. Nonetheless, their action contributes only a small amount to this problem, and they are not willing to pay tens of thousands of dollars to avoid that harm.

2. A person faces the choice of deploying the largest-scale model to date, or trying to call off the project. They believe that at some scale a model will become an existential threat to humanity. However, they are very unsure at what scale, and estimate that the model in front of them has only a 1% chance of being the dangerous one. They value the future of humanity a lot, but not ten times more than their career, and calling off the project would be a huge hit to their career, for only 1% of the future of humanity. The sketch after this list makes the arithmetic explicit.
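Both examples share the same structure: the privately rational choice diverges from the collectively preferable one because the individual bears only a small share of the expected harm. Below is a minimal sketch of that arithmetic in Python. All utilities are hypothetical; the scenarios fix only the dollar amounts, the 1% probability, and the rough ratio between the value of humanity's future and the value of a career.

```python
# Hypothetical numbers illustrating why each actor's choice is individually
# rational even though they believe the aggregate outcome is harmful.

# Example 1: the AI lawyer. The client saves $9,900, while their marginal
# contribution to gradual human disempowerment is small (assumed value).
savings = 10_000 - 100
marginal_harm_to_client = 50            # assumed: a small share of a diffuse harm
use_ai_lawyer = savings > marginal_harm_to_client       # True

# Example 2: deploying the largest-scale model. Per the scenario, the
# researcher values humanity's future at less than ten times their career,
# and estimates a 1% chance that this model is the dangerous one.
value_of_career = 1.0                   # arbitrary units
value_of_humanity = 9.0                 # "a lot, but not ten times more"
p_dangerous = 0.01

expected_loss_if_deploy = p_dangerous * value_of_humanity   # 0.09 careers
loss_if_called_off = value_of_career                        # 1.0 careers
deploy = expected_loss_if_deploy < loss_if_called_off       # True

print(use_ai_lawyer, deploy)
```

Under these assumed utilities, deploying is the individually rational choice whenever the expected harm (probability times valuation) falls below the personal cost of abstaining, even though both actors would prefer a world in which no one made that choice.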