===== Purposes =====

This project aims to help answer the question, 'Under what circumstances can large concrete incentives to pursue technologies be overcome by forces motivated by uninternalized downsides, such as ethical concerns, risks to other people with no recourse, or risks the decisionmaker does not believe in?'

Answering this question is relevant to predicting how attitudes to potentially dangerous AI systems might affect the trajectory of AI development.
===== Outputs =====

Case studies are listed at [[responses_to_ai: