===== Purposes =====
This project aims to help answer the question, 'Under what circumstances can large concrete incentives to pursue technologies be overcome by forces motivated by uninternalized downsides, such as ethical concerns, risks to other people with no recourse, or risks the decisionmaker does not believe in?'
Answering this question is relevant to predicting how attitudes to potentially dangerous AI systems might affect the trajectory of AI development.