arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:interviews_on_the_strength_of_the_evidence_for_ai_risk_claims [2023/10/12 11:05] rosehadshar
  * We contacted AI researchers we knew (or knew of) at some prominent labs and AI safety organizations and in academia
  * This was not a systematic process, and we expect some substantive bias introduced both by who we reached out to and who agreed to be interviewed
  * All of the people we interviewed are concerned about AI risk and spend some or all of their work time working to reduce it
    * On the one hand, this means that they are experts in the topic we’re interested in (the evidence for AI risk claims)
    * On the other hand, it also means that they have an incentive to interpret the evidence as stronger rather than weaker