====== Quantitative Estimates of AI Risk ======
/*
COMMENT:
Things to add to this:
- https://optimists.ai/2023/11/28/ai-is-easy-to-control/
*/
// This page is in an early draft. It is very incomplete and may contain errors. //

<td><a href="https://www.lesswrong.com/posts/gZkYvA6suQJthvj4E/my-may-2023-priorities-for-ai-x-safety-more-empathy-more">My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI</a></td>
<td>No</td>
</tr>
<tr>
<td>Scott Aaronson</td>
<td>2023</td>
<td>The generative AI race, which started in earnest around 2016 or 2017 with the founding of OpenAI, to play a central causal role in the extinction of humanity</td>
<td>0.02</td>
<td><a href="https://scottaaronson.blog/?p=7064">Why am I not terrified of AI?</a></td>
<td>Yes</td>
</tr>