====== Quantitative Estimates of AI Risk ======
/*
COMMENT:
Things to add to this:
- https://optimists.ai/2023/11/28/ai-is-easy-to-control/
*/

// This page is in an early draft. It is very incomplete and may contain errors. //
  
The table below includes estimates from individuals working in AI Safety of how likely very bad outcomes due to AI are.
  
Many of the individuals expressed [[https://en.wikipedia.org/wiki/Knightian_uncertainty|Knightian uncertainty]] when making their estimates, saying that their probability varies day-to-day, or that the estimate is currently in development, or that this is a very quick-and-dirty estimate. People who have explicitly said something like this include Katja Grace, Joseph Carlsmith, Peter Wildeford, Nate Soares, Paul Christiano, and others. These estimates should not be treated as definitive statements of these individuals' beliefs, but rather as glimpses of their thinking at that moment.
  
Each estimate includes:
  </tr>
  <tr>
    <td>Joseph Carlsmith</td>
    <td>2021</td>
    <td>Existential catastrophe by 2070 from advanced, planning, strategic AI</td>
  </tr>
  <tr>
    <td>Joseph Carlsmith</td>
    <td>2022</td>
    <td>Existential catastrophe by 2070 from advanced, planning, strategic AI</td>
    <td>No</td>
  </tr>
  <tr>
    <td>Andrew Critch</td>
    <td>2023</td>
    <td>Humanity not surviving the next 50 years</td>
    <td>0.8</td>
    <td><a href="https://www.lesswrong.com/posts/gZkYvA6suQJthvj4E/my-may-2023-priorities-for-ai-x-safety-more-empathy-more">My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI</a></td>
    <td>Yes</td>
  </tr>
  <tr>
    <td>Andrew Critch</td>
    <td>2023</td>
    <td>Humanity not surviving the next 50 years, without a major international regulatory effort to control how AI is used</td>
    <td>0.9+</td>
    <td><a href="https://www.lesswrong.com/posts/gZkYvA6suQJthvj4E/my-may-2023-priorities-for-ai-x-safety-more-empathy-more">My May 2023 priorities for AI x-safety: more empathy, more unification of concerns, and less vilification of OpenAI</a></td>
    <td>No</td>
  </tr>
  <tr>
    <td>Scott Aaronson</td>
    <td>2023</td>
    <td>The generative AI race, which started in earnest around 2016 or 2017 with the founding of OpenAI, playing a central causal role in the extinction of humanity</td>
    <td>0.02</td>
    <td><a href="https://scottaaronson.blog/?p=7064">Why am I not terrified of AI?</a></td>
    <td>Yes</td>
  </tr>
  
</table>
Different people use different framings to arrive at their estimate of AI risk. The most common framing seems to be to describe a model of what the risk from advanced AI looks like, assign probabilities to various components of that model, and then calculate the existential risk from AI on the basis of this model. Another framing is to describe various scenarios for the future of AI, assign probabilities to the various scenarios, and then add together the probabilities of the different scenarios to determine the total existential risk from AI. There are also some people who give a probability without describing what framing they used to get this number.
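As a rough illustration of how these two framings combine probabilities, here is a minimal sketch in Python. The step and scenario probabilities below are placeholders chosen for illustration only, not any individual's published numbers: the model framing multiplies conditional probabilities along a single causal pathway, while the scenario framing adds probabilities across mutually exclusive pathways.

<code python>
import math

# Minimal sketch of the two framings. All probabilities are illustrative
# placeholders, not any individual's published estimates.

# Model framing: assign a conditional probability to each step in a causal
# chain and multiply them together.
model_steps = {
    "advanced systems are built": 0.5,
    "they are deployed with misaligned objectives": 0.3,
    "misalignment leads to existential catastrophe": 0.2,
}
risk_from_model = math.prod(model_steps.values())

# Scenario framing: assign a probability to each mutually exclusive
# catastrophe scenario and add them up.
scenario_probs = {
    "misaligned AI takeover": 0.03,
    "catastrophic misuse of AI": 0.02,
    "other AI-driven catastrophe": 0.01,
}
risk_from_scenarios = sum(scenario_probs.values())

print(f"Model framing:    {risk_from_model:.1%}")      # 3.0%
print(f"Scenario framing: {risk_from_scenarios:.1%}")  # 6.0%
</code>

Note that each additional step in the model framing can only lower the total, while each additional scenario in the scenario framing can only raise it.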
  
Below is an example of each of these two framings, due to Joseph Carlsmith and Peter Wildeford, respectively. Both individuals have updated their estimates since publishing their framing, so neither probability breakdown reflects its author's most recent estimate of AI risk. They are included to show how these framings work.
  
==== Model ====
  
One example of using a model to calculate the existential risk from AI is due to Joseph Carlsmith. He calculates AI risk by 2070 by breaking it down in the following way:
  
  - It will become possible and financially feasible to build APS [advanced, planning, strategic] systems. **65%**