====== AI Safety Arguments Affected by Chaos ======

//Created 31 March, 2023. Last updated 31 March, 2023.//

//This page is under review and may be updated soon.//
  
Chaos theory allows us to show that some predictions cannot be reliably made, even using arbitrary intelligence. Some things about human brains seem to be in that category, which affects how advanced AI might interact with humans.
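
To see concretely what this kind of intrinsic unpredictability looks like, here is a minimal sketch using the logistic map, a standard toy chaotic system (the map and parameter choices are illustrative, not part of the original argument). Two trajectories that start closer together than any physical measurement could distinguish become completely uncorrelated within a few dozen steps:

<code python>
# Sensitive dependence on initial conditions in the logistic map:
# x_{n+1} = r * x_n * (1 - x_n), which is chaotic at r = 4.
r = 4.0
x, y = 0.2, 0.2 + 1e-12   # two starting points differing by 10^-12

for n in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if (n + 1) % 10 == 0:
        print(f"step {n+1:2d}: |x - y| = {abs(x - y):.3e}")

# The separation grows roughly like 2^n (Lyapunov exponent ln 2), so the
# 10^-12 initial uncertainty swamps the prediction after about 40 steps,
# no matter how much intelligence is applied to the forecast.
</code>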
  
A major reason why we might care about how much headroom there is above human intelligence is to attempt to understand takeover scenarios. If a superintelligent AI were to try to wrest control of the future away from humanity, how much chance would we have of preventing it? If humans were close to the ceiling on lots of skills at this point, perhaps aided by narrow AI, then we might not have that much of a disadvantage. If humans were far from the ceiling on many important skills, then we would expect to be at a serious disadvantage.((Eliezer Yudkowsky. //My Childhood Role Model.// LessWrong. (2008) [[https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model]].)) An argument to this effect has been made by Barak and Edelman, and is discussed below.

For many tasks, having a high skill level requires you to be able to make predictions about what will happen in the future, contingent on the choices you make. Chaos theory provides a way to prove that making reliable predictions about certain things is impossible for an arbitrary intelligence given a small amount of initial uncertainty. For these predictions, the skill ceiling is low. It is often still possible to make a prediction about something, perhaps the statistics of the motion or perhaps something else even less related to the original question, but not to predict what exactly will happen in the future.((What sorts of predictions can and cannot be made when there is chaos is discussed in Section 7 of the accompanying report. \\ Heninger & Johnson. //Chaos and Intrinsic Unpredictability.// AI Impacts. [[http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf]].)) Whenever predicting chaotic motion is important, we should not expect that AI will be able to perform arbitrarily well.
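
A hedged illustration of the distinction drawn above: for the same toy map, the long-run histogram of states (a statistical prediction) is robust to the initial uncertainty that ruins the pointwise prediction.

<code python>
# Statistics can be predictable even when trajectories are not: long-run
# histograms of logistic-map states from two different starting points
# agree closely, even though the trajectories themselves diverge.
r = 4.0

def state_histogram(x0, steps=200_000, bins=10):
    counts = [0] * bins
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        counts[min(int(x * bins), bins - 1)] += 1
    return [c / steps for c in counts]

h1 = state_histogram(0.2)
h2 = state_histogram(0.7)
print(max(abs(a - b) for a, b in zip(h1, h2)))  # small: same statistics
</code>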
  
Along with proving the existence of some skill ceilings, chaos theory might also help us better understand the skill landscape: How much intelligence is required to achieve a particular skill level? Knowing what the steepness of this slope is (the marginal improvement of skill with additional intelligence) would inform expectations of takeoff speed, and what we might expect from interactions with AIs which are slightly more intelligent than humans.

The answer obviously depends on what skill we are looking at. Some tasks are so easy that humans are already nearly perfect at them. Other tasks are more difficult, and it is clear that humans are far from the optimal skill level. There is also a third kind of task, so difficult that it is impossible for any intelligent being in this world to perform it well.
  
For examples of these three kinds of skills, we can look at the games of tic-tac-toe, chess, and pinball, respectively. Planning what your next move should be in tic-tac-toe is trivial: almost everyone learns how to always force a draw as a child. In chess, it is much harder to figure out what you should do, but still possible. Humans are clearly not performing optimally: the most capable artificial intelligence has been better at chess than the most capable human since 1997. Planning your next move in pinball, however, is impossible if the ball bounces more than a few times before returning to your flippers, because the motion of the pinball is chaotic.((The pinball example is discussed in depth in [[https://blog.aiimpacts.org/p/you-cant-predict-a-game-of-pinball|You Can’t Predict a Game of Pinball]].)) This is true both for human players and for an arbitrary artificial intelligence. To perform better than a human, an artificial intelligence would have to rely on faster reaction time, or on hitting the ball in a way that avoids the most chaotic parts of the board.
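
A back-of-envelope version of the pinball argument (the numbers here are assumptions for illustration, not measurements from the linked post): each bounce off a convex bumper multiplies the uncertainty in the ball's direction by a roughly constant factor, so even an extremely precise forecast fails within a handful of bounces.

<code python>
# How many bumper bounces until a pinball's path is unpredictable?
import math

amplification = 5.0       # assumed growth in angular uncertainty per bounce
initial_err = 1e-6        # radians: a very precisely measured launch angle
fail_threshold = 1.0      # radians: direction is essentially unknown

bounces = math.log(fail_threshold / initial_err) / math.log(amplification)
print(f"prediction fails after ~{bounces:.0f} bounces")  # ~9 bounces
</code>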
  
=== Barak and Edelman ===

This argument seems to point towards something interesting, but needs some modifications. ‘Time horizon’ should include some notion of how chaotic or complex the system is. More importantly, the skill needed to reach optimal performance does not have to decline for chaotic systems. Figuring out the heuristics and how to use them is a difficult task. Instead, what declines is the marginal benefit of increasing intelligence. If there are any tradeoffs between intelligence and some other skills useful for CEOs (e.g. charisma), then this would explain the Swedish data as well.
  
Their conclusion that humans aided by narrow AI could effectively compete with superintelligent AI seems to me unlikely to be true. There are lots of things which humans are bad at which do not seem to be inherently unpredictable, and intelligence gives some advantage even when there is chaos. Their argument suggests that the difference in skill is smaller than you might expect, but does not show that it is close to zero.
  
=== Things We Cannot Predict Because of Chaos ===

There are many things that humans have difficulty predicting. For some of these things, better predictions would be possible if only we were more intelligent. For others, there are physical reasons why we are incapable of predicting them. If there is something that we cannot predict because of chaos, an arbitrarily intelligent AI would not be able to predict it either.
  
The classic example of chaos in nature is the weather. Predicting the weather more than 10 days out is impossible. It is possible to make some statistical predictions, most often by looking at averages of historical weather data. Despite being chaotic, weather is still partially controllable, by seeding clouds for example.((What it means for a chaotic system to be controllable is discussed in Section 8 of the accompanying report. \\ Heninger & Johnson. //Chaos and Intrinsic Unpredictability.// AI Impacts. [[http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf]].)) In order to control the weather, you have to adjust your inputs daily, continually responding to the growing uncertainties.
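
As a minimal sketch of how control can work without prediction (using the logistic map again as a stand-in, with illustrative parameters), the classic OGY approach recomputes a tiny parameter nudge from each new measurement, pinning a chaotic orbit to an unstable fixed point it would otherwise immediately leave:

<code python>
# Controlling chaos with small, continually updated interventions.
r0 = 3.9
x_star = 1 - 1 / r0                  # unstable fixed point of x -> r x (1-x)

x = 0.3
for _ in range(2000):
    dr = 0.0
    dx = x - x_star
    if abs(dx) < 0.01:               # nudge only when the orbit comes near
        # Linearization: dx_next ~ (2 - r0) dx + x*(1 - x*) dr.
        # Choose dr to cancel dx_next, clipped so the nudge stays small.
        dr = -(2 - r0) * dx / (x_star * (1 - x_star))
        dr = max(-0.05, min(0.05, dr))
    x = (r0 + dr) * x * (1 - x)

print(abs(x - x_star))               # ~0: held at the unstable fixed point
</code>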
  
Many natural disasters are weather events, including hurricanes and droughts, so they are similarly hard to predict. Some other natural disasters are caused by chaotic systems too. Solar storms are caused by turbulence in the sun’s atmosphere. The convection in the mantle driving earthquakes and volcanoes might also be chaotic, although the Lyapunov time seems unlikely to be less than 100,000 years,((One unusually fast geological time scale is geomagnetic reversal, which happens about every 500,000 years. This is caused by turbulence in the outer core, which is less viscous than the mantle. The Lyapunov time for convection in the mantle seems likely to be tens of millions of years.)) so chaos theory does not restrict our predictions here on human-relative time scales. Volcanic eruptions typically do have precursors, and so can be predicted. Earthquakes are harder to predict, both because it is hard to measure what is happening inside a fault and because the slow dynamics of the mantle interact with a much faster time scale: how long it takes for rocks to break.
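
The role the Lyapunov time plays in these claims can be made explicit with a worked formula: an initial uncertainty grows like $e^{t/T}$ for Lyapunov time $T$, so the prediction horizon is roughly $T \ln(\text{tolerance}/\text{initial error})$, and better measurement only buys logarithmic gains. The numbers below are illustrative assumptions:

<code python>
# Prediction horizon given a Lyapunov time T: errors grow like exp(t / T).
import math

def horizon(T, initial_err, tolerance):
    return T * math.log(tolerance / initial_err)

# With an assumed weather Lyapunov time of ~2 days:
print(horizon(2.0, 1e-3, 1.0))   # ~14 days
print(horizon(2.0, 1e-9, 1.0))   # a million-fold better data: only ~41 days
</code>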

Simple food chains can exhibit some interesting dynamics in the size of populations of various species.((The classic example involves a 220 year dataset of the number of lynx and snowshoe hare pelts caught by the Hudson's Bay Company. \\ May. //Stability and Complexity in Model Ecosystems.// (Princeton University Press, 1974) )) More complicated food webs involving many species likely can be chaotic, although it is hard to distinguish this from changes in the population as a result of chaos in the environment.
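
As a hedged illustration of how simple population models can behave this way (this is the textbook Ricker map, chosen here for illustration, not the model behind the cited dataset):

<code python>
# The Ricker map, a standard single-species population model:
# x_{n+1} = x_n * exp(r * (1 - x_n)). Raising the growth rate r moves it
# from a steady state, to cycles, to chaos.
import math

def ricker_orbit(r, x0=0.5, skip=500, keep=6):
    x = x0
    for _ in range(skip):            # discard the transient
        x = x * math.exp(r * (1 - x))
    orbit = []
    for _ in range(keep):
        x = x * math.exp(r * (1 - x))
        orbit.append(round(x, 3))
    return orbit

for r in (1.5, 2.3, 3.0):            # steady state, 2-cycle, chaos
    print(r, ricker_orbit(r))
</code>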
  
Markets also involve many actors with complicated interactions, so it seems likely that there is chaos involved to some extent. Since people have incentives to look for arbitrary patterns and to respond to them, it is probably better to model markets as anti-inductive.((The difference between chaotic and anti-inductive systems is explained in Section 3 of the accompanying report. \\ Heninger & Johnson. //Chaos and Intrinsic Unpredictability.// AI Impacts. [[http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf]].))
  
Perhaps the most interesting potentially chaotic thing is the human brain. It is discussed in the next section.

For the things for which a brain is chaotic, it is impossible to predict what in particular that brain will do. A simulation of all of the activations of all of the neurons in the brain, or a copy of the brain made as accurately as is physically possible, will not continue to accurately model the behavior of that brain for more than a second into the future.
  
Even when predicting what in particular a brain will do is impossible, it might still be possible to make statistical predictions. Knowing the statistics would allow you to construct a probability distribution over possible future behaviors of the brain. Human-like behavior could be sampled from the distribution. It is not obvious whether this could actually be done, both because the distribution could be spread over an extremely large space and because the distribution itself could also change chaotically and so be unpredictable.((This is explained in more detail in Section 7 of the accompanying report. \\ Heninger & Johnson. //Chaos and Intrinsic Unpredictability.// AI Impacts. [[http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf]].)) Figuring out whether the motion of the distribution is chaotic is much harder, so this page will not make strong claims about it.
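
In miniature, this kind of statistical prediction is what ensemble forecasting does (a sketch with a toy chaotic map, not a model of a brain): evolve many perturbed copies of the system and report the distribution rather than a single trajectory.

<code python>
# Predict the distribution, not the trajectory: evolve an ensemble of
# slightly perturbed copies of a chaotic map far past its Lyapunov time.
import random
random.seed(0)

r = 3.9
ensemble = [0.3 + random.uniform(-1e-6, 1e-6) for _ in range(10_000)]
for _ in range(100):
    ensemble = [r * x * (1 - x) for x in ensemble]

mean = sum(ensemble) / len(ensemble)
spread = max(ensemble) - min(ensemble)
print(f"mean {mean:.3f}, spread {spread:.3f}")
# The spread covers most of the attractor (no point forecast survives),
# but the ensemble statistics are still a usable prediction.
</code>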
  
This argument might feel like it proves too much. Some aspects of human behavior are clearly predictable some of the time. There are several ways this argument should be tempered to make it consistent with this common experience: (1) For some things, the relevant parts of the human brain are not behaving chaotically. (2) Some of the chaos in the brain might have predictable statistics. Behavior which depends on statistics which are stationary and not multistable can be predictable. (3) Your own brain has a similar causal structure / coarse-graining to that of the person you are trying to predict. Using empathetic inference to model their behavior is more likely to result in something similar to their behavior than a model built with a very different causal structure / coarse-graining. Even with these caveats, it still seems likely that there are some aspects of human behavior which are inherently unpredictable.
  
The existence of chaos in many parts of the brain and in many species of animals seems to me to suggest that it is essential to some of the things a brain can do. If the chaos were not helpful, it probably would have been selected against in many more circumstances than those in which we observe it.
  
There are many arguments which would be affected by learning that brains are inherently unpredictable. If some of the things brains do require chaos, that would affect even more arguments. We mention a few of these here.
  
=== Biological Anchors to Bound the Difficulty of AGI ===

Estimates wildly disagree as to how much compute is needed and how good a bound this would be. Open Philanthropy has put together a summary of many of these estimates, measured in FLOP/s.((Carlsmith. //How Much Computational Power Does It Take to Match the Human Brain?// Open Philanthropy. (2020) [[https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/]].)) The first attempt we are aware of at an estimate was by von Neumann in 1958, of $10^{11}$ FLOP/s. If you assume that the brain can be modeled at the scale of neurons, then the amount of compute needed would equal the number of synapses times the neuron firing rate. Estimates for this range from $10^{12}$-$10^{17}$ FLOP/s. Accounting for some nonlinearity within individual neurons raises the estimates to $10^{16}$-$10^{19}$ FLOP/s. Zooming in further to the scale of proteins, microtubules, and dendrites results in estimates of about $10^{21}$ FLOP/s. Modeling the stochastic motion of molecules requires $10^{43}$ FLOP/s. These estimates have all involved modeling a particular mechanism for how the brain works. There are also other estimates. If you look at how much energy the brain uses and use Landauer’s Principle to estimate the amount of compute being done, you get $10^{22}$-$10^{23}$ FLOP/s. Functional methods look at how much compute is needed to do what the retina does (for example) and scale that up to the entire brain. This estimate is $10^{12}$-$10^{15}$ FLOP/s. AI Impacts has previously published an analysis which assumed that the brain is communication limited and used traversed edges per second to estimate $10^{16}$-$10^{18}$ FLOP/s. The estimates mentioned range from $10^{11}$-$10^{43}$ FLOP/s. An informal poll of neuroscientists conducted by Sandberg & Bostrom found that most experts think that the answer is likely between $10^{18}$-$10^{25}$ FLOP/s.((Sandberg & Bostrom. //Whole Brain Emulation: A Roadmap.// Future of Humanity Institute. (2008) [[https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf]].))
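
The structure of the simplest of these estimates is easy to reproduce. A sketch of the "synapses times firing rate" calculation, with parameter ranges that are illustrative assumptions chosen to span values found in the literature:

<code python>
# Synapse-count x firing-rate estimate of brain compute (illustrative ranges).
synapse_count = (1e13, 1e15)     # assumed range for synapses in a human brain
firing_rate_hz = (0.1, 100.0)    # assumed range for average firing rates

low = synapse_count[0] * firing_rate_hz[0]
high = synapse_count[1] * firing_rate_hz[1]
print(f"{low:.0e} - {high:.0e} FLOP/s")   # 1e+12 - 1e+17 FLOP/s
</code>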
  
We have not seen any estimates for the compute needed to model the brain quantum mechanically, but it would be many orders of magnitude higher than even the stochastic molecular model, as ordinary differential equations for $N$ variables get replaced by partial differential equations over $N$ variables. Even then, the result of the calculation would be a distribution over possible states, not a prediction of what in particular will happen. Practically all neuroscientists believe that nothing in the brain requires quantum mechanics, i.e. that there exists a classical coarse-grained model which fully captures the dynamics of the brain which are relevant for human behavior. It seems plausible to me that no such classical model exists.((This is discussed in detail in [[uncategorized:ai_safety_arguments_affected_by_chaos:chaos_in_humans|Chaos in Humans]].)) It is possible for the brain to amplify uncertainty at any scale to the next higher scale. There are a few examples where it seems as though quantum mechanical effects are important to macroscopic biology, including in photosynthesis and bird navigation.
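
To see why the jump to a quantum-mechanical model is so large: a classical state of $N$ variables costs $N$ numbers to store, while a discretized wavefunction over $N$ variables costs $g^N$ amplitudes for a grid of $g$ points per variable. A toy count (the grid size is an assumption for illustration):

<code python>
# Classical vs quantum state sizes: N numbers vs g**N amplitudes.
g = 10                          # a very coarse 10-point grid per variable
for n in (3, 30, 300):
    print(f"N={n:3d}: classical {n} numbers, quantum {g}^{n} = 1e{n} amplitudes")
</code>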
  
One of the biggest challenges for whole brain emulation is figuring out how much resolution is needed:

We previously described how designed objects tend not to be chaotic. In order to make a design, it helps if the resulting behavior is predictable.
  
This suggests an example of instrumental convergence. When any intelligent being designs or plans for something, it has a bias towards reducing the amount of chaos involved. Less chaos means that the world is more predictable, which makes designs and plans easier to make.
  
This seems related to James C. Scott’s observation that planned forests, farms, cities, revolutions, and societies tend to be less complex than similar things that arise unplanned through lots of biological or human interactions.((Scott. //Seeing Like A State: How Certain Schemes to Improve the Human Condition Have Failed.// (1998) [[https://theanarchistlibrary.org/library/james-c-scott-seeing-like-a-state]].)) This seems like a related form of instrumental convergence.