
AI Safety Arguments Affected by Chaos

Created 31 March, 2023. Last updated 31 March, 2023.

This page is under review and may be updated soon.

Chaos theory allows us to show that some predictions cannot be made reliably, even by an arbitrarily intelligent agent. Some things about human brains seem to be in that category, which affects how advanced AI might interact with humans.

Details

Background

You occasionally hear stories of the incredible power of AGI.1) You put it in a box and talk to it, but then it says something that causes you to use a slightly different tone of voice when talking to the next person you see, which sets off a sequence of events no mere human could understand, until nanofactories are built with the power to kill us all.

Crafting this sort of plan requires a tremendous amount of predictive ability. Chaos theory makes this sort of argument ridiculous: no intelligence in the world could make the sorts of predictions needed to successfully implement such a plan. At various points in the chain, the plan depends on interacting with something whose future chaos makes inherently unpredictable.

This particular argument is clearly affected by understanding chaos theory, but it is not a crux for many people. This page goes through a list of arguments which are more important to AI Safety, and which might be affected by chaos theory. Upon further reflection, some of the arguments might end up being less important, or less affected by chaos, or affected by chaos in a different way than what is said here. This page is meant to be a springboard for discussion, not a list of conclusions.

Headroom Above Human Intelligence

Future AI systems might have significantly more cognitive capabilities than humans currently do. This might allow them to have close to the best possible skill level for arbitrary intellectual tasks. The difference between human intellectual abilities and the best possible intellectual abilities is the ‘headroom’ above human intelligence.

A major reason why we might care about how much headroom there is above human intelligence is to attempt to understand takeover scenarios. If a superintelligent AI were to try to wrest control of the future away from humanity, how much chance would we have of preventing it? If humans were close to the ceiling on lots of skills at this point, perhaps aided by narrow AI, then we might not have that much of a disadvantage. If humans were far from the ceiling on many important skills, then we would expect to be at a serious disadvantage.2) An argument to this effect has been made by Barak and Edelman, and is discussed below.

For many tasks, having a high skill level requires being able to make predictions about what will happen in the future, contingent on the choices you make. Chaos theory provides a way to prove that making reliable predictions about certain things is impossible for an arbitrary intelligence, given a small amount of initial uncertainty. For these predictions, the skill ceiling is low. It is often still possible to predict something, perhaps the statistics of the motion or perhaps something else even less related to the original question, but not to predict what exactly will happen in the future.3) Whenever predicting chaotic motion is important, we should not expect that AI will be able to perform arbitrarily well.
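To make the point concrete, here is a minimal sketch (an illustration we add here, not an example from the accompanying report) using the logistic map, one of the simplest systems known to be chaotic. Two trajectories whose starting points differ by less than any realistic measurement precision become completely unrelated within a few dozen steps:

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x),
# which is chaotic at r = 4. Two starting points that differ by 1e-12 diverge
# until knowing one trajectory tells you nothing about the other.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-12          # initial difference far below measurement precision
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

# The separation grows roughly like 2**step (the Lyapunov exponent of this map
# is ln 2), so after about 40 steps the two trajectories are effectively
# unrelated, no matter how much computing power is spent following them.
```

Extra intelligence does not change this picture; the only way to predict further ahead is to measure the initial condition more precisely, and the precision required grows exponentially with the prediction horizon.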

Along with proving the existence of some skill ceilings, chaos theory might also help us better understand the skill landscape: how much intelligence is required to achieve a particular skill level? Knowing the steepness of this slope (the marginal improvement in skill from additional intelligence) would inform expectations about takeoff speed, and about what we might expect from interactions with AIs which are slightly more intelligent than humans.

How close are humans to the optimal skill level?

The answer obviously depends on what skill we are looking at. Some tasks are so easy that humans are already nearly perfect at them. Other tasks are more difficult, and it is clear that humans are far from the optimal skill level. There is also a third kind of task which is so difficult that it is impossible for any intelligent being in this world to perform it well.

For examples of these three kinds of skills, we can look at the games of tic-tac-toe, chess, and pinball, respectively. Planning your next move in tic-tac-toe is trivial: almost everyone learns how to always force a draw as a child. In chess, it is much harder to figure out what you should do, but still possible. Humans are clearly not performing optimally: the most capable artificial intelligence has been better at chess than the most capable human since 1997. Planning your next move in pinball, however, is impossible if the ball bounces more than a few times before returning to your flippers, because the motion of the pinball is chaotic.4) This is true for human players and for an arbitrarily capable artificial intelligence alike. To perform better than a human, an artificial intelligence would have to rely on faster reaction time, or on hitting the ball in a way that avoids the most chaotic parts of the board.

Barak and Edelman

If it is often the case that the best possible skill level is low, then even an extremely intelligent AI might not be significantly more capable than humans at many important things. One version of this argument was made by Boaz Barak and Ben Edelman.5) They claim that the amount of skill needed to reach optimal performance is higher for short-term goals, where detailed prediction is still possible, than for long-term goals, where the best strategies rely on heuristics instead. As evidence, they provide data from Sweden indicating that doctors, lawyers/judges, and economists/political scientists have higher cognitive abilities than mayors, parliamentarians, and CEOs. They interpret this to mean that intelligence matters more for jobs involving shorter time horizons than for jobs involving longer ones. If this model is correct, then an advanced AI running a company may not have much of an advantage over a human CEO who uses AI advisors for the shorter-horizon tasks.

This argument seems to point towards something interesting, but needs some modifications. ‘Time horizon’ should include some notion of how chaotic or complex the system is. More importantly, the skill needed to reach optimal performance does not have to decline for chaotic systems. Figuring out the heuristics and how to use them is a difficult task. Instead, what declines is the marginal benefit of increasing intelligence. If there are any tradeoffs between intelligence and some other skills useful for CEOs (e.g. charisma), then this would explain the Swedish data as well.

Their conclusion that humans aided by narrow AI could effectively compete with superintelligent AI seems unlikely to me. There are lots of things which humans are bad at but which do not seem to be inherently unpredictable, and intelligence gives some advantage even when there is chaos. Their argument suggests that the difference in skill is smaller than you might expect, but does not show that it is close to zero.

Things We Cannot Predict Because of Chaos

There are many things that humans have difficulty predicting. For some of these, better predictions would be possible if only we were more intelligent. For others, there are physical reasons why we are incapable of predicting them. If there is something that we cannot predict because of chaos, an arbitrarily intelligent AI would not be able to predict it either.

The classic example of chaos in nature is the weather. Predicting the specific weather more than about ten days out is impossible. It is possible to make some statistical predictions, most often by looking at averages of historical weather data. Despite being chaotic, weather is still partially controllable, by seeding clouds for example.6) In order to control the weather, you have to adjust your inputs daily, continually responding to the growing uncertainty.
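The contrast between detailed prediction and statistical prediction can be made concrete with a toy sketch (added here for illustration; it is not from the cited report), using the Lorenz ’63 equations, a standard simplified model of atmospheric convection. An ensemble of nearly identical initial conditions spreads over the whole attractor, so point forecasts fail, while ensemble statistics remain stable:

```python
import numpy as np

# Evolve an ensemble of Lorenz '63 trajectories whose initial conditions differ
# by ~1e-6. Individual trajectories decorrelate after a few Lyapunov times, but
# the ensemble statistics are insensitive to the tiny initial differences.

def lorenz_step(states, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = states[:, 0], states[:, 1], states[:, 2]
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return states + dt * np.stack([dx, dy, dz], axis=1)  # simple Euler step, fine for illustration

rng = np.random.default_rng(0)
ensemble = np.tile([1.0, 1.0, 20.0], (200, 1)) + rng.normal(0, 1e-6, (200, 3))

z_means = []
for _ in range(40000):                       # about 200 time units
    ensemble = lorenz_step(ensemble)
    z_means.append(ensemble[:, 2].mean())

print(f"spread of z across ensemble members at the end: {np.ptp(ensemble[:, 2]):.1f}")
print(f"long-run average of z:                          {np.mean(z_means[4000:]):.1f}")
```

The first number is large because the ensemble members have spread over the whole attractor, so a point forecast of the state is worthless, while the second is insensitive to the initial perturbations, just as climate statistics remain predictable when specific weather does not.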

Many natural disasters are weather events, including hurricanes and droughts, so they are similarly hard to predict. Some other natural disasters are caused by chaotic systems too. Solar storms are caused by turbulence in the sun’s atmosphere. The convection in the mantle driving earthquakes and volcanoes might also be chaotic, although the Lyapunov time seems unlikely to be less than 100,000 years,7) so chaos theory does not restrict our predictions here on human-relevant time scales. Volcanic eruptions typically do have precursors, and so can be predicted. Earthquakes are harder to predict, both because it is hard to measure what is happening inside a fault and because the slow dynamics of the mantle interact with a much faster time scale: how long it takes for rocks to break.

Many of the best examples of chaos involve fluids, because we understand them well and so we can prove that their motion is chaotic. There are many other things which are high-dimensional8) and which seem unpredictable, although it is hard to prove that they are chaotic.

Traffic modeling is often done using equations from fluid mechanics. Traffic flow rates can be chaotic, although city planners try to keep this from happening and to make the traffic flow smoothly.9)

Simple food chains can exhibit some interesting dynamics in the population sizes of various species.10) More complicated food webs involving many species can likely be chaotic, although it is hard to distinguish this from population changes caused by chaos in the environment.

Markets also involve many actors with complicated interactions, so it seems likely that chaos is involved to some extent. Since people have incentives to look for patterns and to trade on them, which tends to destroy those patterns, it is probably better to model markets as anti-inductive.11)

Perhaps the most interesting potentially chaotic thing is the human brain. It is discussed in the next section.

Whole Brain Emulation

Individual neurons, small networks of neurons, and networks of neurons in sensory organs can all show either chaotic or non-chaotic behavior in different environments or in response to different stimuli. EEG measurements of the entire brain behave unpredictably, although it is hard to distinguish chaos from noise except in a few unusual circumstances (like an epileptic seizure). The Lyapunov time for neuron-based chaos is typically less than a second.12)

For the things for which a brain is chaotic, it is impossible to predict what in particular that brain will do. A simulation of all of the activations of all of the neurons in the brain, or a copy of the brain made as accurately as is physically possible, will not continue to accurately model the behavior of that brain for more than a second into the future.

Even when predicting what in particular a brain will do is impossible, it might still be possible to make statistical predictions. Knowing the statistics would allow you to construct a probability distribution over possible future behaviors of the brain. Human-like behavior could be sampled from the distribution. It is not obvious if this could actually be done, both because the distribution could be spread over an extremely large space and because the distribution itself could also change chaotically and so be unpredictable.13) Figuring out whether the motion of the distribution is chaotic is much harder, so this page will not make strong claims about it.

This argument might feel like it proves too much. Some aspects of human behavior are clearly predictable some of the time. There are several ways this argument should be tempered to make it consistent with this common experience: (1) For some things, the relevant parts of the human brain are not behaving chaotically. (2) Some of the chaos in the brain might have predictable statistics. Behavior which depends on statistics which are stationary and not multistable can be predictable. (3) Your own brain has a similar causal structure / coarse-graining to that of the person you are trying to predict. Using empathetic inference to model their behavior is more likely to result in something similar to their behavior than a model built with a very different causal structure / coarse-graining. Even with these caveats, it still seems likely that there are some aspects of human behavior which are inherently unpredictable.

The existence of chaos in many parts of the brain and in many species of animals suggests to me that it is essential to some of the things a brain can do. If the chaos were not helpful, it probably would have been selected against in far more circumstances than we observe.

There are many arguments which would be affected by learning that brains are inherently unpredictable. If some of the things brains do require chaos, that would affect even more arguments. We mention a few of these here.

Biological Anchors to Bound the Difficulty of AGI

Whole brain emulation has been used as a bound on how much compute is needed for AGI. If you were to have a complete model of a human brain, it should be able to do everything that a brain can do.

Estimates wildly disagree as to how much compute is needed and how good of a bound this would be. Open Philanthropy has put together a summary of many of these estimates, measured in FLOP/s.14)

  • The first attempt we are aware of at an estimate was by von Neumann in 1958: $10^{11}$ FLOP/s.
  • If you assume that the brain can be modeled at the scale of neurons, then the amount of compute needed would equal the number of synapses times the neuron firing rate. Estimates for this range from $10^{12}$-$10^{17}$ FLOP/s.
  • Accounting for some nonlinearity within individual neurons raises the estimates to $10^{16}$-$10^{19}$ FLOP/s.
  • Zooming in further, to the scale of proteins, microtubules, and dendrites, results in estimates of about $10^{21}$ FLOP/s.
  • Modeling the stochastic motion of molecules requires $10^{43}$ FLOP/s.
  • The estimates above all involve modeling a particular mechanism for how the brain works. There are also other approaches: if you look at how much energy the brain uses and apply Landauer’s Principle to estimate the amount of computation being done, you get $10^{22}$-$10^{23}$ FLOP/s.
  • Functional methods look at how much compute is needed to do what the retina does (for example) and scale that up to the entire brain: $10^{12}$-$10^{15}$ FLOP/s.
  • AI Impacts has previously published an analysis which assumed that the brain is communication limited and used traversed edges per second to estimate $10^{16}$-$10^{18}$ FLOP/s.

The estimates mentioned range from $10^{11}$ to $10^{43}$ FLOP/s. An informal poll of neuroscientists conducted by Sandberg & Bostrom found that most experts think that the answer is likely between $10^{18}$ and $10^{25}$ FLOP/s.15)
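As an illustration of how the neuron-scale estimates above are generated, here is the arithmetic for the synapse-count method (the round numbers below are common order-of-magnitude assumptions of ours, not figures taken from the Carlsmith report):

```python
# Neuron-scale estimate: compute ~ (number of synapses) x (average firing rate)
# x (FLOPs per synaptic event). The inputs are illustrative order-of-magnitude
# assumptions, not numbers from the cited report.

synapses = 1e14          # the human brain has very roughly 1e14-1e15 synapses
firing_rate_hz = 1.0     # average firing rates are often taken to be ~0.1-10 Hz
flops_per_event = 1.0    # e.g. one multiply-accumulate per spike crossing a synapse

flops_per_second = synapses * firing_rate_hz * flops_per_event
print(f"{flops_per_second:.0e} FLOP/s")   # 1e+14, inside the 1e12-1e17 range quoted above
```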

We have not seen any estimates for the compute needed to model the brain quantum mechanically, but it would be many orders of magnitude higher than even the stochastic molecular model, as ordinary differential equations for $N$ variables get replaced by partial differential equations over $N$ variables. Even then, the result of the calculation would be a distribution over possible states, not a prediction of what in particular will happen. Practically all neuroscientists believe that nothing in the brain requires quantum mechanics, i.e. that there exists a classical coarse-grained model which fully captures the dynamics of the brain which are relevant for human behavior. It seems plausible to me that no such classical model exists.16) It is possible for the brain to amplify uncertainty at any scale to the next higher scale. There are a few examples where it seems as though quantum mechanical effects are important to macroscopic biology, including in photosynthesis and bird navigation.
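A rough counting argument (an illustration, not an estimate taken from the sources above) shows why the quantum mechanical case is so much worse. A classical state of $N$ variables is specified by $N$ numbers, while a quantum state must assign an amplitude to every joint configuration of those variables; if each variable is discretized into $k$ possible values, the storage required scales as

$$\underbrace{N}_{\text{classical: one value per variable}} \qquad \text{versus} \qquad \underbrace{k^{N}}_{\text{quantum: one amplitude per joint configuration}},$$

so the cost grows exponentially, rather than linearly, in the number of variables.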

One of the biggest challenges for whole brain emulation is figuring out how much resolution is needed:

Scale Separation is a key challenge for the WBE project. Does there exist a scale in the brain where finer interactions average out? If no such scale exists then the feasibility of WBE is much in doubt: no method with a finite-size cutoff would achieve emulation.17)

This is the question addressed in Chaos in Humans. In order for finer interactions to average out, the scale at which you are taking the average must not be chaotic. Both chaotic and non-chaotic behavior seem to be possible at all scales of the brain. At least for some subsystems and in some circumstances, there is no mesoscale at which we can get predictable averages.

Learning that parts of a brain are chaotic should increase our estimate of the difficulty of whole brain emulation. If the behavior of an individual neuron is chaotic in ways relevant to understanding human behavior, then we cannot coarse-grain that part of the brain at the level of neurons. If the behavior of a type of protein amplifies smaller-scale uncertainty, then we cannot coarse-grain that part of the cell at the level of proteins. Chaotic behavior at microscopic scales, and at all intermediate mesoscales, amplifies uncertainty all the way from the atomic scale to the macroscopic, which would imply that emulating the future behavior of a particular brain is impossible.

Heuristics vs Models

Currently, the best way we have to predict someone else’s behavior is “empathetic inference: by configuring your own brain in a similar state to the brain you want to predict.”18) This analogue computation leads to a bunch of heuristics about how other people tend to behave. Some people’s heuristics are better than others, and there might exist even better heuristics that could be discovered. These heuristics are a form of statistical prediction about the behavior of humans, which are only partially reliable, but still might be the best predictions possible.

You might think that having a simulation of all of the neurons in a brain would allow you to make better predictions about someone else’s behavior than using the best available heuristics. This is not clear to me. Consider an analogy with weather prediction: for times longer than the Lyapunov time, using a more detailed model of the atmosphere results in worse predictions than looking at the statistics of historical weather data. The heuristics from empathetic inference might be similar: better than a more detailed model on time scales longer than the Lyapunov time, which for neural chaos is less than one second.

Hacking Humans

Some people seem to think that a superintelligence could become arbitrarily good at persuading us.19) But if the best models of humans are heuristics, then it might be harder to manipulate us.

One way a superintelligence might try to manipulate someone is by emulating that person and seeing how they respond to a variety of prompts. The superintelligence could then choose the action that it knows has the effect it most prefers. This kind of persuasion is impossible if whole brain emulation is impossible.

Adversarial attacks on humans also seem less likely if the precise response to a particular image or text is inherently unpredictable.

Chaos theory points to humans being less hackable than we might otherwise have thought. An obvious empirical check (which we have not done yet) is to look at the most capable brainwashing that humans can perform on each other. A superintelligence might be better than us at this, but it seems plausible that the ceiling here is fairly low, and that the way to reach the ceiling is not by emulating the human it is interacting with.

Identity Arguments

Some of the arguments that, it has been suggested, a superintelligence might use to manipulate humans seem particularly suspect.

A superintelligence might threaten to trap you in a simulation.20) It might simulate a large number of copies of you in exactly this situation and threaten to torture them if you do not obey it. You might not know whether you are one of the copies or the original, so you obey to avoid being tortured. This argument depends on the superintelligence being able to simulate copies of you good enough that you cannot tell which one is the original.

Acausal trade also seems like it would be harder.21) In order to conduct an acausal trade, you have to predict the behavior of the other agent well enough to know that they will follow through on the agreement. This restricts the possible acausal trades to only those for which the other agent’s behavior is not determined by chaos. Acausal trades or norms which do not require detailed models of other agents can still be pursued.22)

There are also some related arguments that seem relevant for futurism, but not directly for AI Safety. The common thread centers on personal identity. If a human is not reinitializable or copyable, that affects how we think about selfhood. Mind uploading would be problematic: the uploaded mind would not continue to behave the same as the mind would have behaved if it had remained in the original brain. Teleportation by measuring your complete state and reconstructing it someplace else would have a similar problem: the new body would not continue to behave in the same way as the old body would have in similar circumstances. Cryonics depends on there being a way to reinitialize the mind to a state that it was previously in, even though its dynamics stop in between. We might decide that we do not care about the philosophical notion of selfhood and decide to do these things anyway, but it seems worth noting the different behavior after reinitializing or copying.

Speeding Up Brains

Predicting what brains can do might also be useful for speeding up brain activity. If you can continually monitor the state of the neurons, predict what their next action will be, and feed that action back into them faster than they would have done it themselves, the result would be an increase in the speed of human thinking.23) This seems to only involve short-time predictions, so it is not ruled out by chaos theory, although there are other reasons why it might not work.

Things Computers Cannot Do?

If there are some things that a brain does which require chaotic amplification of quantum effects, then a classical computer might be unable to replicate them. This would suggest a major barrier to artificial general intelligence, though not necessarily to transformative artificial intelligence, because something could be transformative without being completely general.

The brain seems to be capable of both chaotic and non-chaotic behavior. This suggests that both are important. If chaos were not important to anything that a brain does, it would be surprising that we find it at many scales and in many places within brains.

Empirically, whole brain emulation of even simple brains has not been achieved. Caenorhabditis elegans has 302 neurons and all of the connections between them have been known since 1986. We have not yet been able to create a simulation of a worm that behaves like C. elegans.24) This seems to partially be because of a lack of funding,25) although it also suggests that whole brain emulation is harder than we previously thought.

If the possibility of whole brain emulation was a major reason you believed that AGI is possible, learning that whole brain emulation is harder than you thought, or plausibly impossible, should reduce your confidence that AGI is possible.

Quantum Mechanics and Consciousness

Some people have suggested that consciousness is somehow related to quantum mechanics.26) A detailed version of this is Penrose’s uncomputable quantum gravity, which contains multiple novel ideas and seems likely to be at least partially wrong. Scott Aaronson offers a much vaguer intuition in that direction:

Personally, I dissent a bit from the consensus of most of my friends and colleagues, in that I do think there’s something strange and mysterious about consciousness — something that we conceivably might understand better in the future, but that we don’t understand today, much as we didn’t understand life before Darwin. I even think it’s worth asking, at least, whether quantum mechanics, thermodynamics, mathematical logic, or any of the other deepest things we’ve figured out could shed any light on the mystery. I’m with Roger [Penrose] about all of this: about the questions, that is, if not about his answers.27)

In order for consciousness to be related to quantum mechanics, there has to be a way to amplify quantum mechanical effects to the macroscopic scale. If there is no amplification of quantum effects, then consciousness (which occurs on macroscopic time and length scales) cannot be a quantum phenomenon. Chaos in the brain is necessary, but not sufficient, for these arguments.

Miscellaneous

Natural Abstractions

John Wentworth’s idea of natural abstractions also invokes chaos theory.28) A natural abstraction is defined as something which continues to be relevant “far away”. For example, the natural abstractions of molecules in a gas are thermodynamic quantities like temperature and pressure. These are the properties of the microscopic motion of the molecules which are still observable at the macroscopic scale.

Chaos theory helps us to identify abstractions. If there are some averages or other properties of the chaotic motion which are predictable, these are natural abstractions. Other potential measurements are rendered unpredictable by the chaos, so they are not natural abstractions.

Semiotic Physics

Language models can themselves be thought of as dynamical systems. The dynamics describe the probabilities of transitioning from one token to another. The result is known as semiotic physics, “the study of the fundamental forces and laws that govern the behavior of signs and symbols.”29) This analogy (hopefully) allows us to use results of dynamical systems theory to better understand the behavior of language models.
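A minimal sketch of the analogy (the three-token transition matrix below is made up for illustration and is not from the cited post): a language model restricted to a one-token context is literally a Markov chain, and iterating it is a discrete dynamical system on the space of probability distributions over tokens.

```python
import numpy as np

# Toy "semiotic" dynamics: P[i, j] is the probability that token j follows
# token i. Iterating dist -> dist @ P evolves a distribution over tokens; for
# this chain the dynamics settle onto a fixed point (the stationary
# distribution), one of the simplest attractors a dynamical system can have.

P = np.array([
    [0.1, 0.6, 0.3],
    [0.4, 0.1, 0.5],
    [0.5, 0.4, 0.1],
])

dist = np.array([1.0, 0.0, 0.0])   # start certain about the current token
for _ in range(25):
    dist = dist @ P
print("long-run token distribution:", np.round(dist, 3))
```

Real language models condition on the whole context rather than just the last token, so their state space is vastly larger, but the same dynamical-systems vocabulary of trajectories, attractors, and sensitivity to perturbations of the prompt still applies.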

Instrumental Convergence

We previously described how designed objects tend not to be chaotic. In order to make a design, it helps if the behavior of the designed object is predictable.

This suggests an example of instrumental convergence. When any intelligent being designs or plans for something, it has a bias towards reducing the amount of chaos involved. Less chaos means that the world is more predictable, which makes designs and plans easier to make.

This seems related to James C. Scott’s observation that planned forests, farms, cities, revolutions, and societies tend to be less complex than similar things that arise unplanned through many biological or human interactions.30)

Technology Currently Enabled by Chaos

There are a few things that humans design which make use of chaos. We should not expect a completely engineered world to be completely devoid of chaos.

One category of this sort of technology is physical random number generators. Dice and Galton boards take advantage of sensitive dependence on initial conditions to select numbers that could not have been known beforehand.31)
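A software analogue of this (a sketch only; a deterministic program is not a true physical random number generator) shows why sensitive dependence is useful here: each output bit depends on ever finer digits of the initial condition, quickly exceeding any realistic measurement precision.

```python
# Extract one bit per iterate of the logistic map in its chaotic regime.
# Dice and Galton boards do the analogous thing mechanically: the outcome
# depends on details of the initial condition too fine to have been known.

def chaotic_bits(x0, n, r=3.99):
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

print(chaotic_bits(0.123456, 32))
print(chaotic_bits(0.123457, 32))   # changing the sixth decimal place soon gives unrelated bits
```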

Certain games, like pinball, are chaotic. This changes their character, making them more about reacting to new information than about planning your future moves.

Some laser cavities are designed so that the light bounces around chaotically and ergodically, which spreads the energy out evenly and prevents it from being concentrated at any particular point; others are designed with non-ergodic chaos, which broadens the resonance and produces anisotropic emission.32)

Chaos is most unavoidable when engineering with fluids. Solid objects can be forced to run on a track, for example, to keep them from moving chaotically. In contrast, a fluid’s motion cannot be fully constrained. Most of the time, engineers try to minimize the turbulence to keep everything flowing smoothly, but there are some counterexamples. Vortex shedding off the tips of the wings is necessary for an airplane’s lift, although we try to have most of the turbulence behind the plane to keep the wings from fluttering.33) The flow rate through a pipe is not monotonic as the pressure difference between the two ends increases, which seems like it could have some applications. Turbulent flows with suspended sediment deposit that sediment more rapidly than smooth flows,34) so this can be used as an initial stage of water purification, along with coagulation and flocculation.

The most common way turbulence is useful is for mixing fluids. This is true both for stirring your lemonade (reversing the direction of the stirring creates turbulence that mixes in the sugar more rapidly) and for industrial-scale processes.35)

Conclusion

The AI Safety arguments most affected by chaos seem to involve the headroom above human intelligence and whole brain emulation:

  • Headroom Above Human Intelligence. Chaos theory allows us to prove that certain predictions are impossible to make reliably, regardless of how much intelligence is used. There is a low ceiling on these predictions, and humans can know how close we are to it. This argument might not be as powerful as it initially appears because (1) many of the things we care about are dynamical systems involving many variables, for which it is hard to prove whether and when they are chaotic, (2) even if something is naturally chaotic, in some cases it is possible to engineer the system to remove the chaos or make it irrelevant to the things we care about, and (3) under some circumstances it is possible to control which pattern of chaotic motion will occur. Checking whether something is truly unpredictable requires carefully checking how chaotic it is and whether that chaos can be engineered around or controlled. Some systems will prove to be truly unpredictable, although these are a smaller set than the set of all chaotic systems.
  • Whole Brain Emulation. The human brain is an important thing which could be, in some respects, inherently unpredictable. At all scales, the brain is capable of either chaotic or non-chaotic behavior. This could allow atomic-scale uncertainty from within the brain to be amplified until it impacts that human’s behavior. Detailed emulation of human brains might end up being no more useful for predicting human behavior than heuristics are. Hacking humans using adversarial attacks might not be possible. Various questions about personal identity might end up harder or easier to resolve. This has all been based on the observation that human brains are chaotic; we might also ask why they are chaotic. If the answer is that there is strong selection pressure towards chaos, because chaotic amplification of atomic-scale uncertainty is necessary for some useful skill that humans have, then we might expect this skill to be much harder or impossible to achieve on a computer.

Other arguments, such as Natural Abstractions, seem to have already incorporated some ideas from chaos theory; for others, such as Instrumental Convergence, it is less clear how they might lead to decision-relevant updates. Further discussion might cause us to revise how important chaos is to some of these arguments, or it might reveal new ways that chaos theory is relevant to AI Safety.

Primary authors: Jeffrey Heninger and Aysja Johnson.

Notes

1)
I have not found an example of this in writing, so I’m relying on oral tradition in a community that doesn’t value oral tradition.
2)
Eliezer Yudkowsky. My Childhood Role Model. LessWrong. (2008) https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model.
3)
What sorts of predictions can and cannot be made when there is chaos is discussed in Section 7 of the accompanying report.
Heninger & Johnson. Chaos and Intrinsic Unpredictability. AI Impacts. http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf.
4)
The pinball example is discussed in depth in You Can’t Predict a Game of Pinball.
5)
Barak & Edelman. AI will change the world, but won’t take it over by playing “3-dimensional chess”. Alignment Forum. (2022) https://www.alignmentforum.org/posts/zB3ukZJqt3pQDw9jz/ai-will-change-the-world-but-won-t-take-it-over-by-playing-3.
6)
What it means for a chaotic system to be controllable is discussed in Section 8 of the accompanying report.
Heninger & Johnson. Chaos and Intrinsic Unpredictability. AI Impacts. http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf.
7)
One unusually fast geological time scale is geomagnetic reversal, which happens about every 500,000 years. This is caused by turbulence in the outer core, which is less viscous than the mantle. The Lyapunov time for convection in the mantle seems likely to be tens of millions of years.
8)
High-dimensional dynamical systems require a large number of variables to model their behavior. The variables could be the positions of some particles, but they can also be electrical potentials across the cell membranes of neurons, or something entirely different.
9)
Disbro & Frame. Traffic flow theory and chaotic behavior. Engineering Research and Development Bureau: New York State Department of Transportation: Special Report 91. (1989) https://rosap.ntl.bts.gov/view/dot/15604.
10)
The classic example involves a 220 year dataset of the number of lynx and snowshoe hare pelts caught by the Hudson Bay Company.
May. Stability and Complexity in Model Ecosystems. (Princeton University Press, 1974)
11)
The difference between chaotic and anti-inductive systems is explained in Section 3 of the accompanying report.
Heninger & Johnson. Chaos and Intrinsic Unpredictability. AI Impacts. http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf.
12)
The evidence for this claim is described in Chaos in Humans.
13)
This is explained in more detail in Section 7 of the accompanying report.
Heninger & Johnson. Chaos and Intrinsic Unpredictability. AI Impacts. http://aiimpacts.org/wp-content/uploads/2023/04/Chaos-and-Intrinsic-Unpredictability.pdf.
14)
Carlsmith. How Much Computational Power Does It Take to Match the Human Brain? Open Philanthropy. (2020) https://www.openphilanthropy.org/research/how-much-computational-power-does-it-take-to-match-the-human-brain/.
15)
Sandberg & Bostrom. Whole Brain Emulation: A Roadmap. Future of Humanity Institute. (2008) https://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf.
16)
This is discussed in detail in Chaos in Humans.
17)
Sandberg. Feasibility of whole brain emulation. Theory and Philosophy of Artificial Intelligence. (2013) http://diyhpl.us/~bryan/papers2/neuro/Feasibility%20of%20whole%20brain%20emulation%20-%20Anders%20Sandberg.pdf.
18)
Yudkowsky. The Comedy of Behaviorism. LessWrong. (2008) https://www.lesswrong.com/posts/9fpWoXpNv83BAHJdc/the-comedy-of-behaviorism.
19)
Armstrong. Hacking humans. LessWrong. (2017) https://www.lesswrong.com/posts/ieW77AMhBkBqtfERZ/hacking-humans.
Noosphere. How easy/fast is it for a AGI to hack computers/a human brain? LessWrong. (2022) https://www.lesswrong.com/posts/KRABWFmryhD2HdxqW/how-easy-fast-is-it-for-a-agi-to-hack-computers-a-human.
20)
Armstrong. The AI in a box boxes you. LessWrong. (2010) https://www.lesswrong.com/posts/c5GHf2kMGhA4Tsj4g/the-ai-in-a-box-boxes-you.
21)
XiXiDu. AI-Box Experiment - The Acausal Trade Argument. LessWrong. (2011) https://www.lesswrong.com/posts/DYcXRiJWiAtbXxNA5/ai-box-experiment-the-acausal-trade-argument.
22)
Critch. Acausal Normalcy. LessWrong. (2023) https://www.lesswrong.com/posts/3RSq3bfnzuL3sp46J/acausal-normalcy.
23)
Viteri et al. Research Direction: Be the AGI you want to see in the world. LessWrong. (2023) https://www.lesswrong.com/posts/FnfAnsAH6dva3kCHS/research-direction-be-the-agi-you-want-to-see-in-the-world.
24)
Niconiconi. Whole Brain Emulation: No Progress on C. elegans After 10 Years. LessWrong. (2021) https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elgans-after-10-years.
25)
This seems like a solvable problem. Knowing whether whole brain emulation is possible for C. elegans seems like it would be worth more than a million dollars to some people.
26)
Penrose. The Emperor’s New Mind. Oxford University Press. (1989)
27)
Aaronson. “Can computers become conscious?”: My reply to Roger Penrose. Shtetl-Optimized. (2016) https://scottaaronson.blog/?p=2756.
See also: Aaronson. The Ghost in the Quantum Turing Machine. (2013) https://arxiv.org/pdf/1306.0159.pdf.
28)
Wentworth. Chaos Induces Abstractions. LessWrong. (2021) https://www.lesswrong.com/posts/zcCtQWQZwTzGmmteE/chaos-induces-abstractions.
30)
Scott. Seeing Like A State: How Certain Schemes to Improve the Human Condition Have Failed. (1998) https://theanarchistlibrary.org/library/james-c-scott-seeing-like-a-state.
31)
Galton. Dice for Statistical Experiments. Nature 1070.42. (1890) p. 13-14. https://galton.org/essays/1890-1899/galton-1890-dice.pdf.
32)
Nöckel & Stone. Ray and wave chaos in asymmetric resonant optical cavities. Nature 385.45. (1997) https://arxiv.org/pdf/chao-dyn/9806017.pdf.
33)
Wake Turbulence. What-when-how. Accessed March 3, 2023. http://what-when-how.com/flight/wake-turbulence/.
34)
Partheniades & Mehta. Deposition of Fine Sediments in Turbulent Flows. US EPA. Project #16050 ERS. (1971) https://nepis.epa.gov/Exe/ZyNET.exe/9100GZ6G.txt.
35)
Brodkey. Turbulence in Mixing Operations. Academic Press. (1975)