 +====== Glossary of AI Risk Terminology and common AI terms ======
 +
 +// Published 30 October, 2015; last updated 21 January, 2022 //
 +
 +
 +===== Terms =====
 +
 +
 +==== A ====
 +
 +
 +=== AI timeline ===
 +
 +
 +<HTML>
 +<p>An expectation about how much time will elapse before important AI events, especially the advent of <em><a href="/doku.php?id=clarifying_concepts:human-level_ai">human-level AI</a></em> or a similar milestone. The term can also refer to the actual periods of time (which are not yet known), rather than an expectation about them.</p>
 +</HTML>
 +
 +
 +=== Artificial General Intelligence (also, AGI) ===
 +
 +
 +<HTML>
 +<p>Skill at performing intellectual tasks across at least the range of variety that a human being is capable of, as opposed to skill at certain specific tasks (‘narrow’ AI). Roughly synonymous with the more ambiguous <em>human-level AI</em> for some meanings of the latter.</p>
 +</HTML>
 +
 +
 +=== Artificial Intelligence (also, AI) ===
 +
 +
 +<HTML>
 +<p>Behavior characteristic of human minds exhibited by man-made machines, and also the area of research focused on developing machines with such behavior. Sometimes used informally to refer to <em>human-level AI</em> or another strong form of AI not yet developed.</p>
 +</HTML>
 +
 +
 +=== Associative value accretion ===
 +
 +
 +<HTML>
 +<p>A hypothesized approach to value learning in which the AI acquires values using some machinery for synthesizing appropriate new values as it interacts with its environment, inspired by the way humans appear to acquire values (Bostrom 2014, p189-190)<span class="easy-footnote-margin-adjust" id="easy-footnote-1-358"></span><span class="easy-footnote"><a href="#easy-footnote-bottom-1-358" title="Bostrom, Nick. &lt;em&gt;Superintelligence: Paths, Dangers, Strategies&lt;/em&gt;. 1st edition. Oxford: Oxford University Press, 2014."><sup>1</sup></a></span>.</p>
 +</HTML>
 +
 +
 +=== Anthropic capture ===
 +
 +
 +<HTML>
 +<p>A hypothesized control method in which the AI thinks it might be in a simulation, and so tries to behave in ways that will be rewarded by its simulators (Bostrom 2014, p134).</p>
 +</HTML>
 +
 +
 +=== Anthropic reasoning ===
 +
 +
 +<HTML>
 +<p>Reaching beliefs (posterior probabilities) over states of the world and your location in it, from priors over possible physical worlds (without your location specified) and evidence about your own situation. For an example where this is controversial, see <a href="https://en.wikipedia.org/wiki/Sleeping_Beauty_problem">The Sleeping Beauty Problem</a>. For more on the topic and its relation to AI, see <a href="https://meteuphoric.wordpress.com/anthropic-principles/">here</a>.</p>
 +</HTML>
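
As a toy illustration only: one contested way to carry out such reasoning (the ‘thirder’, observer-moment-weighted approach) can be sketched for the Sleeping Beauty problem as below. The rival ‘halfer’ position rejects this weighting, which is precisely where the controversy lies; nothing here is a settled method.

<code python>
# Toy sketch of one contested style of anthropic reasoning ("thirder"-style):
# weight each possible world by its prior times the number of observer-moments
# (awakenings) consistent with your evidence, then normalise.
worlds = {
    "heads": {"prior": 0.5, "awakenings": 1},  # Sleeping Beauty is woken once
    "tails": {"prior": 0.5, "awakenings": 2},  # Sleeping Beauty is woken twice
}
weights = {w: v["prior"] * v["awakenings"] for w, v in worlds.items()}
total = sum(weights.values())
posterior = {w: weight / total for w, weight in weights.items()}
print(posterior)  # {'heads': 0.333..., 'tails': 0.666...}; halfers instead answer 0.5
</code>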
 +
 +
 +=== Augmentation ===
 +
 +
 +<HTML>
 +<p>An approach to obtaining a superintelligence with desirable motives that consists of beginning with a creature with desirable motives (e.g. a human), then making it smarter, instead of designing good motives from scratch (Bostrom 2014, p142).</p>
 +</HTML>
 +
 +
 +==== B ====
 +
 +
 +=== Backpropagation ===
 +
 +
 +<HTML>
 +<p>A fast method of computing the derivative of cost with respect to different parameters in a network, allowing for training neural nets through gradient descent. See <a href="http://neuralnetworksanddeeplearning.com/chap2.html">Neural Networks and Deep Learning</a><span class="easy-footnote-margin-adjust" id="easy-footnote-2-358"></span><span class="easy-footnote"><a href="#easy-footnote-bottom-2-358" title='Nielsen, Michael A. “Neural Networks and Deep Learning,” 2015. &lt;a href="http://neuralnetworksanddeeplearning.com/"&gt;http://neuralnetworksanddeeplearning.com&lt;/a&gt;.'><sup>2</sup></a></span> for a full explanation.</p>
 +</HTML>
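
As a rough illustration (not from the glossary's sources), here is a minimal sketch of one backpropagation-and-gradient-descent step for a tiny one-hidden-layer network with sigmoid activations and a quadratic cost; all names, shapes, and the learning rate are illustrative assumptions.

<code python>
# Minimal sketch: one backprop + gradient-descent step for a 1-hidden-layer
# network with sigmoid activations and quadratic cost. x, y are 1-D numpy arrays.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, W1, b1, W2, b2, lr=0.1):
    # Forward pass: keep intermediate activations for the backward pass.
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = sigmoid(z2)
    # Backward pass: propagate the cost gradient from the output layer
    # back through the network using the chain rule.
    delta2 = (a2 - y) * a2 * (1 - a2)          # dC/dz2 for quadratic cost
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # dC/dz1
    # Gradient-descent update on every parameter.
    W2 -= lr * np.outer(delta2, a1)
    b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x)
    b1 -= lr * delta1
    return W1, b1, W2, b2
</code>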
 +
 +
 +=== Boxing ===
 +
 +
 +<HTML>
 +<p>A control method that consists of constructing the AI’s environment so as to minimize interaction between the AI and the outside world (Bostrom 2014, p129).</p>
 +</HTML>
 +
 +
 +==== C ====
 +
 +
 +=== Capability control methods ===
 +
 +
 +<HTML>
 +<p>Strategies for avoiding undesirable outcomes by limiting what an AI can do (Bostrom 2014, p129).</p>
 +</HTML>
 +
 +
 +=== Cognitive enhancement ===
 +
 +
 +<HTML>
 +<p>Improvements to an agent’s mental abilities.</p>
 +</HTML>
 +
 +
 +=== Collective superintelligence ===
 +
 +
 +<HTML>
 +<p>“A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system” (Bostrom 2014, p54).</p>
 +</HTML>
 +
 +
 +=== Computation ===
 +
 +
 +<HTML>
 +<p>A sequence of mechanical operations intended to shed light on something other than this mechanical process itself, through an established relationship between the process and the object of interest.</p>
 +</HTML>
 +
 +
 +=== The common good principle ===
 +
 +
 +<HTML>
 +<p>“Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals” (Bostrom 2014, p254).</p>
 +</HTML>
 +
 +
 +=== Crucial consideration ===
 +
 +
 +<HTML>
 +<p>An idea with the potential to change our views substantially, such as by reversing the sign of the desirability of important interventions.</p>
 +</HTML>
 +
 +
 +==== D ====
 +
 +
 +=== Decisive strategic advantage ===
 +
 +
 +<HTML>
 +<p>Strategic superiority (by technology or other means) sufficient to enable an agent to unilaterally control most of the resources of the universe.</p>
 +</HTML>
 +
 +
 +=== Direct specification ===
 +
 +
 +<HTML>
 +<p>An approach to the control problem in which the programmers figure out what humans value, and code it into the AI (Bostrom 2014, p139-40).</p>
 +</HTML>
 +
 +
 +=== Domesticity ===
 +
 +
 +<HTML>
 +<p>An approach to the control problem in which the AI is given goals that limit the range of things it wants to interfere with (Bostrom 2014, p140-1).</p>
 +</HTML>
 +
 +
 +==== E ====
 +
 +
 +=== Emulation modulation ===
 +
 +
 +<HTML>
 +<p>Starting with brain emulations with approximately normal human motivations (see ‘Augmentation’), and modifying their motivations using drugs or digital drug analogs.</p>
 +</HTML>
 +
 +
 +=== Evolutionary selection approach to value learning ===
 +
 +
 +<HTML>
 +<p>A hypothesized approach to the value learning problem which obtains an AI with desirable values by iterative selection, the same way evolutionary selection produced humans  (Bostrom 2014, p187-8).</p>
 +</HTML>
 +
 +
 +=== Existential risk ===
 +
 +
 +<HTML>
 +<p>Risk of an adverse outcome that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential <a href="http://www.nickbostrom.com/existential/risks.html">(Bostrom 2002)</a>.</p>
 +</HTML>
 +
 +
 +==== F ====
 +
 +
 +=== Feature ===
 +
 +
 +<HTML>
 +<p>A dimension in the vector space of activations in a single layer of a neural network (i.e. a neuron activation or a linear combination of activations of different neurons).</p>
 +</HTML>
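
For concreteness, a minimal sketch with made-up numbers: a feature in this sense is a direction in a layer's activation space, and its value on an input is the projection of the layer's activation vector onto that direction.

<code python>
# Minimal sketch: a "feature" as a direction in a layer's activation space.
import numpy as np

layer_activations = np.array([0.2, -1.3, 0.7, 2.1])   # one layer's activations (illustrative)
feature_direction = np.array([0.0, 1.0, 0.0, 0.0])    # a single neuron as a feature
combo_direction   = np.array([0.5, 0.0, -0.5, 1.0])   # a linear combination of neurons

# The feature's value on this input is the dot product with the direction.
print(layer_activations @ feature_direction)  # -1.3
print(layer_activations @ combo_direction)    # 0.1 - 0.35 + 2.1 = 1.85
</code>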
 +
 +
 +=== First principal-agent problem ===
 +
 +
 +<HTML>
 +<p>The well-known problem faced by a sponsor wanting an employee to fulfill their wishes (usually called ‘the principal-agent problem’).</p>
 +</HTML>
 +
 +
 +==== G ====
 +
 +
 +=== Genie ===
 +
 +
 +<HTML>
 +<p>An AI that carries out a high-level command, then waits for another (Bostrom 2014, p148).</p>
 +</HTML>
 +
 +
 +==== H ====
 +
 +
 +=== Hardware overhang ===
 +
 +
 +<HTML>
 +<p>A situation where large amounts of hardware being used for other purposes become available for AI, usually posited to occur when AI reaches human-level capabilities.</p>
 +</HTML>
 +
 +
 +=== Human-level AI ===
 +
 +
 +<HTML>
 +<p>An AI that matches human capabilities in virtually every domain of interest.  Note that this term is used ambiguously; see <a href="/doku.php?id=clarifying_concepts:human-level_ai">our page on human-level AI</a>.  </p>
 +</HTML>
 +
 +
 +=== Human-level hardware ===
 +
 +
 +<HTML>
 +<p>Hardware that matches the information-processing ability of the human brain.</p>
 +</HTML>
 +
 +
 +=== Human-level software ===
 +
 +
 +<HTML>
 +<p>Software that matches the algorithmic efficiency of the human brain, for doing the tasks the human brain does.</p>
 +</HTML>
 +
 +
 +==== I ====
 +
 +
 +=== Impersonal perspective ===
 +
 +
 +<HTML>
 +<p>The view that one should act in the best interests of everyone, including those who may be brought into existence by one’s choices (see Person-affecting perspective).</p>
 +</HTML>
 +
 +
 +=== Incentive methods ===
 +
 +
 +<HTML>
 +<p>Strategies for controlling an AI that consist of setting up the AI’s environment such that it is in the AI’s interest to cooperate; for example, a social environment with punishment or social repercussions often achieves this for contemporary agents (Bostrom 2014, p131).</p>
 +</HTML>
 +
 +
 +=== Incentive wrapping ===
 +
 +
 +<HTML>
 +<p>Provisions in the goals given to an AI that allocate extra rewards to those who helped bring the AI about  (Bostrom 2014, p222-3).</p>
 +</HTML>
 +
 +
 +=== Indirect normativity ===
 +
 +
 +<HTML>
 +<p>An approach to the control problem in which we specify a way to specify what we value, instead of specifying what we value directly (Bostrom 2014, p141-2).</p>
 +</HTML>
 +
 +
 +=== Instrumental convergence thesis ===
 +
 +
 +<HTML>
 +<p>The thesis that we can identify ‘convergent instrumental values’: subgoals that are useful for a wide range of more fundamental goals and in a wide range of situations (Bostrom 2014, p109).</p>
 +</HTML>
 +
 +
 +=== Intelligence explosion ===
 +
 +
 +<HTML>
 +<p>A hypothesized event in which an AI rapidly improves from ‘relatively modest’ to superhuman level (usually imagined to be as a result of recursive self-improvement).</p>
 +</HTML>
 +
 +
 +==== M ====
 +
 +
 +=== Macrostructural development accelerator ===
 +
 +
 +<HTML>
 +<p>An imagined lever, used in thought experiments, that speeds up or slows down the large-scale features of history (e.g. technological change, geopolitical dynamics) while leaving the small-scale features the same.</p>
 +</HTML>
 +
 +
 +=== Mind crime ===
 +
 +
 +<HTML>
 +<p>The mistreatment of morally relevant computations.</p>
 +</HTML>
 +
 +
 +=== Moore’s Law ===
 +
 +
 +<HTML>
 +<p>Any of several different consistent, many-decade patterns of exponential improvement that have been observed in digital technologies. The classic version concerns the number of transistors in a dense integrated circuit, which was observed to be doubling around every year when the ‘law’ was formulated in <a href="https://en.wikipedia.org/wiki/Moore%27s_law">1965</a>. <a href="/doku.php?id=featured_articles:glossary_of_ai_risk_terminology_and_common_ai_terms#Price-Performance_Moores_Law">Price-Performance Moore’s Law</a> is often relevant to AI forecasting.</p>
 +</HTML>
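
To make the exponential pattern concrete, here is a tiny illustrative calculation; the doubling times are assumptions chosen for the example, not historical claims about any particular technology.

<code python>
# Illustrative only: exponential growth under an assumed doubling time.
def growth_factor(years, doubling_time_years=2.0):
    """How much a quantity following a Moore's-Law-style trend multiplies
    over `years`, if it doubles every `doubling_time_years`."""
    return 2 ** (years / doubling_time_years)

print(growth_factor(10))                             # 2**5  = 32x with 2-year doublings
print(growth_factor(10, doubling_time_years=1.0))    # 2**10 = 1024x with 1-year doublings
</code>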
 +
 +
 +=== Moral rightness (MR) AI ===
 +
 +
 +<HTML>
 +<p>An AI which seeks to do what is morally right.</p>
 +</HTML>
 +
 +
 +=== Motivational scaffolding ===
 +
 +
 +<HTML>
 +<p>A hypothesized approach to value learning in which the seed AI is given simple goals, and these goals are replaced with more complex ones once it has developed sufficiently sophisticated representational structure (Bostrom 2014, p191-192).</p>
 +</HTML>
 +
 +
 +=== Multipolar outcome ===
 +
 +
 +<HTML>
 +<p>A situation after the arrival of superintelligence in which no single agent controls most of the resources.</p>
 +</HTML>
 +
 +
 +==== O ====
 +
 +
 +=== Optimization power ===
 +
 +
 +<HTML>
 +<p>The strength of a process’s ability to improve systems.</p>
 +</HTML>
 +
 +
 +=== Oracle ===
 +
 +
 +<HTML>
 +<p>An AI that only answers questions (Bostrom 2014, p145).</p>
 +</HTML>
 +
 +
 +=== Orthogonality thesis ===
 +
 +
 +<HTML>
 +<p>Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.</p>
 +</HTML>
 +
 +
 +==== P ====
 +
 +
 +=== Person-affecting perspective ===
 +
 +
 +<HTML>
 +<p>The view that one should act in the best interests of everyone who already exists, or who will exist independently of one’s choices (see Impersonal perspective).</p>
 +</HTML>
 +
 +
 +=== Perverse instantiation ===
 +
 +
 +<HTML>
 +<p>A solution to a posed goal (e.g. make humans smile) that is destructive in unforeseen ways (e.g. paralyzing face muscles in the smiling position).</p>
 +</HTML>
 +
 +
 +=== Price-Performance Moore’s Law ===
 +
 +
 +<HTML>
 +<p>The <a href="/doku.php?id=ai_timelines:trends_in_the_cost_of_computing">observed pattern</a> of relatively consistent, long term, exponential price decline for computation.</p>
 +</HTML>
 +
 +
 +=== Principle of differential technological development ===
 +
 +
 +<HTML>
 +<p>“Retard the development of dangerous and harmful technologies, especially the ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risk posed by nature or by other technologies” (Bostrom 2014, p230).</p>
 +</HTML>
 +
 +
 +=== Principle of epistemic deference ===
 +
 +
 +<HTML>
 +<p>“A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s position whenever feasible” (Bostrom 2014, p226).</p>
 +</HTML>
 +
 +
 +==== Q ====
 +
 +
 +=== Quality superintelligence ===
 +
 +
 +<HTML>
 +<p>“A system that is at least as fast as a human mind and vastly qualitatively smarter” (Bostrom 2014, p56).</p>
 +</HTML>
 +
 +
 +==== R ====
 +
 +
 +=== Recalcitrance ===
 +
 +
 +<HTML>
 +<p>How difficult a system is to improve.</p>
 +</HTML>
 +
 +
 +=== Recursive self-improvement ===
 +
 +
 +<HTML>
 +<p>The envisaged process of AI (perhaps a seed AI) iteratively improving itself.</p>
 +</HTML>
 +
 +
 +=== Reinforcement learning approach to value learning ===
 +
 +
 +<HTML>
 +<p>A hypothesized approach to value learning in which the AI is rewarded for behaviors that more closely approximate human values (Bostrom 2014, p188-9).</p>
 +</HTML>
 +
 +
 +==== S ====
 +
 +
 +=== Second principal-agent problem ===
 +
 +
 +<HTML>
 +<p>The emerging problem faced by a developer wanting their AI to fulfill the developer’s wishes.</p>
 +</HTML>
 +
 +
 +=== Seed AI ===
 +
 +
 +<HTML>
 +<p>A modest AI which can bootstrap into an impressive AI by improving its own architecture.</p>
 +</HTML>
 +
 +
 +=== Singleton ===
 +
 +
 +<HTML>
 +<p>An agent that is internally coordinated and has no opponents.</p>
 +</HTML>
 +
 +
 +=== Sovereign ===
 +
 +
 +<HTML>
 +<p>An AI that acts autonomously in the world, in pursuit of potentially long range objectives (Bostrom 2014, p148).</p>
 +</HTML>
 +
 +
 +=== Speed superintelligence ===
 +
 +
 +<HTML>
 +<p>“A system that can do all that a human intellect can do, but much faster” (Bostrom 2014, p53).</p>
 +</HTML>
 +
 +
 +=== State risk ===
 +
 +
 +<HTML>
 +<p>A risk that comes from being in a certain state, such that the amount of risk is a function of the time spent there. For example, the state of not having the technology to defend from asteroid impacts carries risk proportional to the time we spend in it.</p>
 +</HTML>
 +
 +
 +=== Step risk ===
 +
 +
 +<HTML>
 +<p>A risk that comes from making a transition. Here the amount of risk is not a simple function of how long the transition takes.  For example, traversing a minefield is not safer if done more quickly.</p>
 +</HTML>
 +
 +
 +=== Stunting ===
 +
 +
 +<HTML>
 +<p>A control method that consists of limiting the AI’s capabilities, for instance by limiting the AI’s access to information (Bostrom 2014, p135).</p>
 +</HTML>
 +
 +
 +=== Superintelligence ===
 +
 +
 +<HTML>
 +<p>Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest (Bostrom 2014, p22).</p>
 +</HTML>
 +
 +
 +==== T ====
 +
 +
 +=== Takeoff ===
 +
 +
 +<HTML>
 +<p>The event of the emergence of a superintelligence, often characterized by its speed: ‘slow takeoff’ takes decades or centuries, ‘moderate takeoff’ takes months or years and ‘fast takeoff’ takes minutes to days.</p>
 +</HTML>
 +
 +
 +=== Technological completion conjecture ===
 +
 +
 +<HTML>
 +<p>If scientific and technological development efforts do not cease, then all important basic capabilities that could be obtained through some possible technology will be obtained (Bostrom 2014, p127).</p>
 +</HTML>
 +
 +
 +=== Technology coupling ===
 +
 +
 +<HTML>
 +<p>A predictable timing relationship between two technologies, such that hastening the first technology will hasten the second, either because the second is a precursor or because it is a natural consequence (Bostrom 2014, p236-8). For example, brain emulation is plausibly coupled to ‘neuromorphic’ AI, because the understanding required to emulate a brain might allow one to more quickly create an AI on similar principles.</p>
 +</HTML>
 +
 +
 +=== Tool AI ===
 +
 +
 +<HTML>
 +<p>An AI that is not ‘like an agent’, but like a more flexible and capable version of contemporary software. Most notably perhaps, it is not goal-directed (Bostrom 2014, p151).</p>
 +</HTML>
 +
 +
 +==== U ====
 +
 +
 +=== Utility function ===
 +
 +
 +<HTML>
 +<p>A mapping from states of the world to real numbers (‘utilities’), describing an entity’s degree of preference for different states of the world. Given the choice between two lotteries, the entity prefers the lottery with the higher ‘expected utility’: the sum of the utilities of the possible states, weighted by the probability of those states occurring.</p>
 +</HTML>
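
A minimal sketch of the expected-utility comparison just described; the outcomes, utilities, and probabilities are invented for illustration.

<code python>
# Minimal sketch: comparing two lotteries by expected utility.
# A lottery is a list of (probability, outcome) pairs; `utility` maps outcomes
# to real numbers. All values here are made up for illustration.
utility = {"status quo": 0.0, "small gain": 1.0, "large gain": 10.0}

def expected_utility(lottery):
    return sum(p * utility[outcome] for p, outcome in lottery)

lottery_a = [(1.0, "small gain")]                         # a certain small gain
lottery_b = [(0.2, "large gain"), (0.8, "status quo")]    # a risky shot at a large gain

# The entity prefers whichever lottery has the higher expected utility.
print(expected_utility(lottery_a), expected_utility(lottery_b))  # 1.0 vs 2.0
</code>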
 +
 +
 +==== V ====
 +
 +
 +=== Value learning ===
 +
 +
 +<HTML>
 +<p>An approach to the value loading problem in which the AI learns the values that humans want it to pursue (Bostrom 2014, p207).</p>
 +</HTML>
 +
 +
 +=== Value loading problem ===
 +
 +
 +<HTML>
 +<p>The problem of causing the AI to pursue human values (Bostrom 2014, p185).</p>
 +</HTML>
 +
 +
 +==== W ====
 +
 +
 +=== Wise-Singleton Sustainability Threshold ===
 +
 +
 +<HTML>
 +<p>A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe (Bostrom 2014, p100).</p>
 +</HTML>
 +
 +
 +=== Whole-brain emulation ===
 +
 +
 +<HTML>
 +<p>Machine intelligence created by copying the computational structure of the human brain.</p>
 +</HTML>
 +
 +
 +=== Word embedding ===
 +
 +
 +<HTML>
 +<p>A mapping of words to high-dimensional vectors that has been trained to be useful in a word task such that the arrangement of words in the vector space is meaningful. For instance, words near one another in the vector space are related, and similar relationships between different pairs of words correspond to similar vectors between them, so that e.g. if E(x) is the vector for the word ‘x’, then E(king) – E(queen) ≈ E(man) – E(woman). Word embeddings are explained in more detail <a href="https://colah.github.io/posts/2014-07-NLP-RNNs-Representations/">here</a>.</p>
 +</HTML>
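
A toy sketch of the vector arithmetic described above, using made-up 2-dimensional vectors chosen so that the analogy holds exactly; real embeddings are learned from data and typically have hundreds of dimensions.

<code python>
# Toy illustration of word-embedding arithmetic with made-up 2-D vectors.
import numpy as np

E = {
    "king":  np.array([1.0, 1.0]),   # axes roughly: [royalty, maleness] (illustrative)
    "queen": np.array([1.0, 0.0]),
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, 0.0]),
}

# The analogy king : queen :: man : woman appears as parallel difference vectors.
print(E["king"] - E["queen"])             # [0. 1.]
print(E["man"] - E["woman"])              # [0. 1.]
# Equivalently, E(king) - E(man) + E(woman) ≈ E(queen).
print(E["king"] - E["man"] + E["woman"])  # [1. 0.] == E["queen"]
</code>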
 +
 +
 +===== Notes =====
 +
 +
 +<HTML>
 +<ol class="easy-footnotes-wrapper">
 +<li><div class="li">
 +<span class="easy-footnote-margin-adjust" id="easy-footnote-bottom-1-358"></span>Bostrom, Nick. <em>Superintelligence: Paths, Dangers, Strategies</em>. 1st edition. Oxford: Oxford University Press, 2014.<a class="easy-footnote-to-top" href="#easy-footnote-1-358"></a>
 +</div></li>
 +<li><div class="li">
 +<span class="easy-footnote-margin-adjust" id="easy-footnote-bottom-2-358"></span>Nielsen, Michael A. “Neural Networks and Deep Learning,” 2015. <a href="http://neuralnetworksanddeeplearning.com/">http://neuralnetworksanddeeplearning.com</a>.<a class="easy-footnote-to-top" href="#easy-footnote-2-358"></a>
 +</div></li>
 +</ol>
 +</HTML>
 +
 +
  