Stuart Russell’s description of AI risk

Published 11 September, 2017; last updated 28 May, 2020

Stuart Russell has argued that advanced AI poses a risk because it will have the ability to make high-quality decisions, yet may not share human values perfectly.


Stuart Russell describes the risk from highly advanced AI as follows. In short:

The primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:

1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.  This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.
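Russell's point about unconstrained variables can be illustrated with a toy optimization. The scenario below (a resource budget split between "paperclips", which the utility function rewards, and "habitat", which it ignores) is a hypothetical example, not from Russell's text; the variable names and budget are assumptions made for illustration.

```python
# Toy sketch of Russell's point: the stated utility depends on only
# k = 1 of the n = 2 variables, so the optimizer is free to drive the
# other variable (something we actually care about) to an extreme.

def utility(paperclips, habitat):
    # The objective rewards paperclips only; "habitat" does not
    # appear in the utility function at all.
    return paperclips

BUDGET = 10  # total resources to split between the two variables

# Brute-force search over all allocations (paperclips, habitat)
# with paperclips + habitat == BUDGET.
best = max(
    ((p, BUDGET - p) for p in range(BUDGET + 1)),
    key=lambda alloc: utility(*alloc),
)
print(best)  # -> (10, 0): habitat is pushed to its extreme value, 0
```

The optimizer is not hostile to habitat; it simply has no term for it, so any allocation that trades habitat for paperclips looks like an improvement. This is the formal version of "you get exactly what you ask for, not what you want."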
