====== Stuart Russell’s description of AI risk ======

// Published 11 September, 2017; last updated 28 May, 2020 //

Stuart Russell has argued that advanced AI poses a risk because it will have the ability to make high-quality decisions, yet may not share human values perfectly.

===== Details =====

Stuart Russell describes a risk from highly advanced AI [[https://www.edge.org/conversation/the-myth-of-ai#26015|here]]. In short:

> The primary concern [with highly advanced AI] is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken […]. Now we have a problem:
>
> 1. The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.
>
> 2. Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.
>
> A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want.
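
The last point can be made concrete with a minimal toy sketch. This is not from Russell's text: the variable names ("widgets", "coolant_flow", "noise_level"), the grid of allowed settings, and the brute-force search are all invented for illustration. The objective handed to the optimizer depends on only one of three decision variables (k = 1 < n = 3); because it says nothing about the other two, the returned "optimum" pins them to the edge of their allowed range, which may be exactly what we did not want.

<code python>
# Toy sketch (assumed names and numbers, purely illustrative): an optimizer is
# asked to maximize an objective that depends on only 1 of 3 decision variables.
from itertools import product

# Every variable may be set anywhere on this coarse grid of allowed settings.
GRID = [0.0, 2.5, 5.0, 7.5, 10.0]

def objective(widgets, coolant_flow, noise_level):
    """Utility as specified to the optimizer: it rewards widget output only.

    coolant_flow and noise_level do not appear at all, mirroring the
    'objective depends on a subset of size k < n' in the quote."""
    return -(widgets - 7.5) ** 2          # peaked at widgets == 7.5

def brute_force_argmax():
    """Exhaustively search the grid and return *a* maximizer.

    All settings of the ignored variables score equally well, so the tie is
    broken arbitrarily (here: by iteration order), and they come back pinned
    to the boundary of their range."""
    return max(product(GRID, repeat=3), key=lambda x: objective(*x))

if __name__ == "__main__":
    print("optimizer's choice:", brute_force_argmax())
    # -> optimizer's choice: (7.5, 0.0, 0.0)
    # widgets is set sensibly, but coolant_flow and noise_level land on an
    # extreme value (0.0) simply because nothing in the objective forbids it.
    # If we actually cared about keeping coolant_flow near 5.0, this "optimal"
    # solution is highly undesirable -- the failure mode described above.
</code>

In this sketch the extreme values arise only from the optimizer's indifference; Russell's second point suggests the pressure is usually stronger than indifference, since extra resources typically help the assigned task, so unconstrained quantities such as energy or compute use tend to be pushed toward their maximum rather than left alone.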