<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.aiimpacts.org/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>AI Impacts Wiki nature_of_ai</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/"/>
    <id>https://wiki.aiimpacts.org/</id>
    <updated>2026-05-17T15:26:28+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://wiki.aiimpacts.org/feed.php" />
    <entry>
        <title>Do neural networks learn human concepts?</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/nature_of_ai/do_neural_networks_learn_human_concepts?rev=1670459274&amp;do=diff"/>
        <published>2022-12-08T00:27:54+00:00</published>
        <updated>2022-12-08T00:27:54+00:00</updated>
        <id>https://wiki.aiimpacts.org/nature_of_ai/do_neural_networks_learn_human_concepts?rev=1670459274&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="nature_of_ai" />
        <content>&lt;pre&gt;
@@ -1,52 +1,26 @@
  ====== Do neural networks learn human concepts? ======
  
- // Published 06 December, 2021 //
+ //Published 06 December, 2021//
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;This page is a stub. It does not necessarily represent much of what is known on the topic.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
+ //This page is a stub. It does not necessarily represent much of what is known on the topic.//
+ 
  
- 
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;Our understanding is that the degree to which neural networks learn concepts that are potentially understandable to humans is an open question.&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
+ Our understanding is that the degree to which neural networks learn concepts that are potentially understandable to humans is an open question.
  
  
  ===== Details =====
  
- 
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;A very incomplete list of sources on the topic:&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
- 
- 
- &amp;lt;HTML&amp;gt;
- &amp;lt;ul&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Acquisition of Chess Knowledge in AlphaZero&amp;lt;/strong&amp;gt; (McGrath et al, 2021)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-3067&amp;quot; title=&amp;#039;McGrath, Thomas, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. “Acquisition of Chess Knowledge in AlphaZero.” &amp;amp;lt;em&amp;amp;gt;ArXiv:2111.09259 [Cs, Stat]&amp;amp;lt;/em&amp;amp;gt;, November 27, 2021. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/2111.09259&amp;quot;&amp;amp;gt;http://arxiv.org/abs/2111.09259&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;strong&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/strong&amp;gt;From the paper: ‘…In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where these concepts are represented in the AlphaZero network….’&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Zoom in: An Introduction to Circuits&amp;lt;/strong&amp;gt; (Olah et al, 2020)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-3067&amp;quot; title=&amp;#039;Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. “Zoom In: An Introduction to Circuits.” &amp;amp;lt;em&amp;amp;gt;Distill&amp;amp;lt;/em&amp;amp;gt; 5, no. 3 (March 10, 2020): e00024.001. &amp;amp;lt;a href=&amp;quot;https://doi.org/10.23915/distill.00024.001&amp;quot;&amp;amp;gt;https://doi.org/10.23915/distill.00024.001&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;
-                 From the paper: ‘In contrast to the typical picture of neural networks as a black box, we’ve been surprised how approachable the network is on this scale. Not only do neurons seem understandable (even ones that initially seemed inscrutable), but the “circuits” of connections between them seem to be meaningful algorithms corresponding to facts about the world. You can watch a circle detector be assembled from curves. You can see a dog head be assembled from eyes, snout, fur and tongue. You can observe how a car is composed from wheels and windows. You can even find circuits implementing simple logic: cases where the network implements AND, OR or XOR over high-level visual features.’&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;/ul&amp;gt;
- &amp;lt;/HTML&amp;gt;
- 
- 
- ===== Notes =====
+ A very incomplete list of sources on the topic:
  
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;Featured image: from Olah, et al., “Zoom In: An Introduction to Circuits”, Distill, 2020., &amp;lt;a href=&amp;quot;https://creativecommons.org/licenses/by/4.0/&amp;quot;&amp;gt;CC-BY 4.0&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
+   * **Acquisition of Chess Knowledge in AlphaZero** (McGrath et al, 2021)((McGrath, Thomas, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. “Acquisition of Chess Knowledge in AlphaZero.” November 27, 2021. [[http://arxiv.org/abs/2111.09259|http://arxiv.org/abs/2111.09259]]))
+ From the paper: ‘…In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where these concepts are represented in the AlphaZero network….’
  
+   * **Zoom in: An Introduction to Circuits** (Olah et al, 2020)((Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. “Zoom In: An Introduction to Circuits.” Distill 5, no. 3. March 10, 2020. [[https://doi.org/10.23915/distill.00024.001|https://doi.org/10.23915/distill.00024.001]]))
+ From the paper: ‘In contrast to the typical picture of neural networks as a black box, we’ve been surprised how approachable the network is on this scale. Not only do neurons seem understandable (even ones that initially seemed inscrutable), but the “circuits” of connections between them seem to be meaningful algorithms corresponding to facts about the world. You can watch a circle detector be assembled from curves. You can see a dog head be assembled from eyes, snout, fur and tongue. You can observe how a car is composed from wheels and windows. You can even find circuits implementing simple logic: cases where the network implements AND, OR or XOR over high-level visual features.’
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;McGrath, Thomas, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. “Acquisition of Chess Knowledge in AlphaZero.” &amp;lt;em&amp;gt;ArXiv:2111.09259 [Cs, Stat]&amp;lt;/em&amp;gt;, November 27, 2021. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/2111.09259&amp;quot;&amp;gt;http://arxiv.org/abs/2111.09259&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-3067&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
- &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. “Zoom In: An Introduction to Circuits.” &amp;lt;em&amp;gt;Distill&amp;lt;/em&amp;gt; 5, no. 3 (March 10, 2020): e00024.001. &amp;lt;a href=&amp;quot;https://doi.org/10.23915/distill.00024.001&amp;quot;&amp;gt;https://doi.org/10.23915/distill.00024.001&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-3067&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
- &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;/ol&amp;gt;
- &amp;lt;/HTML&amp;gt;
  
+   * **Harmonizing the object recognition strategies of deep neural networks with humans** (Fel et al, 2022)((Thomas Fel, Ivan Felipe, Drew Linsley, Thomas Serre. “Harmonizing the object recognition strategies of deep neural networks with humans.” Nov 8, 2022. [[https://arxiv.org/abs/2211.04533|https://arxiv.org/abs/2211.04533]]))
+ From the paper: ‘Across 84 different DNNs trained on ImageNet and three independent datasets measuring the where and the how of human visual strategies for object recognition on those images, we find a systematic trade-off between DNN categorization accuracy and alignment with human visual strategies for object recognition.’
  
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1,52 +1,26 @@
  ====== Do neural networks learn human concepts? ======
  
- // Published 06 December, 2021 //
+ //Published 06 December, 2021//
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;This page is a stub. It does not necessarily represent much of what is known on the topic.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
+ //This page is a stub. It does not necessarily represent much of what is known on the topic.//
+ 
  
- 
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;Our understanding is that the degree to which neural networks learn concepts that are potentially understandable to humans is an open question.&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
+ Our understanding is that the degree to which neural networks learn concepts that are potentially understandable to humans is an open question.
  
  
  ===== Details =====
  
- 
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;A very incomplete list of sources on the topic:&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
- 
- 
- &amp;lt;HTML&amp;gt;
- &amp;lt;ul&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Acquisition of Chess Knowledge in AlphaZero&amp;lt;/strong&amp;gt; (McGrath et al, 2021)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-3067&amp;quot; title=&amp;#039;McGrath, Thomas, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. “Acquisition of Chess Knowledge in AlphaZero.” &amp;amp;lt;em&amp;amp;gt;ArXiv:2111.09259 [Cs, Stat]&amp;amp;lt;/em&amp;amp;gt;, November 27, 2021. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/2111.09259&amp;quot;&amp;amp;gt;http://arxiv.org/abs/2111.09259&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;strong&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/strong&amp;gt;From the paper: ‘…In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where these concepts are represented in the AlphaZero network….’&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Zoom in: An Introduction to Circuits&amp;lt;/strong&amp;gt; (Olah et al, 2020)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-3067&amp;quot; title=&amp;#039;Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. “Zoom In: An Introduction to Circuits.” &amp;amp;lt;em&amp;amp;gt;Distill&amp;amp;lt;/em&amp;amp;gt; 5, no. 3 (March 10, 2020): e00024.001. &amp;amp;lt;a href=&amp;quot;https://doi.org/10.23915/distill.00024.001&amp;quot;&amp;amp;gt;https://doi.org/10.23915/distill.00024.001&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;
-                 From the paper: ‘In contrast to the typical picture of neural networks as a black box, we’ve been surprised how approachable the network is on this scale. Not only do neurons seem understandable (even ones that initially seemed inscrutable), but the “circuits” of connections between them seem to be meaningful algorithms corresponding to facts about the world. You can watch a circle detector be assembled from curves. You can see a dog head be assembled from eyes, snout, fur and tongue. You can observe how a car is composed from wheels and windows. You can even find circuits implementing simple logic: cases where the network implements AND, OR or XOR over high-level visual features.’&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;/ul&amp;gt;
- &amp;lt;/HTML&amp;gt;
- 
- 
- ===== Notes =====
+ A very incomplete list of sources on the topic:
  
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;Featured image: from Olah, et al., “Zoom In: An Introduction to Circuits”, Distill, 2020., &amp;lt;a href=&amp;quot;https://creativecommons.org/licenses/by/4.0/&amp;quot;&amp;gt;CC-BY 4.0&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
- &amp;lt;/HTML&amp;gt;
+   * **Acquisition of Chess Knowledge in AlphaZero** (McGrath et al, 2021)((McGrath, Thomas, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. “Acquisition of Chess Knowledge in AlphaZero.” November 27, 2021. [[http://arxiv.org/abs/2111.09259|http://arxiv.org/abs/2111.09259]]))
+ From the paper: ‘…In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess. By probing for a broad range of human chess concepts we show when and where these concepts are represented in the AlphaZero network….’
  
+   * **Zoom in: An Introduction to Circuits** (Olah et al, 2020)((Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. “Zoom In: An Introduction to Circuits.” Distill 5, no. 3. March 10, 2020. [[https://doi.org/10.23915/distill.00024.001|https://doi.org/10.23915/distill.00024.001]]))
+ From the paper: ‘In contrast to the typical picture of neural networks as a black box, we’ve been surprised how approachable the network is on this scale. Not only do neurons seem understandable (even ones that initially seemed inscrutable), but the “circuits” of connections between them seem to be meaningful algorithms corresponding to facts about the world. You can watch a circle detector be assembled from curves. You can see a dog head be assembled from eyes, snout, fur and tongue. You can observe how a car is composed from wheels and windows. You can even find circuits implementing simple logic: cases where the network implements AND, OR or XOR over high-level visual features.’
  
- &amp;lt;HTML&amp;gt;
- &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;McGrath, Thomas, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, and Vladimir Kramnik. “Acquisition of Chess Knowledge in AlphaZero.” &amp;lt;em&amp;gt;ArXiv:2111.09259 [Cs, Stat]&amp;lt;/em&amp;gt;, November 27, 2021. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/2111.09259&amp;quot;&amp;gt;http://arxiv.org/abs/2111.09259&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-3067&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
- &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-3067&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Olah, Chris, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. “Zoom In: An Introduction to Circuits.” &amp;lt;em&amp;gt;Distill&amp;lt;/em&amp;gt; 5, no. 3 (March 10, 2020): e00024.001. &amp;lt;a href=&amp;quot;https://doi.org/10.23915/distill.00024.001&amp;quot;&amp;gt;https://doi.org/10.23915/distill.00024.001&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-3067&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
- &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;/ol&amp;gt;
- &amp;lt;/HTML&amp;gt;
  
+   * **Harmonizing the object recognition strategies of deep neural networks with humans** (Fel et al, 2022)((Thomas Fel, Ivan Felipe, Drew Linsley, Thomas Serre. “Harmonizing the object recognition strategies of deep neural networks with humans.” Nov 8, 2022. [[https://arxiv.org/abs/2211.04533|https://arxiv.org/abs/2211.04533]]))
+ From the paper: ‘Across 84 different DNNs trained on ImageNet and three independent datasets measuring the where and the how of human visual strategies for object recognition on those images, we find a systematic trade-off between DNN categorization accuracy and alignment with human visual strategies for object recognition.’
  
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Sources of advantage for digital agents over biological agents</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/nature_of_ai/sources_of_advantage_for_digital_agents_over_biological_agents?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/nature_of_ai/sources_of_advantage_for_digital_agents_over_biological_agents?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="nature_of_ai" />
        <content>&lt;pre&gt;
@@ -1 +1,61 @@
+ ====== Sources of advantage for digital agents over biological agents ======
+ 
+ // Published 04 September, 2016; last updated 28 September, 2017 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Artificial agents should have several advantages over humans.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== Details =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The following is an excerpt from &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies&amp;quot;&amp;gt;Superintelligence&amp;lt;/a&amp;gt; (Bostrom, 2014),  reproduced with permission. It outlines ten advantages Bostrom expects digital intelligences to have over human intelligences.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Sources of advantage for digital intelligence&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;p&amp;gt;Minor changes in brain volume and wiring can have major consequences, as we see when we compare the intellectual and technological achievements of humans with those of other apes. The far greater changes in computing resources and architecture that machine intelligence will enable will probably have consequences that are even more profound. It is difficult, perhaps impossible, for us to form an intuitive sense of the aptitudes of a superintelligence; but we can at least get an inkling of the space of possibilities by looking at some of the advantages open to digital minds. The hardware advantages are easiest to appreciate:&amp;lt;br/&amp;gt;
+ &amp;lt;span style=&amp;quot;color: #ededed;&amp;quot;&amp;gt;.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Speed of computational elements.&amp;lt;/em&amp;gt; Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~ 2 GHz).[19] As a consequence, the human brain is forced to rely on massive parallelization and is incapable of rapidly performing any computation that requires a large number of sequential operations.[20] (Anything the brain does in under a second cannot use much more than a hundred sequential operations—perhaps only a few dozen.) Yet many of the most practically important algorithms in programming and computer science are not easily parallelizable. Many cognitive tasks could be performed far more efficiently if the brain’s native support for parallelizable pattern-matching algorithms were complemented by, and integrated with, support for fast sequential processing.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Internal communication speed.&amp;lt;/em&amp;gt; Axons carry action potentials at speeds of 120 m/s or less, whereas electronic processing cores can communicate optically at the speed of light (300,000,000 m/s).[21] The sluggishness of neural signals limits how big a biological brain can be while functioning as a single processing unit. For example, to achieve a round-trip latency of less than 10 ms between any two elements in a system, biological brains must be smaller than 0.11 m&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;. An electronic system, on the other hand, could be 6.1×10&amp;lt;sup&amp;gt;17&amp;lt;/sup&amp;gt; m&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;, about the size of a dwarf planet: eighteen orders of magnitude larger.[22]&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Number of computational elements.&amp;lt;/em&amp;gt; The human brain has somewhat fewer than 100 billion neurons.[23] Humans have about three and a half times the brain size of chimpanzees (though only one-fifth the brain size of sperm whales).[24] The number of neurons in a biological creature is most obviously limited by cranial volume and metabolic constraints, but other factors may also be significant for larger brains (such as cooling, development time, and signal-conductance delays—see the previous point). By contrast, computer hardware is indefinitely scalable up to very high physical limits.[25] Supercomputers can be warehouse-sized or larger, with additional remote capacity added via high-speed cables.[26]&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Storage capacity.&amp;lt;/em&amp;gt; Human working memory is able to hold no more than some four or five chunks of information at any given time.[27] While it would be misleading to compare the size of human working memory directly with the amount of RAM in a digital computer, it is clear that the hardware advantages of digital intelligences will make it possible for them to have larger working memories. This might enable such minds to intuitively grasp complex relationships that humans can only fumblingly handle via plodding calculation.[28] Human long-term memory is also limited, though it is unclear whether we manage to exhaust its storage capacity during the course of an ordinary lifetime—the rate at which we accumulate information is so slow. (On one estimate, the adult human brain stores about one billion bits—a couple of orders of magnitude less than a low-end smartphone.[29]) Both the amount of information stored and the speed with which it can be accessed could thus be vastly greater in a machine brain than in a biological brain.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Reliability, lifespan, sensors, etc.&amp;lt;/em&amp;gt; Machine intelligences might have various other hardware advantages. For example, biological neurons are less reliable than transistors.[30] Since noisy computing necessitates redundant encoding schemes that use multiple elements to encode a single bit of information, a digital brain might derive some efficiency gains from the use of reliable high-precision computing elements. Brains become fatigued after a few hours of work and start to permanently decay after a few decades of subjective time; microprocessors are not subject to these limitations. Data flow into a machine intelligence could be increased by adding millions of sensors. Depending on the technology used, a machine might have reconfigurable hardware that can be optimized for changing task requirements, whereas much of the brain’s architecture is fixed from birth or only slowly changeable (though the details of synaptic connectivity can change over shorter timescales, like days).[31]&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;p&amp;gt;At present, the computational power of the biological brain still compares favorably with that of digital computers, though top-of-the-line supercomputers are attaining levels of performance that are within the range of plausible estimates of the brain’s processing power.[32] But hardware is rapidly improving, and the ultimate limits of hardware performance are vastly higher than those of biological computing substrates.&amp;lt;/p&amp;gt;
+ &amp;lt;p&amp;gt;Digital minds will also benefit from major advantages in software:&amp;lt;br/&amp;gt;
+ &amp;lt;span style=&amp;quot;color: #ededed;&amp;quot;&amp;gt;.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Editability.&amp;lt;/em&amp;gt; It is easier to experiment with parameter variations in software than in neural wetware. For example, with a whole brain emulation one could easily trial what happens if one adds more neurons in a particular cortical area or if one increases or decreases their excitability. Running such experiments in living biological brains would be far more difficult.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Duplicability.&amp;lt;/em&amp;gt; With software, one can quickly make arbitrarily many high-fidelity copies to fill the available hardware base. Biological brains, by contrast, can be reproduced only very slowly; and each new instance starts out in a helpless state, remembering nothing of what its parents learned in their lifetimes.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Goal coordination.&amp;lt;/em&amp;gt; Human collectives are replete with inefficiencies arising from the fact that it is nearly impossible to achieve complete uniformity of purpose among the members of a large group—at least until it becomes feasible to induce docility on a large scale by means of drugs or genetic selection. A “copy clan” (a group of identical or almost identical programs sharing a common goal) would avoid such coordination problems.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Memory sharing.&amp;lt;/em&amp;gt; Biological brains need extended periods of training and mentorship whereas digital minds could acquire new memories and skills by swapping data files. A population of a billion copies of an AI program could synchronize their databases periodically, so that all the instances of the program know everything that any instance learned during the previous hour. (Direct memory transfer requires standardized representational formats. Easy swapping of high-level cognitive content would therefore not be possible between just any pair of machine intelligences. In particular, it would not be possible among first-generation whole brain emulations.)&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;New modules, modalities, and algorithms.&amp;lt;/em&amp;gt; Visual perception seems to us easy and effortless, quite unlike solving textbook geometry problems—this despite the fact that it takes a massive amount of computation to reconstruct, from the two- dimensional patterns of stimulation on our retinas, a three-dimensional representation of a world populated with recognizable objects. The reason this seems easy is that we have dedicated low-level neural machinery for processing visual information. This low-level processing occurs unconsciously and automatically, without draining our mental energy or conscious attention. Music perception, language use, social cognition, and other forms of information processing that are “natural” for us humans seem to be likewise supported by dedicated neurocomputational modules. An artificial mind that had such specialized support for other cognitive domains that have become important in the contemporary world—such as engineering, computer programming, and business strategy—would have big advantages over minds like ours that have to rely on clunky general-purpose cognition to think about such things. New algorithms may also be developed to take advantage of the distinct affordances of digital hardware, such as its support for fast serial processing.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;p&amp;gt; &amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,61 @@
+ ====== Sources of advantage for digital agents over biological agents ======
+ 
+ // Published 04 September, 2016; last updated 28 September, 2017 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Artificial agents should have several advantages over humans.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== Details =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The following is an excerpt from &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies&amp;quot;&amp;gt;Superintelligence&amp;lt;/a&amp;gt; (Bostrom, 2014),  reproduced with permission. It outlines ten advantages Bostrom expects digital intelligences to have over human intelligences.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Sources of advantage for digital intelligence&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;p&amp;gt;Minor changes in brain volume and wiring can have major consequences, as we see when we compare the intellectual and technological achievements of humans with those of other apes. The far greater changes in computing resources and architecture that machine intelligence will enable will probably have consequences that are even more profound. It is difficult, perhaps impossible, for us to form an intuitive sense of the aptitudes of a superintelligence; but we can at least get an inkling of the space of possibilities by looking at some of the advantages open to digital minds. The hardware advantages are easiest to appreciate:&amp;lt;br/&amp;gt;
+ &amp;lt;/p&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Speed of computational elements.&amp;lt;/em&amp;gt; Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~ 2 GHz).[19] As a consequence, the human brain is forced to rely on massive parallelization and is incapable of rapidly performing any computation that requires a large number of sequential operations.[20] (Anything the brain does in under a second cannot use much more than a hundred sequential operations—perhaps only a few dozen.) Yet many of the most practically important algorithms in programming and computer science are not easily parallelizable. Many cognitive tasks could be performed far more efficiently if the brain’s native support for parallelizable pattern-matching algorithms were complemented by, and integrated with, support for fast sequential processing.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Internal communication speed.&amp;lt;/em&amp;gt; Axons carry action potentials at speeds of 120 m/s or less, whereas electronic processing cores can communicate optically at the speed of light (300,000,000 m/s).[21] The sluggishness of neural signals limits how big a biological brain can be while functioning as a single processing unit. For example, to achieve a round-trip latency of less than 10 ms between any two elements in a system, biological brains must be smaller than 0.11 m&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;. An electronic system, on the other hand, could be 6.1×10&amp;lt;sup&amp;gt;17&amp;lt;/sup&amp;gt; m&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;, about the size of a dwarf planet: eighteen orders of magnitude larger.[22]&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Number of computational elements.&amp;lt;/em&amp;gt; The human brain has somewhat fewer than 100 billion neurons.[23] Humans have about three and a half times the brain size of chimpanzees (though only one-fifth the brain size of sperm whales).[24] The number of neurons in a biological creature is most obviously limited by cranial volume and metabolic constraints, but other factors may also be significant for larger brains (such as cooling, development time, and signal-conductance delays—see the previous point). By contrast, computer hardware is indefinitely scalable up to very high physical limits.[25] Supercomputers can be warehouse-sized or larger, with additional remote capacity added via high-speed cables.[26]&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Storage capacity.&amp;lt;/em&amp;gt; Human working memory is able to hold no more than some four or five chunks of information at any given time.[27] While it would be misleading to compare the size of human working memory directly with the amount of RAM in a digital computer, it is clear that the hardware advantages of digital intelligences will make it possible for them to have larger working memories. This might enable such minds to intuitively grasp complex relationships that humans can only fumblingly handle via plodding calculation.[28] Human long-term memory is also limited, though it is unclear whether we manage to exhaust its storage capacity during the course of an ordinary lifetime—the rate at which we accumulate information is so slow. (On one estimate, the adult human brain stores about one billion bits—a couple of orders of magnitude less than a low-end smartphone.[29]) Both the amount of information stored and the speed with which it can be accessed could thus be vastly greater in a machine brain than in a biological brain.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Reliability, lifespan, sensors, etc.&amp;lt;/em&amp;gt; Machine intelligences might have various other hardware advantages. For example, biological neurons are less reliable than transistors.[30] Since noisy computing necessitates redundant encoding schemes that use multiple elements to encode a single bit of information, a digital brain might derive some efficiency gains from the use of reliable high-precision computing elements. Brains become fatigued after a few hours of work and start to permanently decay after a few decades of subjective time; microprocessors are not subject to these limitations. Data flow into a machine intelligence could be increased by adding millions of sensors. Depending on the technology used, a machine might have reconfigurable hardware that can be optimized for changing task requirements, whereas much of the brain’s architecture is fixed from birth or only slowly changeable (though the details of synaptic connectivity can change over shorter timescales, like days).[31]&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;p&amp;gt;At present, the computational power of the biological brain still compares favorably with that of digital computers, though top-of-the-line supercomputers are attaining levels of performance that are within the range of plausible estimates of the brain’s processing power.[32] But hardware is rapidly improving, and the ultimate limits of hardware performance are vastly higher than those of biological computing substrates.&amp;lt;/p&amp;gt;
+ &amp;lt;p&amp;gt;Digital minds will also benefit from major advantages in software:&amp;lt;br/&amp;gt;
+ &amp;lt;/p&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Editability.&amp;lt;/em&amp;gt; It is easier to experiment with parameter variations in software than in neural wetware. For example, with a whole brain emulation one could easily trial what happens if one adds more neurons in a particular cortical area or if one increases or decreases their excitability. Running such experiments in living biological brains would be far more difficult.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Duplicability.&amp;lt;/em&amp;gt; With software, one can quickly make arbitrarily many high-fidelity copies to fill the available hardware base. Biological brains, by contrast, can be reproduced only very slowly; and each new instance starts out in a helpless state, remembering nothing of what its parents learned in their lifetimes.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Goal coordination.&amp;lt;/em&amp;gt; Human collectives are replete with inefficiencies arising from the fact that it is nearly impossible to achieve complete uniformity of purpose among the members of a large group—at least until it becomes feasible to induce docility on a large scale by means of drugs or genetic selection. A “copy clan” (a group of identical or almost identical programs sharing a common goal) would avoid such coordination problems.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;Memory sharing.&amp;lt;/em&amp;gt; Biological brains need extended periods of training and mentorship whereas digital minds could acquire new memories and skills by swapping data files. A population of a billion copies of an AI program could synchronize their databases periodically, so that all the instances of the program know everything that any instance learned during the previous hour. (Direct memory transfer requires standardized representational formats. Easy swapping of high-level cognitive content would therefore not be possible between just any pair of machine intelligences. In particular, it would not be possible among first-generation whole brain emulations.)&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;New modules, modalities, and algorithms.&amp;lt;/em&amp;gt; Visual perception seems to us easy and effortless, quite unlike solving textbook geometry problems—this despite the fact that it takes a massive amount of computation to reconstruct, from the two-dimensional patterns of stimulation on our retinas, a three-dimensional representation of a world populated with recognizable objects. The reason this seems easy is that we have dedicated low-level neural machinery for processing visual information. This low-level processing occurs unconsciously and automatically, without draining our mental energy or conscious attention. Music perception, language use, social cognition, and other forms of information processing that are “natural” for us humans seem to be likewise supported by dedicated neurocomputational modules. An artificial mind that had such specialized support for other cognitive domains that have become important in the contemporary world—such as engineering, computer programming, and business strategy—would have big advantages over minds like ours that have to rely on clunky general-purpose cognition to think about such things. New algorithms may also be developed to take advantage of the distinct affordances of digital hardware, such as its support for fast serial processing.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;p&amp;gt; &amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
</feed>
