<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.aiimpacts.org/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>AI Impacts Wiki featured_articles</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/"/>
    <id>https://wiki.aiimpacts.org/</id>
    <updated>2026-04-29T17:44:13+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://wiki.aiimpacts.org/feed.php" />
    <entry>
        <title>AI Impacts research bounties</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/ai_impacts_research_bounties?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/ai_impacts_research_bounties?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,123 @@
+ ====== AI Impacts research bounties ======
+ 
+ // Published 06 August, 2015; last updated 28 September, 2017 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We are offering rewards for several inputs to our research, described below. These offers have no specific deadline except where noted. We may modify them or take them down, but will give at least one week’s notice here unless there is strong reason not to. To submit an entry, email katja@intelligence.org. There is currently a large backlog of entries to check, so new entries will not receive a rapid response.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1. An example of discontinuous technological progress ($50-$500) ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;This bounty offer is no longer available after 3 November 2016.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We are interested in finding more examples of large discontinuous technological progress to add to &amp;lt;a href=&amp;quot;/doku.php?id=featured_articles:cases_of_discontinuous_technological_progress&amp;quot;&amp;gt;our collection&amp;lt;/a&amp;gt;. We’re offering a bounty of around $50-500 per good example.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We currently know of &amp;lt;a href=&amp;quot;/doku.php?id=featured_articles:cases_of_discontinuous_technological_progress&amp;quot;&amp;gt;two good examples&amp;lt;/a&amp;gt; (and one moderate example):&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;strong&amp;gt;Nuclear weapons&amp;lt;/strong&amp;gt; discontinuously increased the &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Relative_effectiveness_factor&amp;quot;&amp;gt;relative effectiveness&amp;lt;/a&amp;gt; of explosives.
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;High temperature superconductors&amp;lt;/strong&amp;gt; led to a dramatic increase in the highest temperature at which superconduction was possible.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;To assess discontinuity, we’ve been using “number of years’ worth of progress at past rates”, as measured by any relevant metric of technological progress. For example, the discovery of nuclear weapons was equal to about 6,000 years’ worth of previous progress in the relative effectiveness of explosives. However, we are also interested in examples that seem intuitively discontinuous, even if they don’t exactly fit the criteria of being a large number of years’ progress in one go.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Things that make examples better:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Size:&amp;lt;/strong&amp;gt; Better examples represent larger changes. More than 20 times normal annual progress is ideal.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Sharpness:&amp;lt;/strong&amp;gt; Better examples happened over shorter periods. Over less than a year is ideal.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Breadth:&amp;lt;/strong&amp;gt; Metrics that measure larger categories of things are better. For example, fast adoption curves for highly specific categories (say, a particular version of some software) are much less interesting than fast adoption curves for much broader categories (say, a whole category of software).&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Rarity: &amp;lt;/strong&amp;gt;As we receive more examples, the interestingness of each one will tend to decline.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;AI Impacts is willing to pay more for better examples. Basically, we will judge how interesting your example is and then reward you based on that. We will accept examples that violate our stated preferences but satisfy the spirit of the bounty. Our guess is that we would pay about $500 for another example as good as nuclear weapons.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;How to enter:&amp;lt;/strong&amp;gt; all that is necessary to submit an example is to email us a paragraph describing the example, along with sources to verify your claims (such sources are likely to involve at least one time series of success on a particular metric). Note that an example should be of the form ‘A caused abrupt progress in metric B’. For instance, ‘The boliolicopter caused abrupt progress in the maximum rate of fermblangling at sub-freezing temperatures’.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 2. An example of early action on a risk ($20-$100) ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;This bounty offer is no longer available after 3 November 2016.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;We want:&amp;lt;/strong&amp;gt; a one-sentence description of a case where at least one person acted to avert a risk that was at least fifteen years away, along with a link or citation supporting the claim that the action preceded the risk by at least fifteen years. &amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;We will give:&amp;lt;/strong&amp;gt; up to $100, with higher sums for examples that are better according to our judgment (see criteria for betterness below), and which we don’t already know about. We might go over $100 for exceptionally good examples.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Further details&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;Examples are better if:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;The risk is more novel:&amp;lt;/strong&amp;gt; relatively similar problems have not arisen before, and would probably not arise sooner than fifteen years in the future. &amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;e.g. Poverty in retirement is a risk people often prepare for more than fifteen years before it befalls them; however, it is not very novel, because other people already face an essentially identical risk, and have done so many times before. &amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;The solution is more specific:&amp;lt;/strong&amp;gt; the action taken would not be nearly as useful if the risk disappeared.&amp;lt;/span&amp;gt; &amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;e.g. Saving money to escape is a reasonable solution if you expect your country to face civil war soon. However, saving money is fairly useful in any case, so this solution is not very specific.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;We haven’t received a lot of examples:&amp;lt;/strong&amp;gt; as we collect more examples, the value of each one will tend to decline.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;Some examples:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Leo Szilard’s secret nuclear patent&amp;lt;/strong&amp;gt;: the threat of nuclear weapons was quite novel. It’s unclear when Szilard expected such weapons, but in 1934 they were quite plausibly at least fifteen years away. The secret patent does not seem broadly useful, though it was useful for encouraging more local nuclear research, which is somewhat more broadly useful than secrecy per se. More details in &amp;lt;a href=&amp;quot;https://intelligence.org/files/SzilardNuclearWeapons.pdf&amp;quot;&amp;gt;this report&amp;lt;/a&amp;gt;. This is a reasonably good example.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;The Asilomar Conference on recombinant DNA&amp;lt;/strong&amp;gt;: the risk (genetically engineered pandemics) was arguably quite novel, and the solution was reasonably specific (safety rules for dealing with recombinant DNA). However, the risks people were concerned about were immediate, rather than decades hence. More details &amp;lt;span class=&amp;quot;contentBold&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/&amp;quot;&amp;gt;here&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;. This is not a good example.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;Evidence that the example is better in the above ways is also welcome, though we reserve the right not to explore it fully.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,123 @@
+ ====== AI Impacts research bounties ======
+ 
+ // Published 06 August, 2015; last updated 28 September, 2017 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We are offering rewards for several inputs to our research, described below. These offers have no specific deadline except where noted. We may modify them or take them down, but will give at least one week’s notice here unless there is strong reason not to. To submit an entry, email katja@intelligence.org. There is currently a large backlog of entries to check, so new entries will not receive a rapid response.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1. An example of discontinuous technological progress ($50-$500) ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;This bounty offer is no longer available after 3 November 2016.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We are interested in finding more examples of large discontinuous technological progress to add to &amp;lt;a href=&amp;quot;/doku.php?id=featured_articles:cases_of_discontinuous_technological_progress&amp;quot;&amp;gt;our collection&amp;lt;/a&amp;gt;. We’re offering a bounty of around $50-500 per good example.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We currently know of &amp;lt;a href=&amp;quot;/doku.php?id=featured_articles:cases_of_discontinuous_technological_progress&amp;quot;&amp;gt;two good examples&amp;lt;/a&amp;gt; (and one moderate example):&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;strong&amp;gt;Nuclear weapons&amp;lt;/strong&amp;gt; discontinuously increased the &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Relative_effectiveness_factor&amp;quot;&amp;gt;relative effectiveness&amp;lt;/a&amp;gt; of explosives.
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;High temperature superconductors&amp;lt;/strong&amp;gt; led to a dramatic increase in the highest temperature at which superconduction was possible.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;To assess discontinuity, we’ve been using “number of years’ worth of progress at past rates”, as measured by any relevant metric of technological progress. For example, the discovery of nuclear weapons was equal to about 6,000 years’ worth of previous progress in the relative effectiveness of explosives. However, we are also interested in examples that seem intuitively discontinuous, even if they don’t exactly fit the criteria of being a large number of years’ progress in one go.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Things that make examples better:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Size:&amp;lt;/strong&amp;gt; Better examples represent larger changes. More than 20 times normal annual progress is ideal.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Sharpness:&amp;lt;/strong&amp;gt; Better examples happened over shorter periods. Over less than a year is ideal.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Breadth:&amp;lt;/strong&amp;gt; Metrics that measure larger categories of things are better. For example, fast adoption curves for highly specific categories (say, a particular version of some software) are much less interesting than fast adoption curves for much broader categories (say, a whole category of software).&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Rarity: &amp;lt;/strong&amp;gt;As we receive more examples, the interestingness of each one will tend to decline.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;AI Impacts is willing to pay more for better examples. Basically, we will judge how interesting your example is and then reward you based on that. We will accept examples that violate our stated preferences but satisfy the spirit of the bounty. Our guess is that we would pay about $500 for another example as good as nuclear weapons.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;How to enter:&amp;lt;/strong&amp;gt; all that is necessary to submit an example is to email us a paragraph describing the example, along with sources to verify your claims (such sources are likely to involve at least one time series of success on a particular metric). Note that an example should be of the form ‘A caused abrupt progress in metric B’. For instance, ‘The boliolicopter caused abrupt progress in the maximum rate of fermblangling at sub-freezing temperatures’.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 2. An example of early action on a risk ($20-$100) ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;This bounty offer is no longer available after 3 November 2016.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;We want:&amp;lt;/strong&amp;gt; a one-sentence description of a case where at least one person acted to avert a risk that was at least fifteen years away, along with a link or citation supporting the claim that the action preceded the risk by at least fifteen years. &amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;We will give:&amp;lt;/strong&amp;gt; up to $100, with higher sums for examples that are better according to our judgment (see criteria for betterness below), and which we don’t already know about. We might go over $100 for exceptionally good examples.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Further details&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;Examples are better if:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;The risk is more novel:&amp;lt;/strong&amp;gt; relatively similar problems have not arisen before, and would probably not arise sooner than fifteen years in the future. &amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;e.g. Poverty in retirement is a risk people often prepare for more than fifteen years before it befalls them; however, it is not very novel, because other people already face an essentially identical risk, and have done so many times before. &amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;The solution is more specific:&amp;lt;/strong&amp;gt; the action taken would not be nearly as useful if the risk disappeared.&amp;lt;/span&amp;gt; &amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;e.g. Saving money to escape is a reasonable solution if you expect your country to face civil war soon. However, saving money is fairly useful in any case, so this solution is not very specific.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;We haven’t received a lot of examples:&amp;lt;/strong&amp;gt; as we collect more examples, the value of each one will tend to decline.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;Some examples:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Leo Szilard’s secret nuclear patent&amp;lt;/strong&amp;gt;: the threat of nuclear weapons was quite novel. It’s unclear when Szilard expected such weapons, but in 1934 they were quite plausibly at least fifteen years away. The secret patent does not seem broadly useful, though it was useful for encouraging more local nuclear research, which is somewhat more broadly useful than secrecy per se. More details in &amp;lt;a href=&amp;quot;https://intelligence.org/files/SzilardNuclearWeapons.pdf&amp;quot;&amp;gt;this report&amp;lt;/a&amp;gt;. This is a reasonably good example.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;The Asilomar Conference on recombinant DNA&amp;lt;/strong&amp;gt;: the risk (genetically engineered pandemics) was arguably quite novel, and the solution was reasonably specific (safety rules for dealing with recombinant DNA). However, the risks people were concerned about were immediate, rather than decades hence. More details &amp;lt;span class=&amp;quot;contentBold&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;https://intelligence.org/2015/06/30/new-report-the-asilomar-conference-a-case-study-in-risk-mitigation/&amp;quot;&amp;gt;here&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;. This is not a good example.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span class=&amp;quot;name&amp;quot;&amp;gt;Evidence that the example is better in the above ways is also welcome, though we reserve the right not to explore it fully.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>AI Vignettes Project</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/ai_vignettes_project?rev=1667620355&amp;do=diff"/>
        <published>2022-11-05T03:52:35+00:00</published>
        <updated>2022-11-05T03:52:35+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/ai_vignettes_project?rev=1667620355&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -60,10 +60,5 @@
  
  ==== Vignette collection ====
  
  
- A subset of vignettes arising from this project, or similar, can be found [[https://airtable.com/shr4mHlTIiKtFRDuR/tblMVjRvMKVNkoZVg?backgroundColor=cyan&amp;amp;viewControls=on|here]].
- 
- 
- 
- 
- 
+ A subset of vignettes arising from this project can be found among [[featured_articles:fiction_relevant_to_ai_futurism|fiction relevant to AI futurism]].

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -60,10 +60,5 @@
  
  ==== Vignette collection ====
  
  
- A subset of vignettes arising from this project, or similar, can be found [[https://airtable.com/shr4mHlTIiKtFRDuR/tblMVjRvMKVNkoZVg?backgroundColor=cyan&amp;amp;viewControls=on|here]].
- 
- 
- 
- 
- 
+ A subset of vignettes arising from this project can be found among [[featured_articles:fiction_relevant_to_ai_futurism|fiction relevant to AI futurism]].

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Cases of Discontinuous Technological Progress</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/cases_of_discontinuous_technological_progress?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/cases_of_discontinuous_technological_progress?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,87 @@
+ ====== Cases of Discontinuous Technological Progress ======
+ 
+ // Published 31 December, 2014; last updated 10 December, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We know of ten events which produced a robust discontinuity in progress equivalent to more than one hundred years at previous rates in some interesting metric. We know of 53 other events which produced smaller or less robust discontinuities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== Background =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;These cases were researched as part of our &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:discontinuous_progress_investigation&amp;quot;&amp;gt;discontinuous progress investigation&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== List of cases =====
+ 
+ 
+ ==== Events causing large, robust discontinuities ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The Pyramid of Djoser, 2650BC (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:historic_trends_in_structure_heights&amp;quot;&amp;gt;structure height trends&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The SS &amp;lt;em&amp;gt;Great Eastern&amp;lt;/em&amp;gt;, 1858 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_ship_size&amp;quot;&amp;gt;ship size trends&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first telegraph, 1858 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_transatlantic_message_speed&amp;quot;&amp;gt;speed of sending a 140 character message across the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The second telegraph, 1866 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_transatlantic_message_speed&amp;quot;&amp;gt;speed of sending a 140 character message across the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The Paris Gun, 1918 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_altitude&amp;quot;&amp;gt;altitude reached by man-made means&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first non-stop transatlantic flight, in a modified WWI bomber, 1919 (discontinuity in both &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_transatlantic_passenger_travel&amp;quot;&amp;gt;speed of passenger travel across the Atlantic Ocean&amp;lt;/a&amp;gt; and &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_long-range_military_payload_delivery&amp;quot;&amp;gt;speed of military payload travel across the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The George Washington Bridge, 1931 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_bridge_span_length&amp;quot;&amp;gt;longest bridge span&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first nuclear weapons, 1945 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:effect_of_nuclear_weapons_on_historic_trends_in_explosives&amp;quot;&amp;gt;relative effectiveness of explosives&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first ICBM, 1958 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_long-range_military_payload_delivery&amp;quot;&amp;gt;average speed of military payload crossing the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;YBa&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;Cu&amp;lt;sub&amp;gt;3&amp;lt;/sub&amp;gt;O&amp;lt;sub&amp;gt;7&amp;lt;/sub&amp;gt; as a superconductor, 1987 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_the_maximum_superconducting_temperature&amp;quot;&amp;gt;warmest temperature of superconduction&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Events causing moderate, robust discontinuities ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;HMS Warrior, 1860 (discontinuity in both &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_ship_size&amp;quot;&amp;gt;Royal Navy ship tonnage and Royal Navy ship displacement&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Eiffel Tower, 1889 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:historic_trends_in_structure_heights&amp;quot;&amp;gt;tallest existing freestanding structure height&amp;lt;/a&amp;gt;, and in other height trends non-robustly)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Fairey Delta 2, 1956 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_flight_airspeed_records&amp;quot;&amp;gt;airspeed&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Pellets shot into space, 1957, measured after one day of travel (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_altitude&amp;quot;&amp;gt;altitude achieved by man-made means&amp;lt;/a&amp;gt;)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-202&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-202&amp;quot; title=&amp;quot;This was the first of various altitude records where the object continues to gain distance from Earth’s surface continuously over a long period. One could choose to treat these in different ways, and get discontinuity numbers of different sizes. Strictly, all altitude increases are continuous, so we are anyway implicitly looking at something like discontinuities in heights reached within some period. We somewhat arbitrarily chose to measure altitudes roughly every year, including one day in for the pellets, the only one where the very start mattered. &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Burj Khalifa, 2009 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:historic_trends_in_structure_heights&amp;quot;&amp;gt;height of tallest building ever&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Non-robust discontinuities ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;a href=&amp;quot;https://docs.google.com/spreadsheets/d/1iMIZ57Ka9-ZYednnGeonC-NqwGC7dKiHN9S-TAxfVdQ/edit#gid=1994197408&amp;amp;amp;range=B3:B90&amp;quot;&amp;gt;This spreadsheet&amp;lt;/a&amp;gt; details all discontinuities found, as of April 2020.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-202&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;This was the first of various altitude records where the object continues to gain distance from Earth’s surface over a long period. One could choose to treat these in different ways, and get different sizes of discontinuity numbers. Strictly, all altitude increases are continuous, so we are implicitly looking at something like discontinuities in heights reached within some period. We somewhat arbitrarily chose to measure altitudes roughly every year, including one day in for the pellets, the only case where the very start mattered. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-202&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,87 @@
+ ====== Cases of Discontinuous Technological Progress ======
+ 
+ // Published 31 December, 2014; last updated 10 December, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We know of ten events which produced a robust discontinuity in progress equivalent to more than one hundred years at previous rates in some interesting metric. We know of 53 other events which produced smaller or less robust discontinuities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== Background =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;These cases were researched as part of our &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:discontinuous_progress_investigation&amp;quot;&amp;gt;discontinuous progress investigation&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== List of cases =====
+ 
+ 
+ ==== Events causing large, robust discontinuities ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The Pyramid of Djoser, 2650BC (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:historic_trends_in_structure_heights&amp;quot;&amp;gt;structure height trends&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The SS &amp;lt;em&amp;gt;Great Eastern&amp;lt;/em&amp;gt;, 1858 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_ship_size&amp;quot;&amp;gt;ship size trends&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first telegraph, 1858 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_transatlantic_message_speed&amp;quot;&amp;gt;speed of sending a 140 character message across the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The second telegraph, 1866 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_transatlantic_message_speed&amp;quot;&amp;gt;speed of sending a 140 character message across the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The Paris Gun, 1918 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_altitude&amp;quot;&amp;gt;altitude reached by man-made means&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first non-stop transatlantic flight, in a modified WWI bomber, 1919 (discontinuity in both &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_transatlantic_passenger_travel&amp;quot;&amp;gt;speed of passenger travel across the Atlantic Ocean&amp;lt;/a&amp;gt; and &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_long-range_military_payload_delivery&amp;quot;&amp;gt;speed of military payload travel across the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The George Washington Bridge, 1931 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_bridge_span_length&amp;quot;&amp;gt;longest bridge span&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first nuclear weapons, 1945 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:effect_of_nuclear_weapons_on_historic_trends_in_explosives&amp;quot;&amp;gt;relative effectiveness of explosives&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The first ICBM, 1958 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_long-range_military_payload_delivery&amp;quot;&amp;gt;average speed of military payload crossing the Atlantic Ocean&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;YBa&amp;lt;sub&amp;gt;2&amp;lt;/sub&amp;gt;Cu&amp;lt;sub&amp;gt;3&amp;lt;/sub&amp;gt;O&amp;lt;sub&amp;gt;7&amp;lt;/sub&amp;gt; as a superconductor, 1987 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_the_maximum_superconducting_temperature&amp;quot;&amp;gt;warmest temperature of superconduction&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Events causing moderate, robust discontinuities ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;HMS Warrior, 1860 (discontinuity in both &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_ship_size&amp;quot;&amp;gt;Royal Navy ship tonnage and Royal Navy ship displacement&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Eiffel Tower, 1889 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:historic_trends_in_structure_heights&amp;quot;&amp;gt;tallest existing freestanding structure height&amp;lt;/a&amp;gt;, and in other height trends non-robustly)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Fairey Delta 2, 1956 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_flight_airspeed_records&amp;quot;&amp;gt;airspeed&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Pellets shot into space, 1957, measured after one day of travel (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=takeoff_speed:continuity_of_progress:historic_trends_in_altitude&amp;quot;&amp;gt;altitude achieved by man-made means&amp;lt;/a&amp;gt;)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-202&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-202&amp;quot; title=&amp;quot;This was the first of various altitude records where the object continues to gain distance from Earth’s surface over a long period. One could choose to treat these in different ways, and get different sizes of discontinuity numbers. Strictly, all altitude increases are continuous, so we are implicitly looking at something like discontinuities in heights reached within some period. We somewhat arbitrarily chose to measure altitudes roughly every year, including one day in for the pellets, the only case where the very start mattered. &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Burj Khalifa, 2009 (discontinuity in &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:historic_trends_in_structure_heights&amp;quot;&amp;gt;height of tallest building ever&amp;lt;/a&amp;gt;)
+                 &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Non-robust discontinuities ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;a href=&amp;quot;https://docs.google.com/spreadsheets/d/1iMIZ57Ka9-ZYednnGeonC-NqwGC7dKiHN9S-TAxfVdQ/edit#gid=1994197408&amp;amp;amp;range=B3:B90&amp;quot;&amp;gt;This spreadsheet&amp;lt;/a&amp;gt; details all discontinuities found, as of April 2020.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-202&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;This was the first of various altitude records where the object continues to gain distance from Earth’s surface over a long period. One could choose to treat these in different ways, and get different sizes of discontinuity numbers. Strictly, all altitude increases are continuous, so we are implicitly looking at something like discontinuities in heights reached within some period. We somewhat arbitrarily chose to measure altitudes roughly every year, including one day in for the pellets, the only case where the very start mattered. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-202&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Evidence against current methods leading to human level artificial intelligence</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/evidence_against_current_methods_leading_to_human_level_artificial_intelligence?rev=1666144919&amp;do=diff"/>
        <published>2022-10-19T02:01:59+00:00</published>
        <updated>2022-10-19T02:01:59+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/evidence_against_current_methods_leading_to_human_level_artificial_intelligence?rev=1666144919&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -10,9 +10,9 @@
  
  ==== Clarifications ====
  
  
- We take ‘current methods’ to mean techniques for engineering artificial intelligence that are already known, involving no “qualitatively new ideas”.((“It now seems possible that we could build ‘prosaic’ AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about ‘how intelligence works’”— Christiano, Paul. [[https://openreview.net/pdf?id=H18WqugAb|Prosaic AI Alignment]]. 2017. Medium. Accessed August 13 2019. https://ai-alignment.com/prosaic-ai-control-b959644d79c2.))&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; We have not precisely defined ‘current methods’. Many of the works we cite refer to currently //dominant// methods such as machine learning (especially deep learning) and reinforcement learning.
+ We take ‘current methods’ to mean techniques for engineering artificial intelligence that are already known, involving no “qualitatively new ideas”.((“It now seems possible that we could build ‘prosaic’ AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about ‘how intelligence works’”— Christiano, Paul. [[https://openreview.net/pdf?id=H18WqugAb|Prosaic AI Alignment]]. 2017. Medium. Accessed August 13 2019. https://ai-alignment.com/prosaic-ai-control-b959644d79c2.)) We have not precisely defined ‘current methods’. Many of the works we cite refer to currently //dominant// methods such as machine learning (especially deep learning) and reinforcement learning.
  
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;By human-level AI, we mean AI with a level of &amp;lt;em&amp;gt;performance&amp;lt;/em&amp;gt; comparable to humans. We have in mind the operationalization of ‘high-level machine intelligence’ from our &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2016_expert_survey_on_progress_in_ai&amp;quot;&amp;gt;2016 expert survey on progress in AI&amp;lt;/a&amp;gt;: “Say we have ‘high-level machine intelligence’ when unaided machines can accomplish every task better and more cheaply than human workers.”&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-1938&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-1938&amp;quot; title=&amp;#039;Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. &amp;amp;lt;a href=&amp;quot;https://arxiv.org/abs/1705.08807&amp;quot;&amp;amp;gt;&amp;amp;amp;#8220;When will AI exceed human performance? Evidence from AI experts.&amp;amp;amp;#8221;&amp;amp;lt;/a&amp;amp;gt; Journal of Artificial Intelligence Research 62 (2018): 729-754.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Evidence on good forecasting practices from the Good Judgment Project</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/evidence_on_good_forecasting_practices_from_the_good_judgment_project?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/evidence_on_good_forecasting_practices_from_the_good_judgment_project?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,443 @@
+ ====== Evidence on good forecasting practices from the Good Judgment Project ======
+ 
+ // Published 07 February, 2019; last updated 17 July, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;According to experience and data from the Good Judgment Project, the following are associated with successful forecasting, in rough decreasing order of combined importance and confidence:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Past performance in the same broad domain&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Making more predictions on the same question&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Deliberation time&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Collaboration on teams&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Intelligence&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Domain expertise&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Having taken a one-hour training module on these topics&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Cognitive reflection’ test scores&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Active open-mindedness’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Aggregation of individual judgments&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Use of precise probabilistic predictions&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Use of ‘the outside view’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Fermi-izing’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Bayesian reasoning’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Practice&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== Details =====
+ 
+ 
+ ==== 1.1. Process ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The Good Judgment Project (GJP) was the winning team in IARPA’s 2011-2015 forecasting tournament. In the tournament, six teams assigned probabilistic answers to hundreds of questions about geopolitical events months to a year in the future. Each competing team used a different method for coming up with their guesses, so the tournament helps us to evaluate different forecasting methods.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The GJP team, led by Philip Tetlock and Barbara Mellers, gathered thousands of online volunteers and had them answer the tournament questions. They then made their official forecasts by aggregating these answers. In the process, the team collected data about the patterns of performance in their volunteers, and experimented with aggregation methods and improvement interventions. For example, they ran an RCT to test the effect of a short training program on forecasting accuracy. They especially focused on identifying and making use of the most successful two percent of forecasters, dubbed ‘superforecasters’.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock’s book&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Superforecasting&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;describes this process and Tetlock’s resulting understanding of how to forecast well.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.2. Correlates of successful forecasting ====
+ 
+ 
+ === 1.2.1. Past performance ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Roughly 70% of the superforecasters maintained their status from one year to the next &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p104 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Across all the forecasters, the correlation between performance in one year and performance in the next year was 0.65 &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p104 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;These high correlations are particularly impressive because the forecasters were online volunteers; presumably substantial variance year-to-year came from forecasters throttling down their engagement due to fatigue or changing life circumstances &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-3-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-3-1283&amp;quot; title=&amp;quot; Technically the forecasters were paid, up to $250 per season. 
(&amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p72) However their payments did not depend on how accurate they were or how much effort they put in, beyond the minimum.&amp;amp;amp;nbsp;&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ === 1.2.2. Behavioral and dispositional variables ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Table 2  depicts the correlations between measured variables amongst GJP’s volunteers in the first two years of the tournament &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-4-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-4-1283&amp;quot; title=&amp;#039; The table is from &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers &amp;amp;lt;i&amp;amp;gt;et al&amp;amp;lt;/i&amp;amp;gt; 2015&amp;amp;lt;/a&amp;amp;gt;. “Del time” is deliberation time.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt; Each is described in more detail below.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The first column shows the relationship between each variable and standardized&amp;lt;/span&amp;gt; &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Brier_score&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Brier score&amp;lt;/span&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;, which is a measure of inaccuracy: higher Brier scores mean less accuracy, so negative correlations are good. “Ravens” is an IQ test; “Del time” is deliberation time, and “teams” is whether or not the forecaster was assigned to a team. “Actively open-minded thinking” is an attempt to measure “the tendency to evaluate arguments and evidence without undue bias from one’s own prior beliefs—and with recognition of the fallibility of one’s judgment.” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-1283&amp;quot; title=&amp;#039; “Nonetheless, as we saw in the structural model, and confirm here, the best model uses dispositional, situational, and behavioral variables. The combination produced a multiple correlation of .64.” This is from &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers &amp;amp;lt;i&amp;amp;gt;et al&amp;amp;lt;/i&amp;amp;gt; 2015&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
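A minimal sketch of the Brier score mentioned above, for binary questions only (the tournament used a standardized, multi-option variant, which is not shown here):

```python
# Brier score: mean squared difference between forecast probabilities
# and outcomes (1 = event happened, 0 = it did not). Lower is better;
# always answering 0.5 scores 0.25 on binary questions.
def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

print(brier_score([0.9, 0.2, 0.6], [1, 0, 1]))  # ≈ 0.07
```

Because the score is negatively oriented, the negative correlations with accuracy-improving variables in the table are the expected direction.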
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The authors conducted various statistical analyses to explore the relationships between these variables. They computed a structural equation model to predict a forecaster’s accuracy:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Yellow ovals are latent dispositional variables, yellow rectangles are observed dispositional variables, pink rectangles are experimentally manipulated situational variables, and green rectangles are observed behavioral variables. This model has a &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Multiple_correlation&amp;quot;&amp;gt;multiple correlation&amp;lt;/a&amp;gt; of 0.64.&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-1283&amp;quot; title=&amp;#039; This is from &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers &amp;amp;lt;i&amp;amp;gt;et al&amp;amp;lt;/i&amp;amp;gt; 2015&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;As these data indicate, domain knowledge, intelligence, active open-mindedness, and working in teams each contribute substantially to accuracy. We can also conclude that effort helps, because deliberation time and number of predictions made per question (“belief updating”) both improved accuracy. Finally, training also helps. This is especially surprising because the training module lasted only an hour and its effects persisted for at least a year. The module included content about probabilistic reasoning, using the outside view, avoiding biases, and more.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.3. Aggregation algorithms ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;GJP made their official predictions by aggregating and extremizing the predictions of their volunteers.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The aggregation algorithm was elitist, meaning that it weighted more heavily people who were better on various metrics.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-7-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-7-1283&amp;quot; title=&amp;#039; On the &amp;amp;lt;a href=&amp;quot;https://goodjudgment.com/science.html&amp;quot;&amp;amp;gt;webpage&amp;amp;lt;/a&amp;amp;gt;, it says forecasters with better track-records and those who update more frequently get weighted more. In &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;these slides,&amp;amp;amp;nbsp;&amp;amp;lt;/a&amp;amp;gt;Tetlock describes the elitism differently: He says it gives weight to higher-IQ, more open-minded forecasters. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;7&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; The extremizing step pushes the aggregated judgment closer to 1 or 0, to make it more confident. The degree to which they extremize depends on how diverse and sophisticated the pool of forecasters is.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-8-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-8-1283&amp;quot; title=&amp;#039; The academic papers on this topic are &amp;amp;lt;a href=&amp;quot;https://www.sciencedirect.com/science/article/pii/S0169207013001635&amp;quot;&amp;amp;gt;Satopaa et al 2013&amp;amp;lt;/a&amp;amp;gt; and &amp;amp;lt;a href=&amp;quot;http://pubsonline.informs.org/doi/abs/10.1287/deca.2014.0293&amp;quot;&amp;amp;gt;Baron et al 2014&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; Whether extremizing is a good idea is still controversial.  &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-9-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-9-1283&amp;quot; title=&amp;quot; According to one expert I interviewed, more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke. After all, &amp;amp;lt;i&amp;amp;gt;a priori &amp;amp;lt;/i&amp;amp;gt;one would expect extremizing to lead to small improvements in accuracy most of the time, but big losses in accuracy some of the time.&amp;amp;amp;nbsp;&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;9&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
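The aggregate-then-extremize pipeline can be sketched in a few lines of Python. This is only an illustration: the weights and the extremizing exponent are invented here, and GJP's actual algorithm (described in the Satopaa et al and Baron et al papers cited in the footnotes) is more sophisticated.

```python
# Sketch of elitist aggregation plus extremizing. The weights and the
# exponent `a` are illustrative assumptions, not GJP's actual parameters.

def aggregate(probs, weights):
    """Weighted average of individual probability forecasts; forecasters
    with better track records (higher weights) count for more."""
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def extremize(p, a=2.0):
    """Push p toward 0 or 1 by raising its odds to the power a.
    a > 1 extremizes; a = 1 leaves p unchanged."""
    odds = p / (1 - p)
    return odds ** a / (1 + odds ** a)

probs = [0.6, 0.7, 0.65]     # three forecasters' predictions
weights = [1.0, 2.0, 1.5]    # elitism: the better forecaster weighs more

p_agg = aggregate(probs, weights)   # ≈ 0.661
p_final = extremize(p_agg)          # ≈ 0.79, pushed toward 1
```

How strongly to extremize (the exponent here) would, per the text, depend on how diverse and sophisticated the pool of forecasters is.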
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;GJP beat all of the other teams.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;They consistently beat the control group—which was a forecast made by averaging ordinary forecasters—by more than 60%.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-10-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-10-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p18. &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; They  also beat a prediction market inside the intelligence community—populated by professional analysts with access to classified information—by 25-30%. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-11-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-11-1283&amp;quot; title=&amp;#039; This is from &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;this seminar&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;11&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;That said, individual superforecasters did almost as well, so the elitism of the algorithm may account for a lot of its success.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-12-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-12-1283&amp;quot; title=&amp;#039; For example, in year 2 one superforecaster beat the extremizing algorithm. More generally, as discussed in &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;this seminar&amp;amp;lt;/a&amp;amp;gt;, the aggregation algorithm produces the greatest improvement with ordinary forecasters; the superforecasters were good enough that it didn’t help much.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;12&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.4. Outside View ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The forecasters who received training were asked to record, for each prediction, which parts of the training they used to make it. Some parts of the training—e.g. “Post-mortem analysis”—were correlated with inaccuracy, but others—most notably “Comparison classes”—were correlated with accuracy.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-13-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-13-1283&amp;quot; title=&amp;#039; This is from &amp;amp;lt;a href=&amp;quot;http://journal.sjdm.org/16/16511/jdm16511.pdf&amp;quot;&amp;amp;gt;Chang et al 2016&amp;amp;lt;/a&amp;amp;gt;. The average brier score of answers tagged “comparison classes” was 0.17, while the next-best tag averaged 0.26.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;13&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;  ‘Comparison classes’ is another term for&amp;lt;/span&amp;gt; &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Reference_class_forecasting&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;reference-class forecasting&amp;lt;/span&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;, also known as ‘the outside view’. It is the method of assigning a probability by straightforward extrapolation from similar past situations and their outcomes.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
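As a minimal illustration (with an invented reference class), the outside view reduces to computing a base rate over similar past cases and using it as the starting estimate:

```python
# Outside-view sketch: anchor on the base rate of a comparison class.
# The reference-class data below is hypothetical, for illustration only.

def base_rate(outcomes):
    """Fraction of past cases in the reference class where the event occurred."""
    return sum(outcomes) / len(outcomes)

# Suppose 7 of 20 comparable past projects finished on schedule.
past_outcomes = [1] * 7 + [0] * 13
p_outside = base_rate(past_outcomes)   # 0.35

# Inside-view considerations can then adjust this anchor, cautiously.
```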
+ 
+ 
+ 
+ 
+ ==== 1.5. Tetlock’s “Portrait of the modal superforecaster” ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This subsection and those that follow will lay out some more qualitative results, things that Tetlock recommends on the basis of his research and interviews with superforecasters. Here is Tetlock’s “portrait of the modal superforecaster:” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-14-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-14-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p191 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;14&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Philosophic outlook:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Cautious:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Nothing is certain.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Humble:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Reality is infinitely complex.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Nondeterministic:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Whatever happens is not meant to be and does not have to happen.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Abilities &amp;amp;amp; thinking styles:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Actively open-minded:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Beliefs are hypotheses to be tested, not treasures to be protected.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Intelligent and knowledgeable, with a “Need for Cognition”:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Intellectually curious, enjoy puzzles and mental challenges.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Reflective:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Introspective and self-critical&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Numerate:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Comfortable with numbers&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Methods of forecasting:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Pragmatic:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Not wedded to any idea or agenda&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Analytical:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Capable of stepping back from the tip-of-your-nose perspective and considering other views&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Dragonfly-eyed:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Value diverse views and synthesize them into their own&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Probabilistic:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Judge using many grades of maybe&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Thoughtful updaters:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;When facts change, they change their minds&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Good intuitive psychologists:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Aware of the value of checking thinking for cognitive and emotional biases &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-15-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-15-1283&amp;quot; title=&amp;#039; There is experimental evidence that superforecasters are less prone to standard cognitive science biases than ordinary people. From &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iv&amp;quot;&amp;amp;gt;edge.org&amp;amp;lt;/a&amp;amp;gt;: &amp;amp;lt;i&amp;amp;gt;Mellers: &amp;amp;lt;/i&amp;amp;gt;“We have given them lots of Kahneman and Tversky-like problems to see if they fall prey to the same sorts of biases and errors. The answer is sort of, some of them do, but not as many. It’s not nearly as frequent as you see with the rest of us ordinary mortals. The other thing that’s interesting is they don’t make the kinds of mistakes that regular people make instead of the right answer. They do something that’s a little bit more thoughtful. They integrate base rates with case-specific information a little bit more.” &amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;15&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Work ethic:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Growth mindset:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Believe it’s possible to get better&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Grit:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Determined to keep at it however long it takes&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.6. Tetlock’s “Ten Commandments for Aspiring Superforecasters:” ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This advice is given at the end of the book, and may make less sense to someone who hasn’t read the book. A full transcript of these commandments can be found&amp;lt;/span&amp;gt; &amp;lt;a href=&amp;quot;https://www.lesswrong.com/posts/dvYeSKDRd68GcrWoe/ten-commandments-for-aspiring-superforecasters&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;here&amp;lt;/span&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;; this is a summary:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(1) Triage:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Don’t waste time on questions that are “clocklike” (a rule of thumb can get you pretty close to the correct answer) or “cloudlike” (even fancy models can’t beat a dart-throwing chimp).&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(2) Break seemingly intractable problems into tractable sub-problems:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This is how Fermi estimation works. One related piece of advice is “be wary of accidentally substituting an easy question for a hard one,” e.g. substituting “Would Israel be willing to assassinate Yasser Arafat?” for “Will at least one of the tests for polonium in Arafat’s body turn up positive?”&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(3) Strike the right balance between inside and outside views:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;In particular,&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;first&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;anchor with the outside view and&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;then&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;adjust using the inside view.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(4) Strike the right balance between under- and overreacting to evidence:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Usually do many small updates, but occasionally do big updates when the situation calls for it. Remember to think about P(E|H)/P(E|~H); remember to avoid the base-rate fallacy. “Superforecasters aren’t perfect Bayesian predictors but they are much better than most of us.” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-16-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-16-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;&amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p281 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;16&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
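The P(E|H)/P(E|~H) advice is Bayes’ rule in odds form, which can be sketched as follows (the prior and likelihood ratios are invented, for illustration):

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# The numbers below are invented, for illustration only.

def update(prior, likelihood_ratio):
    """Return posterior P(H|E) from prior P(H) and P(E|H)/P(E|~H)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.30               # prior, e.g. anchored on the outside view
p = update(p, 2.0)     # evidence twice as likely under H: p ≈ 0.46
p = update(p, 0.5)     # equal-strength counter-evidence:  p back to 0.30
```

A likelihood ratio near 1 (weak evidence) produces the many small updates the text describes; a ratio far from 1 justifies an occasional big update.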
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(5) Look for the clashing causal forces at work in each problem:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This is the “dragonfly eye perspective,” which is where you attempt to do a sort of mental wisdom of the crowds: Have tons of different causal models and aggregate their judgments. Use “Devil’s advocate” reasoning. If you think that P, try hard to convince yourself that not-P. You should find yourself saying “On the one hand… on the other hand… on the third hand…” a lot.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(6) Strive to distinguish as many degrees of doubt as the problem permits but no more.&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(7) Strike the right balance between under- and overconfidence, between prudence and decisiveness.&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(8) Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(9) Bring out the best in others and let others bring out the best in you.&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The book spent a whole chapter on this, using the Wehrmacht as an extended case study on good team organization. One pervasive guiding principle is “Don’t tell people how to do things; tell them what you want accomplished, and they’ll surprise you with their ingenuity in doing it.” The other pervasive guiding principle is “Cultivate a culture in which people—even subordinates—are encouraged to dissent and give counterarguments.” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-17-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-17-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;See e.g. page 284 of &amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt;, and the entirety of chapter 9. &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;17&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(10) Master the error-balancing bicycle:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This one should have been called practice, practice, practice. Tetlock says that reading the news and generating probabilities isn’t enough; you need to actually score your predictions so that you know how wrong you were.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(11) Don’t treat commandments as commandments:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock’s point here is simply that you should use your judgment about whether to follow a commandment or not; sometimes they should be overridden.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.7. Recipe for Making Predictions ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock describes how superforecasters go about making their predictions.&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt; &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-18-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-18-1283&amp;quot; title=&amp;quot; See Chapter 5: “Ultimately, it’s not the number crunching power that counts. It’s how you use it. … You’ve Fermi-ized the question, consulted the outside view, and now, finally, you can consult the inside view … So you have an outside view and an inside view. Now they have to be merged. …” &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;18&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; Here is an attempt at a summary:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Sometimes a question can be answered more rigorously if it is first “Fermi-ized,” i.e. broken down into sub-questions for which more rigorous methods can be applied.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Next, use the outside view on the sub-questions (and/or the main question, if possible). You may then adjust your estimates using other considerations (‘the inside view’), but do this cautiously.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Seek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Repeat steps 1 – 3 until you hit diminishing returns.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Your final prediction should be based on an aggregation of various models, reference classes, other experts, etc.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
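Steps 1–5 above can be illustrated with a toy decomposition (every number is hypothetical):

```python
# Toy version of the recipe: Fermi-ize a question into sub-questions,
# estimate each, then aggregate several perspectives (step 5).
# All numbers below are hypothetical.

# "Will the bill pass this year?" decomposed two different ways:
#   P(pass) = P(reaches a vote) * P(passes | reaches a vote)
decomposition_a = 0.5 * 0.6   # 0.30
decomposition_b = 0.4 * 0.7   # 0.28, from a different set of sub-questions
outside_view    = 0.35        # base rate of similar bills passing

# Final prediction aggregates the perspectives.
estimates = [decomposition_a, decomposition_b, outside_view]
p_final = sum(estimates) / len(estimates)   # 0.31
```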
+ 
+ 
+ 
+ 
+ ==== 1.8. Bayesian reasoning &amp;amp; precise probabilistic forecasts ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Humans normally express uncertainty with terms like “maybe” and “almost certainly” and “a significant chance.” Tetlock advocates for thinking and speaking in probabilities instead. He recounts many anecdotes of misunderstandings that might have been avoided this way. For example:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;In 1961, when the CIA was planning to topple the Castro government by landing a small army of Cuban expatriates at the Bay of Pigs, President John F. Kennedy turned to the military for an unbiased assessment. The Joint Chiefs of Staff concluded that the plan had a “fair chance” of success. The man who wrote the words “fair chance” later said he had in mind odds of 3 to 1 against success. But Kennedy was never told precisely what “fair chance” meant and, not unreasonably, he took it to be a much more positive assessment. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-19-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-19-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;&amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt; 44 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;19&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This example hints at another advantage of probabilistic judgments: It’s harder to weasel out of them afterwards, and therefore easier to keep score. Keeping score is crucial for getting feedback from reality, which is crucial for building up expertise.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;A standard criticism of using probabilities is that they merely conceal uncertainty rather than quantify it—after all, the numbers you pick are themselves guesses. This may be true for people who haven’t practiced much, but it isn’t true for superforecasters, who are impressively well-calibrated and whose accuracy scores decrease when you round their predictions to the nearest 0.1.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-20-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-20-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp;The superforecasters had a calibration of 0.01, which means that the average difference between a probability they use and the true frequency of occurrence is 0.01. This is from &amp;amp;lt;a href=&amp;quot;https://www.researchgate.net/publication/277087515_Identifying_and_Cultivating_Superforecasters_as_a_Method_of_Improving_Probabilistic_Predictions&amp;quot;&amp;amp;gt;Mellers et al 2015&amp;amp;lt;/a&amp;amp;gt;. The fact about rounding their predictions is from &amp;amp;lt;a href=&amp;quot;https://academic.oup.com/isq/article-abstract/62/2/410/4944059?redirectedFrom=fulltext&amp;quot;&amp;amp;gt;Friedman et al 2018&amp;amp;lt;/a&amp;amp;gt;. (Corrected from 0.05; thanks to this commenter for noticing: https://www.metaculus.com/questions/4166/the-lightning-round-tournament-comparing-metaculus-forecasters-to-infectious-disease-experts/#comment-28756)&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;20&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
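The rounding claim can be made concrete with Brier scores (the mean squared error between probability forecasts and 0/1 outcomes; lower is better). The forecasts and outcomes below are invented; the point is only that for a sharp, well-calibrated forecaster, coarsening the probabilities worsens the score:

```python
# Brier score: mean squared error between forecasts and outcomes (0/1);
# lower is better. The forecasts and outcomes here are invented examples.

def brier(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.93, 0.07, 0.62, 0.87, 0.12]
outcomes  = [1,    0,    1,    1,    0]

fine   = brier(forecasts, outcomes)                          # 0.0371
coarse = brier([round(p, 1) for p in forecasts], outcomes)   # 0.0400

# Rounding to the nearest 0.1 raises (worsens) the score here, mirroring
# the finding that superforecasters' precision carries real information.
```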
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Bayesian reasoning is a natural next step once you are thinking and talking probabilities—it is the theoretical ideal in several important ways &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-21-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-21-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp;For an excellent introduction to Bayesian reasoning and its theoretical foundations, see Strevens’ textbook-like &amp;amp;lt;a href=&amp;quot;http://www.strevens.org/bct/&amp;quot;&amp;amp;gt;lecture notes&amp;amp;lt;/a&amp;amp;gt;. Some of the facts summarized in this paragraph about Superforecasters and Bayesianism can be found on pages 169-172, 281, and 314 of &amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;21&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;—and Tetlock’s experience and interviews with superforecasters seems to bear this out. Superforecasters seem to do many small updates, with occasional big updates, just as Bayesianism would predict. They recommend thinking in the Bayesian way, and often explicitly make Bayesian calculations. They are good at breaking down difficult questions into more manageable parts and chaining the probabilities together properly.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ===== 2. Discussion: Relevance to AI Forecasting =====
+ 
+ 
+ ==== 2.1. Limitations ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;A major limitation is that the forecasts were mainly on geopolitical events only a few years in the future at most. (Uncertain geopolitical events seem to be somewhat predictable up to two years out but much more difficult to predict five years out.) &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-22-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-22-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;Tetlock admits that &amp;amp;amp;#8220;there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious&amp;amp;amp;#8230; These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out.&amp;amp;amp;#8221; (&amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt; p243) &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;22&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt; So evidence from the GJP may not generalize to forecasting other types of events (e.g. technological progress and social  consequences) or events further in the future.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;That said, the forecasting best practices discovered by this research are not overtly specific to geopolitics or near-term events.  Also, geopolitical questions are diverse and accuracy on some was highly correlated with accuracy on others. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-23-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-23-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp;&amp;amp;amp;#8220;There are several ways to look for individual consistency across questions. We sorted questions on the basis of response format (binary, multinomial, conditional, ordered), region (Eurzone, Latin America, China, etc.), and duration of question (short, medium, and long). We computed accuracy scores for each individual on each variable within each set (e.g., binary, multinomial, conditional, and ordered) and then constructed correlation matrices. For all three question types, correlations were positive&amp;amp;amp;#8230; Then we conducted factor analyses. For each question type, a large proportion of the variance was captured by a single factor, consistent with the hypothesis that one underlying dimension was necessary to capture correlations among response formats, regions, and question duration.&amp;amp;amp;#8221; From &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers et al 2015&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;23&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock has ideas for how to handle longer-term, nebulous questions. One such idea he calls “Bayesian Question Clustering.” (&amp;lt;/span&amp;gt;&amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Superforecasting&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;p263) The idea is to take the question you really want to answer and look for more precise questions that are evidentially relevant to the question you care about. Tetlock intends to test the effectiveness of this idea in future research.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 2.2 Value ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The benefits of following these best practices (including identifying and aggregating the best forecasters) appear to be substantial: Superforecasters predicting events 300 days in the future were more accurate than regular forecasters predicting events 100 days in the future, and the GJP did even better. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-24-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-24-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p94. Later, in the &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;edge.org seminar&amp;amp;lt;/a&amp;amp;gt;, Tetlock says “In some other ROC curves—receiver operator characteristic curves, from signal detection theory—that Mark Steyvers at UCSD constructed—superforecasters could assign probabilities 400 days out about as well as regular people could about eighty days out.” The quote is accompanied by a &amp;amp;lt;a href=&amp;quot;https://www.edge.org/3rd_culture/Master%20Class%202015/Slide040.jpg&amp;quot;&amp;amp;gt;graph&amp;amp;lt;/a&amp;amp;gt;; unfortunately, it’s hard to interpret. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;24&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;If these benefits generalize beyond the short-term and beyond geopolitics—e.g. to long-term technological and societal development—then this research is highly useful to almost everyone. Even if the benefits do not generalize beyond the near-term, these best practices may still be well worth adopting. For example, it would be extremely useful to have 300 days of warning before strategically important AI milestones are reached, rather than 100.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ===== 3. Contributions =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Research, analysis, and writing were done by Daniel Kokotajlo. Katja Grace and Justis Mills contributed feedback and editing. Tegan McCaslin, Carl Shulman, and Jacob Lagerros contributed feedback.&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ===== 4. Footnotes =====
+ 
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p104 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p104 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-3-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; Technically the forecasters were paid, up to $250 per season. (&amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p72) However, their payments did not depend on how accurate they were or how much effort they put in, beyond the minimum. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-3-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-4-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; The table is from &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers &amp;lt;i&amp;gt;et al&amp;lt;/i&amp;gt; 2015&amp;lt;/a&amp;gt;. “Del time” is deliberation time. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-4-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; “Nonetheless, as we saw in the structural model, and confirm here, the best model uses dispositional, situational, and behavioral variables. The combination produced a multiple correlation of .64.” This is from &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers &amp;lt;i&amp;gt;et al&amp;lt;/i&amp;gt; 2015&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-6-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; This is from &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers &amp;lt;i&amp;gt;et al&amp;lt;/i&amp;gt; 2015&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-6-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-7-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; On the &amp;lt;a href=&amp;quot;https://goodjudgment.com/science.html&amp;quot;&amp;gt;webpage&amp;lt;/a&amp;gt;, it says forecasters with better track-records and those who update more frequently get weighted more. In &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;these slides&amp;lt;/a&amp;gt;, Tetlock describes the elitism differently: he says it gives weight to higher-IQ, more open-minded forecasters. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-7-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-8-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; The academic papers on this topic are &amp;lt;a href=&amp;quot;https://www.sciencedirect.com/science/article/pii/S0169207013001635&amp;quot;&amp;gt;Satopaa et al 2013&amp;lt;/a&amp;gt; and &amp;lt;a href=&amp;quot;http://pubsonline.informs.org/doi/abs/10.1287/deca.2014.0293&amp;quot;&amp;gt;Baron et al 2014&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-8-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-9-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; According to one expert I interviewed, more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke. After all, &amp;lt;i&amp;gt;a priori&amp;lt;/i&amp;gt; one would expect extremizing to lead to small improvements in accuracy most of the time, but big losses in accuracy some of the time. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-9-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-10-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p18. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-10-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-11-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; This is from &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;this seminar&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-11-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-12-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; For example, in year 2 one superforecaster beat the extremizing algorithm. More generally, as discussed in &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;this seminar&amp;lt;/a&amp;gt;, the aggregation algorithm produces the greatest improvement with ordinary forecasters; the superforecasters were good enough that it didn’t help much. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-12-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-13-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; This is from &amp;lt;a href=&amp;quot;http://journal.sjdm.org/16/16511/jdm16511.pdf&amp;quot;&amp;gt;Chang et al 2016&amp;lt;/a&amp;gt;. The average Brier score of answers tagged “comparison classes” was 0.17, while the next-best tag averaged 0.26. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-13-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-14-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p191 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-14-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-15-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; There is experimental evidence that superforecasters are less prone to standard cognitive science biases than ordinary people. From &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iv&amp;quot;&amp;gt;edge.org&amp;lt;/a&amp;gt;: &amp;lt;i&amp;gt;Mellers:&amp;lt;/i&amp;gt; “We have given them lots of Kahneman and Tversky-like problems to see if they fall prey to the same sorts of biases and errors. The answer is sort of, some of them do, but not as many. It’s not nearly as frequent as you see with the rest of us ordinary mortals. The other thing that’s interesting is they don’t make the kinds of mistakes that regular people make instead of the right answer. They do something that’s a little bit more thoughtful. They integrate base rates with case-specific information a little bit more.”  &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-15-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-16-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p281 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-16-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-17-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; See e.g. page 284 of &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt;, and the entirety of chapter 9. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-17-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-18-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; See Chapter 5: “Ultimately, it’s not the number crunching power that counts. It’s how you use it. … You’ve Fermi-ized the question, consulted the outside view, and now, finally, you can consult the inside view … So you have an outside view and an inside view. Now they have to be merged. …” &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-18-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-19-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p44 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-19-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-20-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; The superforecasters had a calibration of 0.01, which means that the average difference between a probability they use and the true frequency of occurrence is 0.01. This is from &amp;lt;a href=&amp;quot;https://www.researchgate.net/publication/277087515_Identifying_and_Cultivating_Superforecasters_as_a_Method_of_Improving_Probabilistic_Predictions&amp;quot;&amp;gt;Mellers et al 2015&amp;lt;/a&amp;gt;. The fact about rounding their predictions is from &amp;lt;a href=&amp;quot;https://academic.oup.com/isq/article-abstract/62/2/410/4944059?redirectedFrom=fulltext&amp;quot;&amp;gt;Friedman et al 2018&amp;lt;/a&amp;gt;. EDIT: It seems I was wrong; thanks to this commenter for noticing: https://www.metaculus.com/questions/4166/the-lightning-round-tournament-comparing-metaculus-forecasters-to-infectious-disease-experts/#comment-28756 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-20-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-21-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; For an excellent introduction to Bayesian reasoning and its theoretical foundations, see Strevens’ textbook-like &amp;lt;a href=&amp;quot;http://www.strevens.org/bct/&amp;quot;&amp;gt;lecture notes&amp;lt;/a&amp;gt;. Some of the facts summarized in this paragraph about Superforecasters and Bayesianism can be found on pages 169-172, 281, and 314 of &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-21-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-22-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; Tetlock admits that “there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious… These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out.” (&amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p243) &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-22-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-23-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; “There are several ways to look for individual consistency across questions. We sorted questions on the basis of response format (binary, multinomial, conditional, ordered), region (Eurzone, Latin America, China, etc.), and duration of question (short, medium, and long). We computed accuracy scores for each individual on each variable within each set (e.g., binary, multinomial, conditional, and ordered) and then constructed correlation matrices. For all three question types, correlations were positive… Then we conducted factor analyses. For each question type, a large proportion of the variance was captured by a single factor, consistent with the hypothesis that one underlying dimension was necessary to capture correlations among response formats, regions, and question duration.” From &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers et al 2015&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-23-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-24-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;  &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p94. Later, in the &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;edge.org seminar&amp;lt;/a&amp;gt;, Tetlock says “In some other ROC curves—receiver operator characteristic curves, from signal detection theory—that Mark Steyvers at UCSD constructed—superforecasters could assign probabilities 400 days out about as well as regular people could about eighty days out.” The quote is accompanied by a &amp;lt;a href=&amp;quot;https://www.edge.org/3rd_culture/Master%20Class%202015/Slide040.jpg&amp;quot;&amp;gt;graph&amp;lt;/a&amp;gt;; unfortunately, it’s hard to interpret. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-24-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,443 @@
+ ====== Evidence on good forecasting practices from the Good Judgment Project ======
+ 
+ // Published 07 February, 2019; last updated 17 July, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;According to experience and data from the Good Judgment Project, the following are associated with successful forecasting, in rough decreasing order of combined importance and confidence:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Past performance in the same broad domain&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Making more predictions on the same question&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Deliberation time&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Collaboration on teams&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Intelligence&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Domain expertise&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Having taken a one-hour training module on these topics&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Cognitive reflection’ test scores&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Active open-mindedness’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Aggregation of individual judgments&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Use of precise probabilistic predictions&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Use of ‘the outside view’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Fermi-izing’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;‘Bayesian reasoning’&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Practice&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== Details =====
+ 
+ 
+ ==== 1.1. Process ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The Good Judgment Project (GJP) was the winning team in IARPA’s 2011-2015 forecasting tournament. In the tournament, six teams assigned probabilistic answers to hundreds of questions about geopolitical events months to a year in the future. Each competing team used a different method for coming up with their guesses, so the tournament helps us to evaluate different forecasting methods.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The GJP team, led by Philip Tetlock and Barbara Mellers, gathered thousands of online volunteers and had them answer the tournament questions. They then made their official forecasts by aggregating these answers. In the process, the team collected data about the patterns of performance in their volunteers, and experimented with aggregation methods and improvement interventions. For example, they ran an RCT to test the effect of a short training program on forecasting accuracy. They especially focused on identifying and making use of the most successful two percent of forecasters, dubbed ‘superforecasters’.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock’s book&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Superforecasting&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;describes this process and Tetlock’s resulting understanding of how to forecast well.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.2. Correlates of successful forecasting ====
+ 
+ 
+ === 1.2.1. Past performance ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Roughly 70% of the superforecasters maintained their status from one year to the next &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p104 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Across all the forecasters, the correlation between performance in one year and performance in the next year was 0.65 &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p104 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;These high correlations are particularly impressive because the forecasters were online volunteers; presumably substantial variance year-to-year came from forecasters throttling down their engagement due to fatigue or changing life circumstances &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-3-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-3-1283&amp;quot; title=&amp;quot; Technically the forecasters were paid, up to $250 per season. (&amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p72) However, their payments did not depend on how accurate they were or how much effort they put in, beyond the minimum.&amp;amp;amp;nbsp;&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ === 1.2.2. Behavioral and dispositional variables ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Table 2  depicts the correlations between measured variables amongst GJP’s volunteers in the first two years of the tournament &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-4-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-4-1283&amp;quot; title=&amp;#039; The table is from &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers &amp;amp;lt;i&amp;amp;gt;et al&amp;amp;lt;/i&amp;amp;gt; 2015&amp;amp;lt;/a&amp;amp;gt;. “Del time” is deliberation time.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt; Each is described in more detail below.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The first column shows the relationship between each variable and standardized&amp;lt;/span&amp;gt; &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Brier_score&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Brier score&amp;lt;/span&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;, which is a measure of inaccuracy: higher Brier scores mean less accuracy, so negative correlations are good. “Ravens” is an IQ test; “Del time” is deliberation time, and “teams” is whether or not the forecaster was assigned to a team. “Actively open-minded thinking” is an attempt to measure “the tendency to evaluate arguments and evidence without undue bias from one’s own prior beliefs—and with recognition of the fallibility of one’s judgment.” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-1283&amp;quot; title=&amp;#039; “Nonetheless, as we saw in the structural model, and confirm here, the best model uses dispositional, situational, and behavioral variables. The combination produced a multiple correlation of .64.” This is from &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers &amp;amp;lt;i&amp;amp;gt;et al&amp;amp;lt;/i&amp;amp;gt; 2015&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
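The Brier scores referred to throughout can be illustrated with a short calculation. The sketch below is ours, not GJP's code: it computes the simple mean-squared-error form for binary events, while the tournament used the original multi-category Brier score, which for binary questions is twice this value (the ranking of forecasters is the same either way).

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probabilistic forecasts and 0/1 outcomes.

    Lower is better: 0 is perfect, and always answering 0.5 scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who says 0.9 when the event happens and 0.2 when it doesn't:
score = brier_score([0.9, 0.2, 0.9], [1, 0, 1])  # (0.01 + 0.04 + 0.01) / 3, i.e. about 0.02
```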
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The authors conducted various statistical analyses to explore the relationships between these variables. They computed a structural equation model to predict a forecaster’s accuracy:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Yellow ovals are latent dispositional variables, yellow rectangles are observed dispositional variables, pink rectangles are experimentally manipulated situational variables, and green rectangles are observed behavioral variables. This model has a &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Multiple_correlation&amp;quot;&amp;gt;multiple correlation&amp;lt;/a&amp;gt; of 0.64.&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-1283&amp;quot; title=&amp;#039; This is from &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers &amp;amp;lt;i&amp;amp;gt;et al&amp;amp;lt;/i&amp;amp;gt; 2015&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;As these data indicate, domain knowledge, intelligence, active open-mindedness, and working in teams each contribute substantially to accuracy. We can also conclude that effort helps, because deliberation time and number of predictions made per question (“belief updating”) both improved accuracy. Finally, training also helps. This is especially surprising because the training module lasted only an hour and its effects persisted for at least a year. The module included content about probabilistic reasoning, using the outside view, avoiding biases, and more.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.3. Aggregation algorithms ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;GJP made their official predictions by aggregating and extremizing the predictions of their volunteers.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The aggregation algorithm was elitist, meaning that it weighted more heavily people who were better on various metrics.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-7-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-7-1283&amp;quot; title=&amp;#039; On the &amp;amp;lt;a href=&amp;quot;https://goodjudgment.com/science.html&amp;quot;&amp;amp;gt;webpage&amp;amp;lt;/a&amp;amp;gt;, it says forecasters with better track-records and those who update more frequently get weighted more. In &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;these slides,&amp;amp;amp;nbsp;&amp;amp;lt;/a&amp;amp;gt;Tetlock describes the elitism differently: He says it gives weight to higher-IQ, more open-minded forecasters. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;7&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; The extremizing step pushes the aggregated judgment closer to 1 or 0, to make it more confident. 
The degree to which they extremize depends on how diverse and sophisticated the pool of forecasters is.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-8-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-8-1283&amp;quot; title=&amp;#039; The academic papers on this topic are &amp;amp;lt;a href=&amp;quot;https://www.sciencedirect.com/science/article/pii/S0169207013001635&amp;quot;&amp;amp;gt;Satopaa et al 2013&amp;amp;lt;/a&amp;amp;gt; and &amp;amp;lt;a href=&amp;quot;http://pubsonline.informs.org/doi/abs/10.1287/deca.2014.0293&amp;quot;&amp;amp;gt;Baron et al 2014&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; Whether extremizing is a good idea is still controversial.  &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-9-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-9-1283&amp;quot; title=&amp;quot; According to one expert I interviewed, more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke. After all, &amp;amp;lt;i&amp;amp;gt;a priori &amp;amp;lt;/i&amp;amp;gt;one would expect extremizing to lead to small improvements in accuracy most of the time, but big losses in accuracy some of the time.&amp;amp;amp;nbsp;&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;9&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
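To make the aggregate-then-extremize idea concrete, here is a minimal sketch. The transform p**a / (p**a + (1 - p)**a) is one common extremizing form in this literature; the exponent 2.5 and the weights below are illustrative placeholders, not GJP's actual tuned algorithm.

```python
def extremize(p, a):
    """Push an aggregate probability toward 0 or 1.

    a = 1 leaves p unchanged; a > 1 makes the aggregate more confident.
    """
    return p ** a / (p ** a + (1 - p) ** a)

def aggregate(probs, weights=None, a=2.5):
    """Weighted mean of individual forecasts, then extremized.

    The weights stand in for the elitist step (better forecasters count
    for more); uniform by default.
    """
    if weights is None:
        weights = [1.0] * len(probs)
    mean = sum(w * p for w, p in zip(weights, probs)) / sum(weights)
    return extremize(mean, a)

forecast = aggregate([0.6, 0.7, 0.8])  # mean 0.7, pushed above 0.7 by extremizing
```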
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;GJP beat all of the other teams.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;They consistently beat the control group—which was a forecast made by averaging ordinary forecasters—by more than 60%.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-10-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-10-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p18. &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; They  also beat a prediction market inside the intelligence community—populated by professional analysts with access to classified information—by 25-30%. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-11-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-11-1283&amp;quot; title=&amp;#039; This is from &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;this seminar&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;11&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;That said, individual superforecasters did almost as well, so the elitism of the algorithm may account for a lot of its success.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-12-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-12-1283&amp;quot; title=&amp;#039; For example, in year 2 one superforecaster beat the extremizing algorithm. More generally, as discussed in &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;this seminar&amp;amp;lt;/a&amp;amp;gt;, the aggregation algorithm produces the greatest improvement with ordinary forecasters; the superforecasters were good enough that it didn’t help much.&amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;12&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.4. Outside View ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The forecasters who received training were asked to record, for each prediction, which parts of the training they used to make it. Some parts of the training—e.g. “Post-mortem analysis”—were correlated with inaccuracy, but others—most notably “Comparison classes”—were correlated with accuracy.&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-13-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-13-1283&amp;quot; title=&amp;#039; This is from &amp;amp;lt;a href=&amp;quot;http://journal.sjdm.org/16/16511/jdm16511.pdf&amp;quot;&amp;amp;gt;Chang et al 2016&amp;amp;lt;/a&amp;amp;gt;. The average brier score of answers tagged “comparison classes” was 0.17, while the next-best tag averaged 0.26.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;13&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;  ‘Comparison classes’ is another term for&amp;lt;/span&amp;gt; &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Reference_class_forecasting&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;reference-class forecasting&amp;lt;/span&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;, also known as ‘the outside view’. It is the method of assigning a probability by straightforward extrapolation from similar past situations and their outcomes.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
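A minimal illustration of reference-class forecasting: the base rate of an outcome across similar past cases becomes the starting estimate, before any inside-view adjustment. The reference class below is hypothetical, not an example from the book.

```python
def outside_view(reference_class):
    """Base rate of an outcome across a list of (description, happened) cases."""
    hits = sum(1 for _, happened in reference_class if happened)
    return hits / len(reference_class)

# Hypothetical reference class: did comparable regimes survive the year
# following mass protests?
cases = [("case A", True), ("case B", True), ("case C", False), ("case D", True)]
base_rate = outside_view(cases)  # 0.75
```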
+ 
+ 
+ 
+ 
+ ==== 1.5. Tetlock’s “Portrait of the modal superforecaster” ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This subsection and those that follow will lay out some more qualitative results, things that Tetlock recommends on the basis of his research and interviews with superforecasters. Here is Tetlock’s “portrait of the modal superforecaster:” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-14-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-14-1283&amp;quot; title=&amp;quot; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p191 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;14&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Philosophic outlook:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Cautious:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Nothing is certain.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Humble:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Reality is infinitely complex.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Nondeterministic:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Whatever happens is not meant to be and does not have to happen.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Abilities &amp;amp;amp; thinking styles:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Actively open-minded:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Beliefs are hypotheses to be tested, not treasures to be protected.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Intelligent and knowledgeable, with a “Need for Cognition”:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Intellectually curious, enjoy puzzles and mental challenges.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Reflective:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Introspective and self-critical&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Numerate:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Comfortable with numbers&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Methods of forecasting:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Pragmatic:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Not wedded to any idea or agenda&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Analytical:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Capable of stepping back from the tip-of-your-nose perspective and considering other views&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Dragonfly-eyed:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Value diverse views and synthesize them into their own&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Probabilistic:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Judge using many grades of maybe&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Thoughtful updaters:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;When facts change, they change their minds&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Good intuitive psychologists:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Aware of the value of checking thinking for cognitive and emotional biases &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-15-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-15-1283&amp;quot; title=&amp;#039; There is experimental evidence that superforecasters are less prone to standard cognitive science biases than ordinary people. From &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iv&amp;quot;&amp;amp;gt;edge.org&amp;amp;lt;/a&amp;amp;gt;: &amp;amp;lt;i&amp;amp;gt;Mellers: &amp;amp;lt;/i&amp;amp;gt;“We have given them lots of Kahneman and Tversky-like problems to see if they fall prey to the same sorts of biases and errors. The answer is sort of, some of them do, but not as many. It’s not nearly as frequent as you see with the rest of us ordinary mortals. The other thing that’s interesting is they don’t make the kinds of mistakes that regular people make instead of the right answer. They do something that’s a little bit more thoughtful. They integrate base rates with case-specific information a little bit more.” &amp;amp;amp;nbsp;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;15&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;Work ethic:&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Growth mindset:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Believe it’s possible to get better&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;b&amp;gt;Grit:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Determined to keep at it however long it takes&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.6. Tetlock’s “Ten Commandments for Aspiring Superforecasters:” ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This advice is given at the end of the book, and may make less sense to someone who hasn’t read the book. A full transcript of these commandments can be found&amp;lt;/span&amp;gt; &amp;lt;a href=&amp;quot;https://www.lesswrong.com/posts/dvYeSKDRd68GcrWoe/ten-commandments-for-aspiring-superforecasters&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;here&amp;lt;/span&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;; this is a summary:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(1) Triage:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Don’t waste time on questions that are “clocklike” where a rule of thumb can get you pretty close to the correct answer, or “cloudlike” where even fancy models can’t beat a dart-throwing chimp.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(2) Break seemingly intractable problems into tractable sub-problems:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This is how Fermi estimation works. One related piece of advice is “be wary of accidentally substituting an easy question for a hard one,” e.g. substituting “Would Israel be willing to assassinate Yasser Arafat?” for “Will at least one of the tests for polonium in Arafat’s body turn up positive?”&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(3) Strike the right balance between inside and outside views:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;In particular,&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;first&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;anchor with the outside view and&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;then&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;adjust using the inside view.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(4) Strike the right balance between under- and overreacting to evidence:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Usually do many small updates, but occasionally do big updates when the situation calls for it. Remember to think about P(E|H)/P(E|~H); remember to avoid the base-rate fallacy. “Superforecasters aren’t perfect Bayesian predictors but they are much better than most of us.” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-16-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-16-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;&amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p281 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;16&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(5) Look for the clashing causal forces at work in each problem:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This is the “dragonfly eye perspective,” which is where you attempt to do a sort of mental wisdom of the crowds: Have tons of different causal models and aggregate their judgments. Use “Devil’s advocate” reasoning. If you think that P, try hard to convince yourself that not-P. You should find yourself saying “On the one hand… on the other hand… on the third hand…” a lot.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(6) Strive to distinguish as many degrees of doubt as the problem permits but no more.&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(7) Strike the right balance between under- and overconfidence, between prudence and decisiveness.&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(8) Look for the errors behind your mistakes but beware of rearview-mirror hindsight biases.&amp;lt;/b&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(9) Bring out the best in others and let others bring out the best in you.&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The book spent a whole chapter on this, using the Wehrmacht as an extended case study on good team organization. One pervasive guiding principle is “Don’t tell people how to do things; tell them what you want accomplished, and they’ll surprise you with their ingenuity in doing it.” The other pervasive guiding principle is “Cultivate a culture in which people—even subordinates—are encouraged to dissent and give counterarguments.” &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-17-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-17-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;See e.g. page 284 of &amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt;, and the entirety of chapter 9. &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;17&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(10) Master the error-balancing bicycle:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This one should have been called practice, practice, practice. Tetlock says that reading the news and generating probabilities isn’t enough; you need to actually score your predictions so that you know how wrong you were.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;b&amp;gt;(11) Don’t treat commandments as commandments:&amp;lt;/b&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock’s point here is simply that you should use your judgment about whether to follow a commandment or not; sometimes they should be overridden.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 1.7. Recipe for Making Predictions ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock describes how superforecasters go about making their predictions.&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt; &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-18-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-18-1283&amp;quot; title=&amp;quot; See Chapter 5: “Ultimately, it’s not the number crunching power that counts. It’s how you use it. … You’ve Fermi-ized the question, consulted the outside view, and now, finally, you can consult the inside view … So you have an outside view and an inside view. Now they have to be merged. …” &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;18&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; Here is an attempt at a summary:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Sometimes a question can be answered more rigorously if it is first “Fermi-ized,” i.e. broken down into sub-questions for which more rigorous methods can be applied.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Next, use the outside view on the sub-questions (and/or the main question, if possible). You may then adjust your estimates using other considerations (‘the inside view’), but do this cautiously.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Seek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Repeat steps 1 – 3 until you hit diminishing returns.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Your final prediction should be based on an aggregation of various models, reference classes, other experts, etc.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
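Step 1 of this recipe can be sketched as a chain of conditional probabilities: once a question is Fermi-ized into stages that must all come out true, the sub-estimates multiply. The stages and numbers below are hypothetical, and real decompositions need not be purely conjunctive like this one.

```python
def chain(stages):
    """Multiply conditional probabilities for a conjunctive decomposition.

    Each stage is (description, probability given the earlier stages hold).
    """
    p = 1.0
    for _, prob in stages:
        p *= prob
    return p

# Hypothetical decomposition of a forecasting question:
estimate = chain([
    ("the attempt is made", 0.8),
    ("it succeeds, given it is attempted", 0.5),
    ("it is reported, given it succeeds", 0.9),
])  # about 0.36
```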
+ 
+ 
+ 
+ 
+ ==== 1.8. Bayesian reasoning &amp;amp; precise probabilistic forecasts ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Humans normally express uncertainty with terms like “maybe” and “almost certainly” and “a significant chance.” Tetlock advocates for thinking and speaking in probabilities instead. He recounts many anecdotes of misunderstandings that might have been avoided this way. For example:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;In 1961, when the CIA was planning to topple the Castro government by landing a small army of Cuban expatriates at the Bay of Pigs, President John F. Kennedy turned to the military for an unbiased assessment. The Joint Chiefs of Staff concluded that the plan had a “fair chance” of success. The man who wrote the words “fair chance” later said he had in mind odds of 3 to 1 against success. But Kennedy was never told precisely what “fair chance” meant and, not unreasonably, he took it to be a much more positive assessment. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-19-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-19-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;&amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt; 44 &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;19&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;This example hints at another advantage of probabilistic judgments: It’s harder to weasel out of them afterwards, and therefore easier to keep score. Keeping score is crucial for getting feedback from reality, which is crucial for building up expertise.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;A standard criticism of using probabilities is that they merely conceal uncertainty rather than quantify it—after all, the numbers you pick are themselves guesses. This may be true for people who haven’t practiced much, but it isn’t true for superforecasters, who are impressively well-calibrated and whose accuracy suffers when their predictions are rounded to the nearest 0.1.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-20-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-20-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp;The superforecasters had a calibration of 0.01, which means that the average difference between a probability they use and the true frequency of occurrence is 0.01. This is from &amp;amp;lt;a href=&amp;quot;https://www.researchgate.net/publication/277087515_Identifying_and_Cultivating_Superforecasters_as_a_Method_of_Improving_Probabilistic_Predictions&amp;quot;&amp;amp;gt;Mellers et al 2015&amp;amp;lt;/a&amp;amp;gt;. The fact about rounding their predictions is from &amp;amp;lt;a href=&amp;quot;https://academic.oup.com/isq/article-abstract/62/2/410/4944059?redirectedFrom=fulltext&amp;quot;&amp;amp;gt;Friedman et al 2018&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;20&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
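The scoring ideas in the paragraph above can be made concrete with a small Brier-score calculation. This is an illustrative sketch only: the forecasts and outcomes are invented, not GJP data, and `brier` is a hypothetical helper implementing the standard mean-squared-error scoring rule; it shows how rounding a well-calibrated forecaster's probabilities can worsen the score.

```python
# Sketch: Brier score of some invented forecasts, before and after
# rounding the probabilities to the nearest 0.1. All numbers are
# illustrative assumptions, not data from the Good Judgment Project.

def brier(forecasts, outcomes):
    """Mean squared difference between probabilities and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

forecasts = [0.92, 0.85, 0.15, 0.65, 0.08]  # invented probability judgments
outcomes  = [1,    1,    0,    1,    0]     # what actually happened (invented)

fine = brier(forecasts, outcomes)
coarse = brier([round(f * 10) / 10 for f in forecasts], outcomes)  # nearest 0.1
print(fine, coarse)  # on this example, rounding worsens (raises) the score
```

The design point is simply that information lives in the fine gradations: collapsing 0.92 and 0.85 toward the same coarse bucket throws some of it away.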
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Bayesian reasoning is a natural next step once you are thinking and talking probabilities—it is the theoretical ideal in several important ways &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-21-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-21-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp;For an excellent introduction to Bayesian reasoning and its theoretical foundations, see Strevens’ textbook-like &amp;amp;lt;a href=&amp;quot;http://www.strevens.org/bct/&amp;quot;&amp;amp;gt;lecture notes&amp;amp;lt;/a&amp;amp;gt;. Some of the facts summarized in this paragraph about Superforecasters and Bayesianism can be found on pages 169-172, 281, and 314 of &amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;21&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;—and Tetlock’s experience and interviews with superforecasters seem to bear this out. Superforecasters seem to do many small updates, with occasional big updates, just as Bayesianism would predict. They recommend thinking in the Bayesian way, and often explicitly make Bayesian calculations. They are good at breaking down difficult questions into more manageable parts and chaining the probabilities together properly.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
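The kind of explicit Bayesian calculation described above can be sketched in a few lines. All probabilities here are invented for illustration, and `bayes_update` is a hypothetical helper, not code from any forecasting platform: it applies Bayes' rule once, turning a base rate (the outside view) and case-specific evidence (the inside view) into a posterior.

```python
# Sketch of a single Bayesian update of the kind superforecasters are
# described as making. All numbers below are illustrative assumptions.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) via Bayes' rule from P(H) and the two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Start from a base rate (outside view), then update on case-specific news.
prior = 0.30  # assumed base rate for the event
posterior = bayes_update(prior,
                         p_evidence_given_h=0.80,      # evidence likely if H true
                         p_evidence_given_not_h=0.40)  # still possible if H false
print(round(posterior, 3))  # a modest update, not a wild swing
```

Chaining several such updates, one per piece of evidence, is one way to read the "many small updates" pattern described above.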
+ 
+ 
+ 
+ 
+ ===== 2. Discussion: Relevance to AI Forecasting =====
+ 
+ 
+ ==== 2.1. Limitations ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;A major limitation is that the forecasts were mainly on geopolitical events only a few years in the future at most. (Uncertain geopolitical events seem to be somewhat predictable up to two years out but much more difficult to predict five years out.) &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-22-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-22-1283&amp;quot; title=&amp;quot;&amp;amp;amp;nbsp;Tetlock admits that &amp;amp;amp;#8220;there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious&amp;amp;amp;#8230; These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out.&amp;amp;amp;#8221; (&amp;amp;lt;i&amp;amp;gt;Superforecasting&amp;amp;lt;/i&amp;amp;gt; p243) &amp;quot;&amp;gt;&amp;lt;sup&amp;gt;22&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt; So evidence from the GJP may not generalize to forecasting other types of events (e.g. technological progress and social consequences) or events further in the future.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;That said, the forecasting best practices discovered by this research are not overtly specific to geopolitics or near-term events. Also, geopolitical questions are diverse and accuracy on some was highly correlated with accuracy on others. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-23-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-23-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp;&amp;amp;amp;#8220;There are several ways to look for individual consistency across questions. We sorted questions on the basis of response format (binary, multinomial, conditional, ordered), region (Eurozone, Latin America, China, etc.), and duration of question (short, medium, and long). We computed accuracy scores for each individual on each variable within each set (e.g., binary, multinomial, conditional, and ordered) and then constructed correlation matrices. For all three question types, correlations were positive&amp;amp;amp;#8230; Then we conducted factor analyses. For each question type, a large proportion of the variance was captured by a single factor, consistent with the hypothesis that one underlying dimension was necessary to capture correlations among response formats, regions, and question duration.&amp;amp;amp;#8221; From &amp;amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;amp;gt;Mellers et al 2015&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;23&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Tetlock has an idea for how to handle longer-term, nebulous questions. He calls it “Bayesian Question Clustering.” (&amp;lt;/span&amp;gt;&amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Superforecasting&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;p263) The idea is to take the question you really want to answer and look for more precise questions that are evidentially relevant to the question you care about. Tetlock intends to test the effectiveness of this idea in future research.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ==== 2.2. Value ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;The benefits of following these best practices (including identifying and aggregating the best forecasters) appear to be substantial: Superforecasters predicting events 300 days in the future were more accurate than regular forecasters predicting events 100 days in the future, and the GJP did even better. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-24-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-24-1283&amp;quot; title=&amp;#039;&amp;amp;amp;nbsp; &amp;amp;lt;i&amp;amp;gt;Superforecasting &amp;amp;lt;/i&amp;amp;gt;p94. Later, in the &amp;amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;amp;gt;edge.org seminar&amp;amp;lt;/a&amp;amp;gt;, Tetlock says “In some other ROC curves—receiver operator characteristic curves, from signal detection theory—that Mark Steyvers at UCSD constructed—superforecasters could assign probabilities 400 days out about as well as regular people could about eighty days out.” The quote is accompanied by a &amp;amp;lt;a href=&amp;quot;https://www.edge.org/3rd_culture/Master%20Class%202015/Slide040.jpg&amp;quot;&amp;amp;gt;graph&amp;amp;lt;/a&amp;amp;gt;; unfortunately, it’s hard to interpret. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;24&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;If these benefits generalize beyond the short-term and beyond geopolitics—e.g. to long-term technological and societal development—then this research is highly useful to almost everyone. Even if the benefits do not generalize beyond the near-term, these best practices may still be well worth adopting. For example, it would be extremely useful to have 300 days of warning before strategically important AI milestones are reached, rather than 100.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ===== 3. Contributions =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400;&amp;quot;&amp;gt;Research, analysis, and writing were done by Daniel Kokotajlo. Katja Grace and Justis Mills contributed feedback and editing. Tegan McCaslin, Carl Shulman, and Jacob Lagerros contributed feedback.&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ ===== 4. Footnotes =====
+ 
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p104 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p104 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-3-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; Technically the forecasters were paid, up to $250 per season. (&amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p72) However, their payments did not depend on how accurate they were or how much effort they put in, beyond the minimum. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-3-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-4-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; The table is from &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers &amp;lt;i&amp;gt;et al&amp;lt;/i&amp;gt; 2015&amp;lt;/a&amp;gt;. “Del time” is deliberation time. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-4-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; “Nonetheless, as we saw in the structural model, and confirm here, the best model uses dispositional, situational, and behavioral variables. The combination produced a multiple correlation of .64.” This is from &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers &amp;lt;i&amp;gt;et al&amp;lt;/i&amp;gt; 2015&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-6-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; This is from &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers &amp;lt;i&amp;gt;et al&amp;lt;/i&amp;gt; 2015&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-6-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-7-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; On the &amp;lt;a href=&amp;quot;https://goodjudgment.com/science.html&amp;quot;&amp;gt;webpage&amp;lt;/a&amp;gt;, it says forecasters with better track records and those who update more frequently get weighted more. In &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;these slides&amp;lt;/a&amp;gt;, Tetlock describes the elitism differently: He says it gives weight to higher-IQ, more open-minded forecasters. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-7-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-8-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; The academic papers on this topic are &amp;lt;a href=&amp;quot;https://www.sciencedirect.com/science/article/pii/S0169207013001635&amp;quot;&amp;gt;Satopaa et al 2013&amp;lt;/a&amp;gt; and &amp;lt;a href=&amp;quot;http://pubsonline.informs.org/doi/abs/10.1287/deca.2014.0293&amp;quot;&amp;gt;Baron et al 2014&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-8-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-9-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; According to one expert I interviewed, more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke. After all, &amp;lt;i&amp;gt;a priori&amp;lt;/i&amp;gt; one would expect extremizing to lead to small improvements in accuracy most of the time, but big losses in accuracy some of the time. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-9-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-10-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p18. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-10-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-11-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; This is from &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;this seminar&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-11-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-12-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; For example, in year 2 one superforecaster beat the extremizing algorithm. More generally, as discussed in &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;this seminar&amp;lt;/a&amp;gt;, the aggregation algorithm produces the greatest improvement with ordinary forecasters; the superforecasters were good enough that it didn’t help much. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-12-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-13-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; This is from &amp;lt;a href=&amp;quot;http://journal.sjdm.org/16/16511/jdm16511.pdf&amp;quot;&amp;gt;Chang et al 2016&amp;lt;/a&amp;gt;. The average Brier score of answers tagged “comparison classes” was 0.17, while the next-best tag averaged 0.26. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-13-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-14-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p191 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-14-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-15-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; There is experimental evidence that superforecasters are less prone to standard cognitive science biases than ordinary people. From &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-iv&amp;quot;&amp;gt;edge.org&amp;lt;/a&amp;gt;: &amp;lt;i&amp;gt;Mellers:&amp;lt;/i&amp;gt; “We have given them lots of Kahneman and Tversky-like problems to see if they fall prey to the same sorts of biases and errors. The answer is sort of, some of them do, but not as many. It’s not nearly as frequent as you see with the rest of us ordinary mortals. The other thing that’s interesting is they don’t make the kinds of mistakes that regular people make instead of the right answer. They do something that’s a little bit more thoughtful. They integrate base rates with case-specific information a little bit more.”  &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-15-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-16-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p281 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-16-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-17-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; See e.g. page 284 of &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt;, and the entirety of chapter 9. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-17-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-18-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; See Chapter 5: “Ultimately, it’s not the number crunching power that counts. It’s how you use it. … You’ve Fermi-ized the question, consulted the outside view, and now, finally, you can consult the inside view … So you have an outside view and an inside view. Now they have to be merged. …” &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-18-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-19-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p44 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-19-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-20-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; The superforecasters had a calibration of 0.01, which means that the average difference between a probability they use and the true frequency of occurrence is 0.01. This is from &amp;lt;a href=&amp;quot;https://www.researchgate.net/publication/277087515_Identifying_and_Cultivating_Superforecasters_as_a_Method_of_Improving_Probabilistic_Predictions&amp;quot;&amp;gt;Mellers et al 2015&amp;lt;/a&amp;gt;. The fact about rounding their predictions is from &amp;lt;a href=&amp;quot;https://academic.oup.com/isq/article-abstract/62/2/410/4944059?redirectedFrom=fulltext&amp;quot;&amp;gt;Friedman et al 2018&amp;lt;/a&amp;gt;. An earlier version said the rounding was to the nearest 0.05; thanks to this commenter for the correction: https://www.metaculus.com/questions/4166/the-lightning-round-tournament-comparing-metaculus-forecasters-to-infectious-disease-experts/#comment-28756 &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-20-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-21-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; For an excellent introduction to Bayesian reasoning and its theoretical foundations, see Strevens’ textbook-like &amp;lt;a href=&amp;quot;http://www.strevens.org/bct/&amp;quot;&amp;gt;lecture notes&amp;lt;/a&amp;gt;. Some of the facts summarized in this paragraph about Superforecasters and Bayesianism can be found on pages 169-172, 281, and 314 of &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-21-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-22-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; Tetlock admits that “there is no evidence that geopolitical or economic forecasters can predict anything ten years out beyond the excruciatingly obvious… These limits on predictability are the predictable results of the butterfly dynamics of nonlinear systems. In my EPJ research, the accuracy of expert predictions declined toward chance five years out.” (&amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p243) &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-22-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-23-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; “There are several ways to look for individual consistency across questions. We sorted questions on the basis of response format (binary, multinomial, conditional, ordered), region (Eurozone, Latin America, China, etc.), and duration of question (short, medium, and long). We computed accuracy scores for each individual on each variable within each set (e.g., binary, multinomial, conditional, and ordered) and then constructed correlation matrices. For all three question types, correlations were positive… Then we conducted factor analyses. For each question type, a large proportion of the variance was captured by a single factor, consistent with the hypothesis that one underlying dimension was necessary to capture correlations among response formats, regions, and question duration.” From &amp;lt;a href=&amp;quot;https://www.apa.org/pubs/journals/releases/xap-0000040.pdf&amp;quot;&amp;gt;Mellers et al 2015&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-23-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-24-1283&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;  &amp;lt;i&amp;gt;Superforecasting&amp;lt;/i&amp;gt; p94. Later, in the &amp;lt;a href=&amp;quot;https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii&amp;quot;&amp;gt;edge.org seminar&amp;lt;/a&amp;gt;, Tetlock says “In some other ROC curves—receiver operator characteristic curves, from signal detection theory—that Mark Steyvers at UCSD constructed—superforecasters could assign probabilities 400 days out about as well as regular people could about eighty days out.” The quote is accompanied by a &amp;lt;a href=&amp;quot;https://www.edge.org/3rd_culture/Master%20Class%202015/Slide040.jpg&amp;quot;&amp;gt;graph&amp;lt;/a&amp;gt;; unfortunately, it’s hard to interpret. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-24-1283&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Fiction relevant to AI futurism</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/fiction_relevant_to_ai_futurism?rev=1667620518&amp;do=diff"/>
        <published>2022-11-05T03:55:18+00:00</published>
        <updated>2022-11-05T03:55:18+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/fiction_relevant_to_ai_futurism?rev=1667620518&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -60,9 +60,9 @@
  
  ==== Collection ====
  
  
- The collection can also be seen full screen [[&amp;quot;https://airtable.com/shr5EIpLNHB7o2q9Z/tblMVjRvMKVNkoZVg?backgroundColor=cyan&amp;amp;amp;viewControls=on&amp;quot;|here]] or as a table [[https://airtable.com/shrVnjq9U53R5nrxO&amp;quot;|here]]
+ The collection can also be seen [[https://airtable.com/shr5EIpLNHB7o2q9Z/tblMVjRvMKVNkoZVg?backgroundColor=cyan&amp;amp;amp;viewControls=on|here]] or as a table [[https://airtable.com/shrVnjq9U53R5nrxO|here]]
  
  
  
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -60,9 +60,9 @@
  
  ==== Collection ====
  
  
- The collection can also be seen full screen [[&amp;quot;https://airtable.com/shr5EIpLNHB7o2q9Z/tblMVjRvMKVNkoZVg?backgroundColor=cyan&amp;amp;amp;viewControls=on&amp;quot;|here]] or as a table [[https://airtable.com/shrVnjq9U53R5nrxO&amp;quot;|here]]
+ The collection can also be seen [[https://airtable.com/shr5EIpLNHB7o2q9Z/tblMVjRvMKVNkoZVg?backgroundColor=cyan&amp;amp;amp;viewControls=on|here]] or as a table [[https://airtable.com/shrVnjq9U53R5nrxO|here]]
  
  
  
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Glossary of AI Risk Terminology and common AI terms</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/glossary_of_ai_risk_terminology_and_common_ai_terms?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/glossary_of_ai_risk_terminology_and_common_ai_terms?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,649 @@
+ ====== Glossary of AI Risk Terminology and common AI terms ======
+ 
+ // Published 30 October, 2015; last updated 21 January, 2022 //
+ 
+ 
+ ===== Terms =====
+ 
+ 
+ ==== A ====
+ 
+ 
+ === AI timeline ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An expectation about how much time will elapse before important AI events, especially the advent of &amp;lt;em&amp;gt;&amp;lt;a href=&amp;quot;/doku.php?id=clarifying_concepts:human-level_ai&amp;quot;&amp;gt;human-level AI&amp;lt;/a&amp;gt;&amp;lt;/em&amp;gt; or a similar milestone. The term can also refer to the actual periods of time (which are not yet known), rather than an expectation about them.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Artificial General Intelligence (also, AGI) ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Skill at performing intellectual tasks across at least as wide a range as a human being is capable of, as opposed to skill at certain specific tasks (‘narrow’ AI). That is, synonymous with the more ambiguous &amp;lt;em&amp;gt;Human-Level AI&amp;lt;/em&amp;gt; for some meanings of the latter&amp;lt;em&amp;gt;.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Artificial Intelligence (also, AI) ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Behavior characteristic of human minds exhibited by man-made machines, and also the area of research focused on developing machines with such behavior. Sometimes used informally to refer to &amp;lt;em&amp;gt;human-level AI&amp;lt;/em&amp;gt; or another strong form of AI not yet developed.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Associative value accretion ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to value learning in which the AI acquires values using some machinery for synthesizing appropriate new values as it interacts with its environment, inspired by the way humans appear to acquire values (Bostrom 2014, p189-190)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-358&amp;quot; title=&amp;quot;Bostrom, Nick. &amp;amp;lt;em&amp;amp;gt;Superintelligence: Paths, Dangers, Strategies&amp;amp;lt;/em&amp;amp;gt;. 1st edition. Oxford: Oxford University Press, 2014.&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Anthropic capture ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized control method in which the AI thinks it might be in a simulation, and so tries to behave in ways that will be rewarded by its simulators (Bostrom 2014, p134).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Anthropic reasoning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Reaching beliefs (posterior probabilities) over states of the world and your location in it, from priors over possible physical worlds (without your location specified) and evidence about your own situation. For an example where this is controversial, see &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Sleeping_Beauty_problem&amp;quot;&amp;gt;The Sleeping Beauty Problem&amp;lt;/a&amp;gt;. For more on the topic and its relation to AI, see &amp;lt;a href=&amp;quot;https://meteuphoric.wordpress.com/anthropic-principles/&amp;quot;&amp;gt;here&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Augmentation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to obtaining a superintelligence with desirable motives that consists of beginning with a creature with desirable motives (eg, a human), then making it smarter, instead of designing good motives from scratch (Bostrom 2014, p142).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== B ====
+ 
+ 
+ === Backpropagation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A fast method of computing the derivative of cost with respect to different parameters in a network, allowing for training neural nets through gradient descent. See &amp;lt;a href=&amp;quot;http://neuralnetworksanddeeplearning.com/chap2.html&amp;quot;&amp;gt;Neural Networks and Deep Learning&amp;lt;/a&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-358&amp;quot; title=&amp;#039;Nielsen, Michael A. “Neural Networks and Deep Learning,” 2015. &amp;amp;lt;a href=&amp;quot;http://neuralnetworksanddeeplearning.com/&amp;quot;&amp;amp;gt;http://neuralnetworksanddeeplearning.com&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; for a full explanation.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
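To make the chain-rule idea concrete, here is a minimal Python sketch (not from the glossary) of backpropagation through a one-hidden-unit sigmoid network; the network shape, squared-error cost, and parameter names are arbitrary choices for the example:

```python
import math

def backprop(x, target, w1, w2):
    # Forward pass: one sigmoid hidden unit, one linear output.
    h = 1.0 / (1.0 + math.exp(-(w1 * x)))
    y = w2 * h
    cost = 0.5 * (y - target) ** 2
    # Backward pass: propagate dCost back through each layer via the chain rule.
    dy = y - target                # dCost/dy
    dw2 = dy * h                   # dCost/dw2
    dh = dy * w2                   # dCost/dh
    dw1 = dh * h * (1.0 - h) * x   # sigmoid'(z) = h * (1 - h), chained to w1
    return cost, dw1, dw2
```

Gradient descent then nudges each weight against its derivative, e.g. `w1 -= learning_rate * dw1`; the point of backpropagation is that all derivatives fall out of one backward sweep, rather than re-running the network once per parameter.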
+ 
+ 
+ === Boxing ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A control method that consists of constructing the AI’s environment so as to minimize interaction between the AI and the outside world (Bostrom 2014, p129).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== C ====
+ 
+ 
+ === Capability control methods ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Strategies for avoiding undesirable outcomes by limiting what an AI can do (Bostrom 2014, p129).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Cognitive enhancement ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Improvements to an agent’s mental abilities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Collective superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system” (Bostrom 2014, p54).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Computation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A sequence of mechanical operations intended to shed light on something other than this mechanical process itself, through an established relationship between the process and the object of interest.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === The common good principle ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals” (Bostrom 2014, p254).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Crucial consideration ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An idea with the potential to change our views substantially, such as by reversing the sign of the desirability of important interventions.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== D ====
+ 
+ 
+ === Decisive strategic advantage ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Strategic superiority (by technology or other means) sufficient to enable an agent to unilaterally control most of the resources of the universe.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Direct specification ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the control problem in which the programmers figure out what humans value, and code it into the AI (Bostrom 2014, p139-40).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Domesticity ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the control problem in which the AI is given goals that limit the range of things it wants to interfere with (Bostrom 2014, p140-1).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== E ====
+ 
+ 
+ === Emulation modulation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Starting with brain emulations with approximately normal human motivations (see ‘Augmentation’), and modifying their motivations using drugs or digital drug analogs.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Evolutionary selection approach to value learning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to the value learning problem which obtains an AI with desirable values by iterative selection, the same way evolutionary selection produced humans  (Bostrom 2014, p187-8).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Existential risk ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Risk of an adverse outcome that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential &amp;lt;a href=&amp;quot;http://www.nickbostrom.com/existential/risks.html&amp;quot;&amp;gt;(Bostrom 2002)&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== F ====
+ 
+ 
+ === Feature ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A dimension in the vector space of activations in a single layer of a neural network (i.e. a neuron activation or linear combination of activations of different neurons).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
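In code, this definition (a direction in a layer's activation space) amounts to a dot product; a minimal sketch, with made-up activation values:

```python
def feature_activation(layer_activations, direction):
    # A feature's value on an input is the dot product of the layer's
    # activation vector with the feature's direction vector.
    return sum(a * d for a, d in zip(layer_activations, direction))

acts = [0.2, -1.0, 0.7]                                 # one layer's activations (invented)
single_neuron = feature_activation(acts, [0, 1, 0])     # one-hot direction reads off neuron 1
combination = feature_activation(acts, [0.5, 0, 0.5])   # linear combination of neurons 0 and 2
```

A single neuron is thus the special case of a one-hot direction; the definition's generality lies in allowing arbitrary linear combinations.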
+ 
+ 
+ === First principal-agent problem ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The well-known problem faced by a sponsor wanting an employee to fulfill their wishes (usually called ‘the principal agent problem’).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== G ====
+ 
+ 
+ === Genie ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that carries out a high level command, then waits for another (Bostrom 2014, p148).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== H ====
+ 
+ 
+ === Hardware overhang ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A situation where large amounts of hardware being used for other purposes become available for AI, usually posited to occur when AI reaches human-level capabilities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Human-level AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that matches human capabilities in virtually every domain of interest.  Note that this term is used ambiguously; see &amp;lt;a href=&amp;quot;/doku.php?id=clarifying_concepts:human-level_ai&amp;quot;&amp;gt;our page on human-level AI&amp;lt;/a&amp;gt;.  &amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Human-level hardware ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Hardware that matches the information-processing ability of the human brain.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Human-level software ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Software that matches the algorithmic efficiency of the human brain, for doing the tasks the human brain does.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== I ====
+ 
+ 
+ === Impersonal perspective ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The view that one should act in the best interests of everyone, including those who may be brought into existence by one’s choices (see Person-affecting perspective).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Incentive methods ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Strategies for controlling an AI that consist of setting up the AI’s environment such that it is in the AI’s interest to cooperate. For example, a social environment with punishment or social repercussions often achieves this for contemporary agents (Bostrom 2014, p131).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Incentive wrapping ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Provisions in the goals given to an AI that allocate extra rewards to those who helped bring the AI about  (Bostrom 2014, p222-3).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Indirect normativity ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the control problem in which we specify a way to specify what we value, instead of specifying what we value directly (Bostrom 2014, p141-2).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Instrumental convergence thesis ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We can identify ‘convergent instrumental values’: subgoals that are useful for a wide range of more fundamental goals, and in a wide range of situations (Bostrom 2014, p109).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Intelligence explosion ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized event in which an AI rapidly improves from ‘relatively modest’ to superhuman level (usually imagined to be as a result of recursive self-improvement).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== M ====
+ 
+ 
+ === Macrostructural development accelerator ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An imagined lever used in thought experiments which slows the large scale features of history (e.g. technological change, geopolitical dynamics) while leaving the small scale features the same.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Mind crime ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The mistreatment of morally relevant computations.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Moore’s Law ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Any of several different consistent, many-decade patterns of exponential improvement that have been observed in digital technologies. The classic version concerns the number of transistors in a dense integrated circuit, which was observed to be doubling around every year when the ‘law’ was formulated in &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Moore%27s_law&amp;quot;&amp;gt;1965&amp;lt;/a&amp;gt;. &amp;lt;a href=&amp;quot;/doku.php?id=featured_articles:glossary_of_ai_risk_terminology_and_common_ai_terms#Price-Performance_Moores_Law&amp;quot;&amp;gt;Price-Performance Moore’s Law&amp;lt;/a&amp;gt; is often relevant to AI forecasting.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
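The arithmetic behind any such doubling pattern is simple enough to sketch; the function below is illustrative, not tied to any particular measured doubling time:

```python
def improvement_factor(years, doubling_time_years):
    # Exponential improvement: capacity doubles once per doubling time,
    # so over `years` it grows by 2 ** (years / doubling_time_years).
    return 2 ** (years / doubling_time_years)

# E.g. with a (hypothetical) two-year doubling time, a decade gives 2**5 = 32x.
decade_gain = improvement_factor(10, 2)
```

The same formula applies whether the quantity is transistor counts or computation per dollar; only the empirical doubling time differs between versions of the ‘law’.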
+ 
+ 
+ === Moral rightness (MR) AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI which seeks to do what is morally right.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Motivational scaffolding ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to value learning in which the seed AI is given simple goals, and these goals are replaced with more complex ones once it has developed sufficiently sophisticated representational structure (Bostrom 2014, p191-192).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Multipolar outcome ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A situation after the arrival of superintelligence in which no single agent controls most of the resources.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== O ====
+ 
+ 
+ === Optimization power ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The strength of a process’s ability to improve systems.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Oracle ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that only answers questions (Bostrom 2014, p145).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Orthogonality thesis ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== P ====
+ 
+ 
+ === Person-affecting perspective ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The view that one should act in the best interests of everyone who already exists, or who will exist independent of one’s choices (see Impersonal perspective).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Perverse instantiation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A solution to a posed goal (eg, make humans smile) that is destructive in unforeseen ways (eg, paralyzing face muscles in the smiling position).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Price-Performance Moore’s Law ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:trends_in_the_cost_of_computing&amp;quot;&amp;gt;observed pattern&amp;lt;/a&amp;gt; of relatively consistent, long term, exponential price decline for computation.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Principle of differential technological development ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“Retard the development of dangerous and harmful technologies, especially the ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risk posed by nature or by other technologies” (Bostrom 2014, p230).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Principle of epistemic deference ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true.  We should therefore defer to the superintelligence’s position whenever feasible” (Bostrom 2014, p226).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Q ====
+ 
+ 
+ === Quality superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A system that is at least as fast as a human mind and vastly qualitatively smarter” (Bostrom 2014, p56).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== R ====
+ 
+ 
+ === Recalcitrance ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;How difficult a system is to improve.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
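Bostrom (2014) relates recalcitrance to growth: the rate of change in intelligence equals optimization power divided by recalcitrance. A toy simulation of that relation, under the added assumptions that optimization power scales with the system's own intelligence (pure self-improvement) and that recalcitrance stays constant:

```python
def simulate_growth(intelligence, steps, dt=0.01):
    # Bostrom's relation: d(intelligence)/dt = optimization_power / recalcitrance.
    recalcitrance = 1.0  # assumption: constant difficulty of improvement
    for _ in range(steps):
        optimization_power = intelligence  # assumption: fully self-driven improvement
        intelligence += dt * optimization_power / recalcitrance
    return intelligence
```

Under these assumptions growth is exponential; rising recalcitrance would flatten the curve, and falling recalcitrance would sharpen it.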
+ 
+ 
+ === Recursive self-improvement ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The envisaged process of AI (perhaps a seed AI) iteratively improving itself.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Reinforcement learning approach to value learning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to value learning in which the AI is rewarded for behaviors that more closely approximate human values (Bostrom 2014, p188-9).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== S ====
+ 
+ 
+ === Second principal-agent problem ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The emerging problem of a developer wanting their AI to fulfill their wishes.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Seed AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A modest AI which can bootstrap into an impressive AI by improving its own architecture.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Singleton ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An agent that is internally coordinated and has no opponents.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Sovereign ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that acts autonomously in the world, in pursuit of potentially long range objectives (Bostrom 2014, p148).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Speed superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A system that can do all that a human intellect can do, but much faster” (Bostrom 2014, p53).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === State risk ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A risk that comes from being in a certain state, such that the amount of risk is a function of the time spent there. For example, the state of not having the technology to defend from asteroid impacts carries risk proportional to the time we spend in it.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Step risk ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A risk that comes from making a transition. Here the amount of risk is not a simple function of how long the transition takes.  For example, traversing a minefield is not safer if done more quickly.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Stunting ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A control method that consists of limiting the AI’s capabilities, for instance by limiting the AI’s access to information (Bostrom 2014, p135).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest (Bostrom 2014, p22).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== T ====
+ 
+ 
+ === Takeoff ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The event of the emergence of a superintelligence, often characterized by its speed: ‘slow takeoff’ takes decades or centuries, ‘moderate takeoff’ takes months or years and ‘fast takeoff’ takes minutes to days.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Technological completion conjecture ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;If scientific and technological development efforts do not cease, then all important basic capabilities that could be obtained through some possible technology will be obtained (Bostrom 2014, p127).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Technology coupling ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A predictable timing relationship between two technologies, such that hastening of the first technology will hasten the second, either because the second is a precursor or because it is a natural consequence (Bostrom 2014, p236-8). For example, brain emulation is plausibly coupled to ‘neuromorphic’ AI, because the understanding required to emulate a brain might allow one to more quickly create an AI on similar principles.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Tool AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that is not ‘like an agent’, but like a more flexible and capable version of contemporary software. Most notably perhaps, it is not goal-directed (Bostrom 2014, p151).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== U ====
+ 
+ 
+ === Utility function ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A mapping from states of the world to real numbers (‘utilities’), describing an entity’s degree of preference for different states of the world. Given the choice between two lotteries, the entity prefers the lottery with the higher ‘expected utility’: the sum of the utilities of the possible states, weighted by the probability of those states occurring.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
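The ‘expected utility’ comparison in the definition is mechanical; a minimal sketch with made-up states and utilities:

```python
def expected_utility(lottery, utility):
    # lottery: list of (probability, state) pairs; expected utility is the
    # probability-weighted sum of the utilities of the possible states.
    return sum(p * utility[state] for p, state in lottery)

utility = {"rain": -1.0, "sun": 2.0}       # made-up preferences
safe = [(1.0, "rain")]                     # certain rain
risky = [(0.5, "rain"), (0.5, "sun")]      # fair coin between the two states
# The entity prefers whichever lottery has the higher expected utility.
preferred = max(safe, risky, key=lambda l: expected_utility(l, utility))
```

Here the gamble wins because the possible upside outweighs the certain small loss; with a different utility assignment the same entity could prefer the safe option.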
+ 
+ 
+ ==== V ====
+ 
+ 
+ === Value learning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the value loading problem in which the AI learns the values that humans want it to pursue (Bostrom 2014, p207).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Value loading problem ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The problem of causing the AI to pursue human values (Bostrom 2014, p185).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== W ====
+ 
+ 
+ === Wise-Singleton Sustainability Threshold ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe (Bostrom 2014, p100).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Whole-brain emulation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Machine intelligence created by copying the computational structure of the human brain.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Word embedding ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A mapping of words to high-dimensional vectors, trained on a language task so that the arrangement of words in the vector space is meaningful. For instance, words near one another in the vector space are related, and similar relationships between different pairs of words correspond to similar vector offsets between them, so that e.g. if E(x) is the vector for the word ‘x’, then E(king) – E(queen) ≈ E(man) – E(woman). Word embeddings are explained in more detail &amp;lt;a href=&amp;quot;https://colah.github.io/posts/2014-07-NLP-RNNs-Representations/&amp;quot;&amp;gt;here&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
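The vector-offset behavior described above can be demonstrated with toy hand-made embeddings (real embeddings are learned and have hundreds of dimensions; these 2-d vectors, and the reading of the axes as ‘royalty’ and ‘gender’, are invented purely for illustration):

```python
def nearest(vec, embeddings):
    # Word whose embedding is closest to vec (squared Euclidean distance).
    return min(embeddings,
               key=lambda w: sum((a - b) ** 2 for a, b in zip(vec, embeddings[w])))

E = {"king": (1.0, 1.0), "queen": (1.0, 0.0),
     "man": (0.0, 1.0), "woman": (0.0, 0.0)}
# Analogy arithmetic: king - man + woman should land nearest to 'queen'.
target = tuple(k - m + w for k, m, w in zip(E["king"], E["man"], E["woman"]))
```

In these toy vectors the analogy lands on ‘queen’ exactly; in learned embeddings the result is only approximate, which is why the nearest-neighbor lookup is part of the procedure.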
+ 
+ 
+ ===== Notes =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Bostrom, Nick. &amp;lt;em&amp;gt;Superintelligence: Paths, Dangers, Strategies&amp;lt;/em&amp;gt;. 1st edition. Oxford: Oxford University Press, 2014.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-358&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Nielsen, Michael A. “Neural Networks and Deep Learning,” 2015. &amp;lt;a href=&amp;quot;http://neuralnetworksanddeeplearning.com/&amp;quot;&amp;gt;http://neuralnetworksanddeeplearning.com&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-358&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,649 @@
+ ====== Glossary of AI Risk Terminology and common AI terms ======
+ 
+ // Published 30 October, 2015; last updated 21 January, 2022 //
+ 
+ 
+ ===== Terms =====
+ 
+ 
+ ==== A ====
+ 
+ 
+ === AI timeline ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An expectation about how much time will elapse before important AI events, especially the advent of &amp;lt;em&amp;gt;&amp;lt;a href=&amp;quot;/doku.php?id=clarifying_concepts:human-level_ai&amp;quot;&amp;gt;human-level AI&amp;lt;/a&amp;gt;&amp;lt;/em&amp;gt; or a similar milestone. The term can also refer to the actual periods of time (which are not yet known), rather than an expectation about them.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Artificial General Intelligence (also, AGI) ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Skill at performing intellectual tasks across at least the range of variety that a human being is capable of. As opposed to skill at certain specific tasks (‘narrow’ AI). That is, synonymous with the more ambiguous &amp;lt;em&amp;gt;Human-Level AI&amp;lt;/em&amp;gt; for some meanings of the latter&amp;lt;em&amp;gt;.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Artificial Intelligence (also, AI) ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Behavior characteristic of human minds exhibited by man-made machines, and also the area of research focused on developing machines with such behavior. Sometimes used informally to refer to &amp;lt;em&amp;gt;human-level AI&amp;lt;/em&amp;gt; or another strong form of AI not yet developed.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Associative value accretion ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to value learning in which the AI acquires values using some machinery for synthesizing appropriate new values as it interacts with its environment, inspired by the way humans appear to acquire values (Bostrom 2014, p189-190)&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-358&amp;quot; title=&amp;quot;Bostrom, Nick. &amp;amp;lt;em&amp;amp;gt;Superintelligence: Paths, Dangers, Strategies&amp;amp;lt;/em&amp;amp;gt;. 1st edition. Oxford: Oxford University Press, 2014.&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Anthropic capture ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized control method in which the AI thinks it might be in a simulation, and so tries to behave in ways that will be rewarded by its simulators (Bostrom 2014, p134).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Anthropic reasoning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Reaching beliefs (posterior probabilities) over states of the world and your location in it, from priors over possible physical worlds (without your location specified) and evidence about your own situation. For an example where this is controversial, see &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Sleeping_Beauty_problem&amp;quot;&amp;gt;The Sleeping Beauty Problem&amp;lt;/a&amp;gt;. For more on the topic and its relation to AI, see &amp;lt;a href=&amp;quot;https://meteuphoric.wordpress.com/anthropic-principles/&amp;quot;&amp;gt;here&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Augmentation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to obtaining a superintelligence with desirable motives that consists of beginning with a creature with desirable motives (eg, a human), then making it smarter, instead of designing good motives from scratch (Bostrom 2014, p142).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== B ====
+ 
+ 
+ === Backpropagation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A fast method of computing the derivative of cost with respect to different parameters in a network, allowing for training neural nets through gradient descent. See &amp;lt;a href=&amp;quot;http://neuralnetworksanddeeplearning.com/chap2.html&amp;quot;&amp;gt;Neural Networks and Deep Learning&amp;lt;/a&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-358&amp;quot; title=&amp;#039;Nielsen, Michael A. “Neural Networks and Deep Learning,” 2015. &amp;amp;lt;a href=&amp;quot;http://neuralnetworksanddeeplearning.com/&amp;quot;&amp;amp;gt;http://neuralnetworksanddeeplearning.com&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; for a full explanation.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Boxing ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A control method that consists of constructing the AI’s environment so as to minimize interaction between the AI and the outside world (Bostrom 2014, p129).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== C ====
+ 
+ 
+ === Capability control methods ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Strategies for avoiding undesirable outcomes by limiting what an AI can do (Bostrom 2014, p129).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Cognitive enhancement ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Improvements to an agent’s mental abilities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Collective superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A system composed of a large number of smaller intellects such that the system’s overall performance across many very general domains vastly outstrips that of any current cognitive system” (Bostrom 2014, p54).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Computation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A sequence of mechanical operations intended to shed light on something other than this mechanical process itself, through an established relationship between the process and the object of interest.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === The common good principle ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals” (Bostrom 2014, p254).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Crucial consideration ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An idea with the potential to change our views substantially, such as by reversing the sign of the desirability of important interventions.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== D ====
+ 
+ 
+ === Decisive strategic advantage ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Strategic superiority (by technology or other means) sufficient to enable an agent to unilaterally control most of the resources of the universe.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Direct specification ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the control problem in which the programmers figure out what humans value, and code it into the AI (Bostrom 2014, p139-40).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Domesticity ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the control problem in which the AI is given goals that limit the range of things it wants to interfere with (Bostrom 2014, p140-1).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== E ====
+ 
+ 
+ === Emulation modulation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Starting with brain emulations with approximately normal human motivations (see ‘Augmentation’), and modifying their motivations using drugs or digital drug analogs.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Evolutionary selection approach to value learning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to the value learning problem which obtains an AI with desirable values by iterative selection, the same way evolutionary selection produced humans  (Bostrom 2014, p187-8).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Existential risk ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Risk of an adverse outcome that would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential &amp;lt;a href=&amp;quot;http://www.nickbostrom.com/existential/risks.html&amp;quot;&amp;gt;(Bostrom 2002)&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== F ====
+ 
+ 
+ === Feature ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A dimension in the vector space of activations in a single layer of a neural network (i.e. a neuron activation or linear combination of activations of different neurons).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === First principal-agent problem ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The well-known problem faced by a sponsor wanting an employee to fulfill their wishes (usually called ‘the principal agent problem’).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== G ====
+ 
+ 
+ === Genie ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that carries out a high level command, then waits for another (Bostrom 2014, p148).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== H ====
+ 
+ 
+ === Hardware overhang ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A situation where large amounts of hardware being used for other purposes become available for AI, usually posited to occur when AI reaches human-level capabilities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Human-level AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that matches human capabilities in virtually every domain of interest.  Note that this term is used ambiguously; see &amp;lt;a href=&amp;quot;/doku.php?id=clarifying_concepts:human-level_ai&amp;quot;&amp;gt;our page on human-level AI&amp;lt;/a&amp;gt;.  &amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Human-level hardware ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Hardware that matches the information-processing ability of the human brain.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Human-level software ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Software that matches the algorithmic efficiency of the human brain, for doing the tasks the human brain does.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== I ====
+ 
+ 
+ === Impersonal perspective ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The view that one should act in the best interests of everyone, including those who may be brought into existence by one’s choices (see Person-affecting perspective).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Incentive methods ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Strategies for controlling an AI that consist of setting up the AI’s environment such that it is in the AI’s interest to cooperate; e.g. a social environment with punishment or social repercussions often achieves this for contemporary agents (Bostrom 2014, p131).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Incentive wrapping ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Provisions in the goals given to an AI that allocate extra rewards to those who helped bring the AI about  (Bostrom 2014, p222-3).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Indirect normativity ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the control problem in which we specify a way to specify what we value, instead of specifying what we value directly (Bostrom 2014, p141-2).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Instrumental convergence thesis ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We can identify ‘convergent instrumental values’: subgoals that are useful for a wide range of more fundamental goals, and in a wide range of situations (Bostrom 2014, p109).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Intelligence explosion ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized event in which an AI rapidly improves from ‘relatively modest’ to superhuman level (usually imagined to be as a result of recursive self-improvement).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== M ====
+ 
+ 
+ === Macrostructural development accelerator ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An imagined lever used in thought experiments which accelerates or slows the large scale features of history (e.g. technological change, geopolitical dynamics) while leaving the small scale features the same.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Mind crime ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The mistreatment of morally relevant computations.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Moore’s Law ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Any of several different consistent, many-decade patterns of exponential improvement that have been observed in digital technologies. The classic version concerns the number of transistors in a dense integrated circuit, which was observed to be doubling around every year when the ‘law’ was formulated in &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Moore%27s_law&amp;quot;&amp;gt;1965&amp;lt;/a&amp;gt;. &amp;lt;a href=&amp;quot;/doku.php?id=featured_articles:glossary_of_ai_risk_terminology_and_common_ai_terms#Price-Performance_Moores_Law&amp;quot;&amp;gt;Price-Performance Moore’s Law&amp;lt;/a&amp;gt; is often relevant to AI forecasting.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
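The classic doubling pattern can be sketched numerically (the two-year doubling time below is an illustrative assumption; the observed period has varied between roughly one and two years):

```python
# Exponential growth as in the transistor-count formulation of Moore's Law.
def doublings(years, doubling_time_years=2.0):
    """Growth factor after `years`, given a fixed doubling time."""
    return 2 ** (years / doubling_time_years)

# With a 2-year doubling time, transistor counts grow 32x over a decade.
print(doublings(10))  # 32.0
```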
+ 
+ 
+ === Moral rightness (MR) AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI which seeks to do what is morally right.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Motivational scaffolding ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to value learning in which the seed AI is given simple goals, and these goals are replaced with more complex ones once it has developed sufficiently sophisticated representational structure (Bostrom 2014, p191-192).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Multipolar outcome ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A situation after the arrival of superintelligence in which no single agent controls most of the resources.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== O ====
+ 
+ 
+ === Optimization power ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The strength of a process’s ability to improve systems.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Oracle ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that only answers questions (Bostrom 2014, p145).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Orthogonality thesis ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Intelligence and final goals are orthogonal: more or less any level of intelligence could in principle be combined with more or less any final goal.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== P ====
+ 
+ 
+ === Person-affecting perspective ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The view that one should act in the best interests of everyone who already exists, or who will exist independent of one’s choices (see Impersonal perspective).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Perverse instantiation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A solution to a posed goal (e.g. make humans smile) that is destructive in unforeseen ways (e.g. paralyzing face muscles in the smiling position).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Price-Performance Moore’s Law ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:trends_in_the_cost_of_computing&amp;quot;&amp;gt;observed pattern&amp;lt;/a&amp;gt; of relatively consistent, long term, exponential price decline for computation.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Principle of differential technological development ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“Retard the development of dangerous and harmful technologies, especially the ones that raise the level of existential risk; and accelerate the development of beneficial technologies, especially those that reduce the existential risk posed by nature or by other technologies” (Bostrom 2014, p230).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Principle of epistemic deference ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s position whenever feasible” (Bostrom 2014, p226).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Q ====
+ 
+ 
+ === Quality superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A system that is at least as fast as a human mind and vastly qualitatively smarter” (Bostrom 2014, p56).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== R ====
+ 
+ 
+ === Recalcitrance ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;How difficult a system is to improve.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Recursive self-improvement ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The envisaged process of AI (perhaps a seed AI) iteratively improving itself.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Reinforcement learning approach to value learning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A hypothesized approach to value learning in which the AI is rewarded for behaviors that more closely approximate human values (Bostrom 2014, p188-9).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== S ====
+ 
+ 
+ === Second principal-agent problem ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The emerging problem of a developer wanting their AI to fulfill their wishes.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Seed AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A modest AI which can bootstrap into an impressive AI by improving its own architecture.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Singleton ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An agent that is internally coordinated and has no opponents.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Sovereign ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that acts autonomously in the world, in pursuit of potentially long range objectives (Bostrom 2014, p148).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Speed superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“A system that can do all that a human intellect can do, but much faster” (Bostrom 2014, p53).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === State risk ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A risk that comes from being in a certain state, such that the amount of risk is a function of the time spent there. For example, the state of not having the technology to defend from asteroid impacts carries risk proportional to the time we spend in it.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Step risk ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A risk that comes from making a transition. Here the amount of risk is not a simple function of how long the transition takes.  For example, traversing a minefield is not safer if done more quickly.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
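The contrast can be sketched with a toy model (hazard rates below are made-up numbers): under a constant hazard rate, state risk accumulates with exposure time, whereas a step risk is a fixed cost per transition.

```python
import math

def state_survival(hazard_rate_per_year, years):
    """Probability of surviving a state with constant hazard rate."""
    return math.exp(-hazard_rate_per_year * years)

def step_survival(transition_risk):
    """Probability of surviving a one-off transition; time spent is irrelevant."""
    return 1.0 - transition_risk

# Doubling time spent in the state compounds the risk...
print(state_survival(0.001, 100), state_survival(0.001, 200))
# ...while the step risk is the same however fast the transition happens.
print(step_survival(0.05))  # 0.95
```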
+ 
+ 
+ === Stunting ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A control method that consists of limiting the AI’s capabilities, for instance by limiting the AI’s access to information (Bostrom 2014, p135).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Superintelligence ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest (Bostrom 2014, p22).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== T ====
+ 
+ 
+ === Takeoff ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The event of the emergence of a superintelligence, often characterized by its speed: ‘slow takeoff’ takes decades or centuries, ‘moderate takeoff’ takes months or years, and ‘fast takeoff’ takes minutes to days.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Technological completion conjecture ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;If scientific and technological development efforts do not cease, then all important basic capabilities that could be obtained through some possible technology will be obtained (Bostrom 2014, p127).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Technology coupling ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A predictable timing relationship between two technologies, such that hastening the first technology will hasten the second, either because the first is a precursor of the second or because the second is a natural consequence of the first (Bostrom 2014, p236-8). For example, brain emulation is plausibly coupled to ‘neuromorphic’ AI, because the understanding required to emulate a brain might allow one to more quickly create an AI on similar principles.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Tool AI ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An AI that is not ‘like an agent’, but like a more flexible and capable version of contemporary software. Most notably perhaps, it is not goal-directed (Bostrom 2014, p151).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== U ====
+ 
+ 
+ === Utility function ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A mapping from states of the world to real numbers (‘utilities’), describing an entity’s degree of preference for different states of the world. Given the choice between two lotteries, the entity prefers the lottery with the highest ‘expected utility’, which is to say, sum of utilities of possible states weighted by the probability of those states occurring.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
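A minimal sketch of this definition, with made-up states and utilities: represent a lottery as a mapping from world-states to probabilities, and compute its expected utility as the probability-weighted sum.

```python
# Toy utility function: states of the world mapped to real-valued utilities.
utility = {"sunny": 10.0, "rainy": 2.0}

def expected_utility(lottery, utility):
    """Sum of utilities of possible states, weighted by their probabilities."""
    return sum(p * utility[state] for state, p in lottery.items())

lottery_a = {"sunny": 0.5, "rainy": 0.5}
lottery_b = {"sunny": 0.9, "rainy": 0.1}

# The entity prefers the lottery with the higher expected utility: here, B.
print(expected_utility(lottery_a, utility))  # 6.0
print(expected_utility(lottery_b, utility))  # 9.2
```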
+ 
+ 
+ ==== V ====
+ 
+ 
+ === Value learning ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;An approach to the value loading problem in which the AI learns the values that humans want it to pursue (Bostrom 2014, p207).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Value loading problem ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The problem of causing the AI to pursue human values (Bostrom 2014, p185).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== W ====
+ 
+ 
+ === Wise-Singleton Sustainability Threshold ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A capability set exceeds the wise-singleton threshold if and only if a patient and existential risk-savvy system with that capability set would, if it faced no intelligent opposition or competition, be able to colonize and re-engineer a large part of the accessible universe (Bostrom 2014, p100).&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Whole-brain emulation ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Machine intelligence created by copying the computational structure of the human brain.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Word embedding ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A mapping of words to high-dimensional vectors that has been trained to be useful in a language task, such that the arrangement of words in the vector space is meaningful. For instance, words near one another in the vector space are related, and similar relationships between different pairs of words correspond to similar vectors between them, so that e.g. if E(x) is the vector for the word ‘x’, then E(king) – E(queen) ≈ E(man) – E(woman). Word embeddings are explained in more detail &amp;lt;a href=&amp;quot;https://colah.github.io/posts/2014-07-NLP-RNNs-Representations/&amp;quot;&amp;gt;here&amp;lt;/a&amp;gt;.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
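The analogy relation can be illustrated with hand-made toy vectors (not a trained embedding; the two dimensions are assumptions chosen to make the example work):

```python
import numpy as np

# Toy 2-d "embedding": dimension 0 ~ royalty, dimension 1 ~ maleness.
E = {
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, 0.0]),
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, 0.0]),
}

# E(king) - E(man) + E(woman) should land near E(queen).
analogy = E["king"] - E["man"] + E["woman"]
nearest = min(E, key=lambda w: np.linalg.norm(E[w] - analogy))
print(nearest)  # queen
```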
+ 
+ 
+ ===== Notes =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Bostrom, Nick. &amp;lt;em&amp;gt;Superintelligence: Paths, Dangers, Strategies&amp;lt;/em&amp;gt;. 1st edition. Oxford: Oxford University Press, 2014.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-358&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-358&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Nielsen, Michael A. “Neural Networks and Deep Learning,” 2015. &amp;lt;a href=&amp;quot;http://neuralnetworksanddeeplearning.com/&amp;quot;&amp;gt;http://neuralnetworksanddeeplearning.com&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-358&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Likelihood of discontinuous progress around the development of AGI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/likelihood_of_discontinuous_progress_around_the_development_of_agi?rev=1695232523&amp;do=diff"/>
        <published>2023-09-20T17:55:23+00:00</published>
        <updated>2023-09-20T17:55:23+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/likelihood_of_discontinuous_progress_around_the_development_of_agi?rev=1695232523&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -2,9 +2,9 @@
  
  // Published 23 February, 2018; last updated 26 May, 2020 //
  
  &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;We aren’t convinced by any of the arguments we’ve seen to expect large discontinuity in AI progress above the extremely low base rate for all technologies. However this topic is controversial, and many thinkers on the topic disagree with us, so we consider this an open question.&amp;lt;/p&amp;gt;
+ &amp;lt;p&amp;gt;We do not know of seemingly compelling arguments to expect large discontinuity in AI progress above the extremely low base rate for all technologies. However this topic is controversial, and many thinkers on the topic disagree with us, so we consider this an open question.&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;
  
  
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -2,9 +2,9 @@
  
  // Published 23 February, 2018; last updated 26 May, 2020 //
  
  &amp;lt;HTML&amp;gt;
- &amp;lt;p&amp;gt;We aren’t convinced by any of the arguments we’ve seen to expect large discontinuity in AI progress above the extremely low base rate for all technologies. However this topic is controversial, and many thinkers on the topic disagree with us, so we consider this an open question.&amp;lt;/p&amp;gt;
+ &amp;lt;p&amp;gt;We do not know of seemingly compelling arguments to expect large discontinuity in AI progress above the extremely low base rate for all technologies. However this topic is controversial, and many thinkers on the topic disagree with us, so we consider this an open question.&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;
  
  
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>List of multipolar research projects</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/list_of_multipolar_research_projects?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/list_of_multipolar_research_projects?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,268 @@
+ ====== List of multipolar research projects ======
+ 
+ // Published 11 February, 2015; last updated 10 December, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This list currently consists of research projects suggested at the &amp;lt;a href=&amp;quot;http://aiimpacts.org/event-multipolar-ai-workshop-with-robin-hanson/&amp;quot; title=&amp;quot;Event: Multipolar AI workshop with Robin Hanson&amp;quot;&amp;gt;Multipolar AI workshop&amp;lt;/a&amp;gt; we held on January 26, 2015.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Relatively concrete projects are marked [concrete]. These are more likely to already include specific questions to answer and feasible methods to answer them with. Other ‘projects’ are more like open questions, or broad directions for inquiry.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Projects are divided into three sections:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Paths to multipolar scenarios&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;What would happen in a multipolar scenario?&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Safety in a multipolar scenario&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Order is not otherwise relevant. The list is an inclusive collection of the topics suggested at the workshop, rather than a prioritized selection from a larger list.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Luke Muehlhauser’s &amp;lt;a href=&amp;quot;http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/&amp;quot;&amp;gt;list of ‘superintelligence strategy’ research questions&amp;lt;/a&amp;gt; contains further suggestions.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== List =====
+ 
+ 
+ ==== Paths to multipolar scenarios ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.1 If we assume that AI software is similar to other software, what can we infer from observing contemporary software development? [concrete] &amp;lt;/strong&amp;gt;For instance, is progress in software performance generally smooth or jumpy? What is the distribution? What are typical degrees of concentration among developers? What are typical modes of competition? How far ahead does the leading team tend to be to their competitors? How often does the lead change? How much does a lead in a subsystem produce a lead overall? How much do non-software factors influence who has the lead? How likely is a large player like Google—with its pre-existing infrastructure—to be the frontrunner in a random new area that they decide to compete in?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A large part of this project would be collecting what is known about contemporary software development. This information would provide one view on how AI progress might plausibly unfold. Combined with several such views, this might inform predictions on issues like abruptness, competition and involved players.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2 If the military is involved in AI development, how would that affect our predictions? [concrete] &amp;lt;/strong&amp;gt;This is a variation on 1.1, and would similarly involve a large component of reviewing the nature of contemporary military projects.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3 If industry were to be largely responsible for AI development, how would that affect our predictions? [concrete] &amp;lt;/strong&amp;gt;This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary industrial projects.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.4&amp;lt;/strong&amp;gt; &amp;lt;strong&amp;gt;If academia were to be largely responsible for AI development, how would that affect our predictions? [concrete] &amp;lt;/strong&amp;gt;This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary academic projects.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.5 Survey AI experts on the likelihood of AI emerging in the military, business or academia, and on the likely size of a successful AI project.  [concrete]&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.6 Identify considerations that might tip us between multipolar and unipolar scenarios. &amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.7 To what extent will AGI progress be driven by developing significantly new ideas? &amp;lt;/strong&amp;gt;1.1 may bear on this. It could be approached in other ways, for instance asking AI researchers what they expect.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.8 Run prediction markets on near-term questions, such as rates of AI progress, which inform our long-run expectations. &amp;lt;/strong&amp;gt;[concrete] &amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.9 Collect past records of ‘lumpiness’ of AI success. [concrete]&amp;lt;/strong&amp;gt; That is, variation in progress over time. This would inform expectations of future lumpiness, and thus potential for single projects to gain a substantial advantage.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== What would happen in a multipolar scenario? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.1 To what extent do values prevalent in the near-term affect the long run, in a competitive scenario? &amp;lt;/strong&amp;gt;One could consider the role of values over history so far, or examine the ways in which the role of values may change in the future. One could consider the degree of instrumental convergence between actors (e.g. firms) today, and ask how that affects long-term outcomes. One might also consider whether non-values mental features might become locked in in a way that produces similar outcomes to particular values being influential, e.g. priors or epistemological methods that make a particular religion more likely.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.2 What other factors in an initial scenario are likely to have long-lasting effects?&amp;lt;/strong&amp;gt; For instance social institutions, standards, and locations for cities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.3 What would AIs value in a multipolar scenario? &amp;lt;/strong&amp;gt;We can consider a range of factors that might influence AI values:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The nature of the transition to AI&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Prevailing institutions&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The extent to which AI values become static, as compared to changing human values&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;What values humans want AIs to have&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Competitive dynamics&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;There is a common view that a multipolar scenario would be better in the long run than a hegemonic ‘unfriendly AI’. This project would inform that comparison.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.4 What are the prospects for human capital-holders? &amp;lt;/strong&amp;gt;In a simple model, humans who own capital might become very wealthy during a transition to AI. On a classical economic picture, this would be a critical way for humans to influence the future. Is this picture plausible? Evaluate the considerations.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;What are the implications of capital holders doing no intellectual work themselves?&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;[concrete]&amp;lt;/strong&amp;gt; What does the existing literature on principal-agent problems suggest about multipolar AI scenarios?&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;[concrete]&amp;lt;/strong&amp;gt; Could humans maintain investments for significant periods of their lives, if during that time aeons of subjective time pass for faster-moving populations? (i.e. is it plausible to expect to hold assets through millions of years of human history?) Investigate this via data on past expropriations.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.5 Identify risks distinctive to a multipolar scenario, or which are much more serious in a multipolar scenario. &amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;For instance:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Evolutionary dynamics bring an outcome that nobody desired initially&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The AIs are not well integrated into human society, and consequently cause or allow destruction to human society&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The AIs—integrated or not—have different values, and most of the resources end up being devoted to those values&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.6 Choose a specific multipolar scenario and try to predict its features in detail. [concrete] &amp;lt;/strong&amp;gt;Base this on the basic changes we know would occur (e.g. minds could be copied like software), and our best understanding of social science.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Specific instances:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Brain emulations (Robin Hanson is working on this in an upcoming book)&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Brain emulations, without the assumption that software minds are opaque&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;One can buy maximally efficient software for anything one wants; everything else is the same&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;AI is much like contemporary software (see 1.1).&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.7 How would multipolar AI change the nature and severity of violent conflict?&amp;lt;/strong&amp;gt; For instance, conflict between states.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.8 Investigate the potential for AI-enforced rights. &amp;lt;/strong&amp;gt;Think about how to enforce property rights in a multipolar scenario, given advanced artificial intelligence to do it with, and the opportunity to prepare ahead of time. Can you create programs that just enforce deals between two parties, but do nothing else? If you create AI with this stable motivational structure, possessed by many parties, how does this change the way that agents interact? How could such a system be designed?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.9 What is the future of democracy in such a scenario? &amp;lt;/strong&amp;gt;In a world where resources can rapidly and cheaply be turned into agents, the existing assignment of a vote per person may be destructive and unstable.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.10 How does the lumpiness of economic outcomes vary as a function of the lumpiness of origins?&amp;lt;/strong&amp;gt; For instance, if one team creates brain emulations years before others, would that group have and retain extreme influence?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.11 What externalities can we foresee in computer security? &amp;lt;/strong&amp;gt;That is, will people invest less (or more) in security than is socially optimal?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.12 What externalities can we foresee in AI safety generally?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.13 To what extent can artificial agents make more effective commitments, or more effectively monitor commitments, than humans?&amp;lt;/strong&amp;gt; How does this change competitive dynamics? What proofs of properties of one’s source code may be available in the future?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Safety in a multipolar scenario ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.1 Assess the applicability of general AI safety insights to multipolar scenarios. [concrete]&amp;lt;/strong&amp;gt; How useful are capability control methods, such as boxing, stunting, incentives, or tripwires, in a multipolar scenario? How useful are motivation selection methods, such as direct specification, domesticity, indirect normativity, or augmentation, in a multipolar scenario?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.2 Would selective pressures strongly favor the existence of goal-directed agents, in a multipolar scenario where a variety of AI designs are feasible?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.3 Develop a good model for the existing computer security phenomenon where nobody builds secure systems, though they can.&amp;lt;/strong&amp;gt; &amp;lt;strong&amp;gt;[concrete]&amp;lt;/strong&amp;gt; Model the long-run costs of secure and insecure systems, given distributions of attacker sophistication and possibility for incremental system improvement. Determine the likely situation in various future scenarios, especially where computer security is particularly important.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.4 Do paradigms developed for nuclear security and biological weapons apply to AI in a multi-polar scenario? [concrete]&amp;lt;/strong&amp;gt; For instance, could similar control and detection systems be used?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.5 What do the features of computer security systems tell us about how multipolar agents might compete?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.8 What policies could help create more secure computer systems? &amp;lt;/strong&amp;gt;For instance, the onus being on owners of systems to secure them, rather than on potential attackers to avoid attacking.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.9 What innovations (either in AI or coinciding technologies) might reduce principal-agent problems?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.10 Apply ‘reliability theory’ to the problem of manufacturing trustworthy hardware. &amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.11 How can we transition in an economically viable way to hardware that we can trust is uncorrupted? &amp;lt;/strong&amp;gt;At present, we must assume that the hardware is uncorrupted upon purchase, but this may not be sufficient in the long run.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,268 @@
+ ====== List of multipolar research projects ======
+ 
+ // Published 11 February, 2015; last updated 10 December, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This list currently consists of research projects suggested at the &amp;lt;a href=&amp;quot;http://aiimpacts.org/event-multipolar-ai-workshop-with-robin-hanson/&amp;quot; title=&amp;quot;Event: Multipolar AI workshop with Robin Hanson&amp;quot;&amp;gt;Multipolar AI workshop&amp;lt;/a&amp;gt; we held on January 26 2015.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Relatively concrete projects are marked [concrete]. These are more likely to already include specific questions to answer and feasible methods to answer them with. Other ‘projects’ are more like open questions, or broad directions for inquiry.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Projects are divided into three sections:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Paths to multipolar scenarios&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;What would happen in a multipolar scenario?&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Safety in a multipolar scenario&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Order is not otherwise relevant. The list is an inclusive collection of the topics suggested at the workshop, rather than a prioritized selection from a larger list.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Luke Muehlhauser’s &amp;lt;a href=&amp;quot;http://lukemuehlhauser.com/some-studies-which-could-improve-our-strategic-picture-of-superintelligence/&amp;quot;&amp;gt;list of ‘superintelligence strategy’ research questions&amp;lt;/a&amp;gt; contains further suggestions.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== List =====
+ 
+ 
+ ==== Paths to multipolar scenarios ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.1 If we assume that AI software is similar to other software, what can we infer from observing contemporary software development? [concrete] &amp;lt;/strong&amp;gt;For instance, is progress in software performance generally smooth or jumpy? What is the distribution? What are typical degrees of concentration among developers? What are typical modes of competition? How far ahead of its competitors does the leading team tend to be? How often does the lead change? How much does a lead in a subsystem produce a lead overall? How much do non-software factors influence who has the lead? How likely is a large player like Google—with its pre-existing infrastructure—to be the frontrunner in a random new area that it decides to compete in?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;A large part of this project would be collecting what is known about contemporary software development. This information would provide one view on how AI progress might plausibly unfold. Combined with several such views, this might inform predictions on issues like abruptness, competition and involved players.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2 If the military is involved in AI development, how would that affect our predictions? [concrete] &amp;lt;/strong&amp;gt;This is a variation on 1.1, and would similarly involve a large component of reviewing the nature of contemporary military projects.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3 If industry were to be largely responsible for AI development, how would that affect our predictions? [concrete] &amp;lt;/strong&amp;gt;This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary industrial projects.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.4&amp;lt;/strong&amp;gt; &amp;lt;strong&amp;gt;If academia were to be largely responsible for AI development, how would that affect our predictions? [concrete] &amp;lt;/strong&amp;gt;This is a variation on 1.2, and would similarly involve a large component of reviewing the nature of contemporary academic projects.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.5 Survey AI experts on the likelihood of AI emerging in the military, business or academia, and on the likely size of a successful AI project.  [concrete]&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.6 Identify considerations that might tip us between multipolar and unipolar scenarios. &amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.7 To what extent will AGI progress be driven by developing significantly new ideas? &amp;lt;/strong&amp;gt;1.1 may bear on this. It could be approached in other ways, for instance asking AI researchers what they expect.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.8 Run prediction markets on near-term questions, such as rates of AI progress, which inform our long-run expectations. &amp;lt;/strong&amp;gt;[concrete] &amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.9 Collect past records of ‘lumpiness’ of AI success. [concrete]&amp;lt;/strong&amp;gt; That is, variation in progress over time. This would inform expectations of future lumpiness, and thus potential for single projects to gain a substantial advantage.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== What would happen in a multipolar scenario? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.1 To what extent do values prevalent in the near-term affect the long run, in a competitive scenario? &amp;lt;/strong&amp;gt;One could consider the role of values over history so far, or examine the ways in which the role of values may change in the future. One could consider the degree of instrumental convergence between actors (e.g. firms) today, and ask how that affects long-term outcomes. One might also consider whether non-values mental features might become locked in in a way that produces similar outcomes to particular values being influential, e.g. priors or epistemological methods that make a particular religion more likely.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.2 What other factors in an initial scenario are likely to have long-lasting effects?&amp;lt;/strong&amp;gt; For instance social institutions, standards, and locations for cities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.3 What would AIs value in a multipolar scenario? &amp;lt;/strong&amp;gt;We can consider a range of factors that might influence AI values:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The nature of the transition to AI&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Prevailing institutions&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The extent to which AI values become static, as compared to changing human values&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;What values humans want AIs to have&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Competitive dynamics&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;There is a common view that a multipolar scenario would be better in the long run than a hegemonic ‘unfriendly AI’. This project would inform that comparison.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.4 What are the prospects for human capital-holders? &amp;lt;/strong&amp;gt;In a simple model, humans who own capital might become very wealthy during a transition to AI. On a classical economic picture, this would be a critical way for humans to influence the future. Is this picture plausible? Evaluate the considerations.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;What are the implications of capital holders doing no intellectual work themselves?&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;[concrete]&amp;lt;/strong&amp;gt; What does the existing literature on principal-agent problems suggest about multipolar AI scenarios?&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;[concrete]&amp;lt;/strong&amp;gt; Could humans maintain investments for significant periods of their lives, if during that time aeons of subjective time pass for faster moving populations? (i.e. is it plausible to expect to hold assets through millions of years of human history?) Investigate this via data on past expropriations.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.5 Identify risks distinctive to a multipolar scenario, or which are much more serious in a multipolar scenario. &amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;For instance:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Evolutionary dynamics bring an outcome that nobody desired initially&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The AIs are not well integrated into human society, and consequently cause or allow destruction to human society&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;The AIs—integrated or not—have different values, and most of the resources end up being devoted to those values&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.6 Choose a specific multipolar scenario and try to predict its features in detail. [concrete] &amp;lt;/strong&amp;gt;Base this on the basic changes we know would occur (e.g. minds could be copied like software), and our best understanding of social science.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Specific instances:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Brain emulations (Robin Hanson is working on this in an upcoming book)&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Brain emulations, without the assumption that software minds are opaque&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;One can buy maximally efficient software for anything one wants; everything else is the same&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;AI is much like contemporary software (see 1.1).&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.7 How would multipolar AI change the nature and severity of violent conflict?&amp;lt;/strong&amp;gt; For instance, conflict between states.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.8 Investigate the potential for AI-enforced rights. &amp;lt;/strong&amp;gt;Think about how to enforce property rights in a multipolar scenario, given advanced artificial intelligence to do it with, and the opportunity to prepare ahead of time. &amp;lt;span style=&amp;quot;color: #444444; line-height: 1.7;&amp;quot;&amp;gt;Can you create programs that just enforce deals between two parties, but do nothing else? &amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;color: #444444; line-height: 1.7;&amp;quot;&amp;gt;If you create AI with this stable motivational structure, possessed by many parties, how does this change the way that agents interact? &amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;color: #444444; line-height: 1.7;&amp;quot;&amp;gt;How could such a system be designed?&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.9 What is the future of democracy in such a scenario? &amp;lt;/strong&amp;gt;In a world where resources can rapidly and cheaply be turned into agents, the existing assignment of a vote per person may be destructive and unstable.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.10 How does the lumpiness of economic outcomes vary as a function of the lumpiness of origins?&amp;lt;/strong&amp;gt; For instance, if one team creates brain emulations years before others, would that group have and retain extreme influence?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.11 What externalities can we foresee in computer security? &amp;lt;/strong&amp;gt;That is, will people invest less (or more) in security than is socially optimal?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.12 What externalities can we foresee in AI safety generally?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;2.13 To what extent can artificial agents make more effective commitments, or more effectively monitor commitments, than humans?&amp;lt;/strong&amp;gt; How does this change competitive dynamics? What proofs of properties of one’s source code may be available in the future?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Safety in a multipolar scenario ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.1 Assess the applicability of general AI safety insights to multipolar scenarios. [concrete]&amp;lt;/strong&amp;gt; How useful are capability control methods, such as boxing, stunting, incentives, or tripwires, in a multipolar scenario? How useful are motivation selection methods, such as direct specification, domesticity, indirect normativity, or augmentation, in a multipolar scenario?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.2 Would selective pressures strongly favor the existence of goal-directed agents, in a multipolar scenario where a variety of AI designs are feasible?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.3 Develop a good model for the existing computer security phenomenon where nobody builds secure systems, though they can.&amp;lt;/strong&amp;gt; &amp;lt;strong&amp;gt;[concrete]&amp;lt;/strong&amp;gt; Model the long-run costs of secure and insecure systems, given distributions of attacker sophistication and possibility for incremental system improvement. Determine the likely situation in various future scenarios, especially where computer security is particularly important.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.4 Do paradigms developed for nuclear security and biological weapons apply to AI in a multi-polar scenario? [concrete]&amp;lt;/strong&amp;gt; For instance, could similar control and detection systems be used?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.5 What do the features of computer security systems tell us about how multipolar agents might compete?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.8 What policies could help create more secure computer systems? &amp;lt;/strong&amp;gt;For instance, the onus being on owners of systems to secure them, rather than on potential attackers to avoid attacking.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.9 What innovations (either in AI or coinciding technologies) might reduce principal-agent problems?&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.10 Apply ‘reliability theory’ to the problem of manufacturing trustworthy hardware. &amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;3.11 How can we transition in an economically viable way to hardware that we can trust is uncorrupted? &amp;lt;/strong&amp;gt;At present, we must assume that the hardware is uncorrupted upon purchase, but this may not be sufficient in the long run.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Precedents for economic n-year doubling before 4n-year doubling</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/precedents_for_economic_n-year_doubling_before_4n-year_doubling?rev=1695747122&amp;do=diff"/>
        <published>2023-09-26T16:52:02+00:00</published>
        <updated>2023-09-26T16:52:02+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/precedents_for_economic_n-year_doubling_before_4n-year_doubling?rev=1695747122&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -70,9 +70,9 @@
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;ul&amp;gt;
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Between 4,000 BC and 3,000 BC, GWP doubled in 1,000 years, yet it had never before doubled in as few as 4000 years&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Between 10,000 BC and 4,000 BC, GWP doubled in 6,000 years, yet there is no record of it doubling earlier in as few as 24,000 years. The records at that point are fairly sparse, so this is less clear, but it seems unlikely that there was a doubling in 24,000 years.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-2406&amp;quot; title=&amp;quot;Toward the end of the period it took 15,000 years to grow by $0.6Bn, and growth of $1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas in if growth was speeding up, it should have taken longer.&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; This appears to coincide with the beginning of agriculture, in around 9000BC.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-2406&amp;quot; title=&amp;#039; Khan Academy. “The Dawn of Agriculture (Article).” Accessed April 14, 2020. &amp;amp;lt;a href=&amp;quot;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;quot;&amp;amp;gt;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Between 10,000 BC and 4,000 BC, GWP doubled in 6,000 years, yet there is no record of it doubling earlier in as few as 24,000 years. The records at that point are fairly sparse, so this is less clear, but it seems unlikely that there was a doubling in 24,000 years.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-2406&amp;quot; title=&amp;quot;Toward the end of the period it took 15,000 years to grow by \$0.6Bn, and growth of \$1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas if growth was speeding up, it should have taken longer.&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; This appears to coincide with the beginning of agriculture, in around 9000BC.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-2406&amp;quot; title=&amp;#039; Khan Academy. “The Dawn of Agriculture (Article).” Accessed April 14, 2020. &amp;amp;lt;a href=&amp;quot;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;quot;&amp;amp;gt;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;/ul&amp;gt;
  &amp;lt;/HTML&amp;gt;
  
  
@@ -107,9 +107,9 @@
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
  &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-4-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;[Note May 13 2020: This sheet is temporarily wrong.]&amp;lt;s&amp;gt;Instances coincide with &amp;lt;a href=&amp;quot;https://docs.google.com/spreadsheets/d/1Muz2ftyDUUewMTZPxYxeXF-uj6lBKYP-O3-IvdtHhCo/edit?ts=5e95f280#gid=0&amp;amp;amp;range=G:G&amp;quot;&amp;gt;Column G in this spreadsheet&amp;lt;/a&amp;gt; giving a number higher than 4, when &amp;lt;a href=&amp;quot;https://docs.google.com/spreadsheets/d/1Muz2ftyDUUewMTZPxYxeXF-uj6lBKYP-O3-IvdtHhCo/edit?ts=5e95f280#gid=0&amp;amp;amp;range=E2&amp;quot;&amp;gt;E2&amp;lt;/a&amp;gt; is set to 2.&amp;lt;/s&amp;gt;&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-4-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Toward the end of the period it took 15,000 years to grow by $0.6Bn, and growth of $1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas in if growth was speeding up, it should have taken longer.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Toward the end of the period it took 15,000 years to grow by \$0.6Bn, and growth of \$1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas if growth was speeding up, it should have taken longer.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
  &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-6-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; Khan Academy. “The Dawn of Agriculture (Article).” Accessed April 14, 2020. &amp;lt;a href=&amp;quot;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;quot;&amp;gt;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-6-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -70,9 +70,9 @@
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;ul&amp;gt;
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Between 4,000 BC and 3,000 BC, GWP doubled in 1,000 years, yet it had never before doubled in as few as 4000 years&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
- &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Between 10,000 BC and 4,000 BC, GWP doubled in 6,000 years, yet there is no record of it doubling earlier in as few as 24,000 years. The records at that point are fairly sparse, so this is less clear, but it seems unlikely that there was a doubling in 24,000 years.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-2406&amp;quot; title=&amp;quot;Toward the end of the period it took 15,000 years to grow by $0.6Bn, and growth of $1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas in if growth was speeding up, it should have taken longer.&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; This appears to coincide with the beginning of agriculture, in around 9000BC.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-2406&amp;quot; title=&amp;#039; Khan Academy. “The Dawn of Agriculture (Article).” Accessed April 14, 2020. &amp;amp;lt;a href=&amp;quot;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;quot;&amp;amp;gt;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Between 10,000 BC and 4,000 BC, GWP doubled in 6,000 years, yet there is no record of it doubling earlier in as few as 24,000 years. The records at that point are fairly sparse, so this is less clear, but it seems unlikely that there was a doubling in 24,000 years.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-2406&amp;quot; title=&amp;quot;Toward the end of the period it took 15,000 years to grow by \$0.6Bn, and growth of \$1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas if growth was speeding up, it should have taken longer.&amp;quot;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; This appears to coincide with the beginning of agriculture, in around 9000 BC.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-2406&amp;quot; title=&amp;#039; Khan Academy. “The Dawn of Agriculture (Article).” Accessed April 14, 2020. &amp;amp;lt;a href=&amp;quot;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;quot;&amp;amp;gt;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;amp;lt;/a&amp;amp;gt;. &amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;/ul&amp;gt;
  &amp;lt;/HTML&amp;gt;
  
  
@@ -107,9 +107,9 @@
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
  &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-4-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;[Note May 13 2020: This sheet is temporarily wrong.]&amp;lt;s&amp;gt;Instances coincide with &amp;lt;a href=&amp;quot;https://docs.google.com/spreadsheets/d/1Muz2ftyDUUewMTZPxYxeXF-uj6lBKYP-O3-IvdtHhCo/edit?ts=5e95f280#gid=0&amp;amp;amp;range=G:G&amp;quot;&amp;gt;Column G in this spreadsheet&amp;lt;/a&amp;gt; giving a number higher than 4, when &amp;lt;a href=&amp;quot;https://docs.google.com/spreadsheets/d/1Muz2ftyDUUewMTZPxYxeXF-uj6lBKYP-O3-IvdtHhCo/edit?ts=5e95f280#gid=0&amp;amp;amp;range=E2&amp;quot;&amp;gt;E2&amp;lt;/a&amp;gt; is set to 2.&amp;lt;/s&amp;gt;&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-4-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
- &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Toward the end of the period it took 15,000 years to grow by $0.6Bn, and growth of $1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas in if growth was speeding up, it should have taken longer.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Toward the end of the period it took 15,000 years to grow by \$0.6Bn, and growth of \$1.8Bn would have been needed for a doubling. So assuming linear growth at the end-of-period rate, this would have taken around 45,000 years, whereas if growth was speeding up, it should have taken longer.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
  &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-6-2406&amp;quot;&amp;gt;&amp;lt;/span&amp;gt; Khan Academy. “The Dawn of Agriculture (Article).” Accessed April 14, 2020. &amp;lt;a href=&amp;quot;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;quot;&amp;gt;https://www.khanacademy.org/humanities/world-history/world-history-beginnings/birth-agriculture-neolithic-revolution/a/where-did-agriculture-come-from&amp;lt;/a&amp;gt;. &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-6-2406&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
  &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Preliminary survey of prescient actions</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/preliminary_survey_of_prescient_actions?rev=1683495439&amp;do=diff"/>
        <published>2023-05-07T21:37:19+00:00</published>
        <updated>2023-05-07T21:37:19+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/preliminary_survey_of_prescient_actions?rev=1683495439&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -311,9 +311,9 @@
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Alexander Fleming&amp;lt;/strong&amp;gt; warned, in his 1945 Nobel Lecture, that widespread access to antibiotics without supervision may lead to antibiotic resistance.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-11-2362&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-11-2362&amp;quot; title=&amp;#039;“The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant.” “Wayback Machine,” March 31, 2018.&amp;amp;lt;a href=&amp;quot;https://web.archive.org/web/20180331001640/https://www.nobelprize.org/nobel_prizes/medicine/laureates/1945/fleming-lecture.pdf&amp;quot;&amp;amp;gt; https://web.archive.org/web/20180331001640/https://www.nobelprize.org/nobel_prizes/medicine/laureates/1945/fleming-lecture.pdf&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;11&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; We are uncertain of the impact of Fleming’s warning, whether he took additional action to mitigate the risk, or how widespread within the scientific community such concerns were, but our impression is that it was not a widely known issue, that his was an early warning, and that his judgement was generally taken seriously by the time of his speech. His warning preceded the first documented cases of penicillin-resistant bacteria by more than 20 years, and the threat of antimicrobial resistance seems to be broadly analogous to AI risk on most of our criteria, though it does seem that feedback was available throughout efforts to reduce the threat.&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;
- 
+ //Update: see our [[https://aiimpacts.org/wp-content/uploads/2023/04/Alexander_Fleming__antibiotic_resistance__and_relevant_lessons_for_the_mitigation_of_risk_from_advanced_artificial_intelligence.pdf|full report]] about Alexander Fleming.//
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;The Treaty on the Non-Proliferation of Nuclear Weapons&amp;lt;/strong&amp;gt; required many actions from many actors, but it seems to have required a complex prediction about technological development and geopolitics to address a severe threat, was specific to a particular threat, and had limited opportunities for feedback. We are uncertain if any of the specific actions will prove to be prescient on further investigation, but it seems promising.&amp;lt;br/&amp;gt;&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -311,9 +311,9 @@
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Alexander Fleming&amp;lt;/strong&amp;gt; warned, in his 1945 Nobel Lecture, that widespread access to antibiotics without supervision may lead to antibiotic resistance.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-11-2362&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-11-2362&amp;quot; title=&amp;#039;“The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant.” “Wayback Machine,” March 31, 2018.&amp;amp;lt;a href=&amp;quot;https://web.archive.org/web/20180331001640/https://www.nobelprize.org/nobel_prizes/medicine/laureates/1945/fleming-lecture.pdf&amp;quot;&amp;amp;gt; https://web.archive.org/web/20180331001640/https://www.nobelprize.org/nobel_prizes/medicine/laureates/1945/fleming-lecture.pdf&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;11&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; We are uncertain of the impact of Fleming’s warning, whether he took additional action to mitigate the risk, or how widespread within the scientific community such concerns were, but our impression is that it was not a widely known issue, that his was an early warning, and that his judgement was generally taken seriously by the time of his speech. His warning preceded the first documented cases of penicillin-resistant bacteria by more than 20 years, and the threat of antimicrobial resistance seems to be broadly analogous to AI risk on most of our criteria, though it does seem that feedback was available throughout efforts to reduce the threat.&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;
- 
+ //Update: see our [[https://aiimpacts.org/wp-content/uploads/2023/04/Alexander_Fleming__antibiotic_resistance__and_relevant_lessons_for_the_mitigation_of_risk_from_advanced_artificial_intelligence.pdf|full report]] about Alexander Fleming.//
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;The Treaty on the Non-Proliferation of Nuclear Weapons&amp;lt;/strong&amp;gt; required many actions from many actors, but it seems to have required a complex prediction about technological development and geopolitics to address a severe threat, was specific to a particular threat, and had limited opportunities for feedback. We are uncertain if any of the specific actions will prove to be prescient on further investigation, but it seems promising.&amp;lt;br/&amp;gt;&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Research topic: Hardware, software and AI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/research_topic_hardware_software_and_ai?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/research_topic_hardware_software_and_ai?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,155 @@
+ ====== Research topic: Hardware, software and AI ======
+ 
+ // Published 19 February, 2015; last updated 10 December, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This is the first in a sequence of articles outlining research which could help forecast AI development.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ 
+ ===== Interpretation =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Concrete research projects are in boxes. ∑5 ∆8  means we guess the project will take (very) roughly five hours, and we rate its value (very) roughly 8/10.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Most projects could be done to very different degrees of depth, or at very different scales. Our time cost estimates correspond to a size that we would be likely to intend if we were to do the project. Value estimates are merely ordinal indicators of worth, based on our intuitive sense, and unworthy of being taken very seriously.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ 
+ 
+ ===== 1. How does AI progress depend on hardware and software? =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;At a high level, AI improves when people make better software, when they can run it on better hardware, when they gather bigger, better training sets, etc. This makes present-day hardware and software progress a natural place to look for evidence about when advanced AI will arrive. In order to interpret any such data however, it is important to know how these pieces fit together. For instance, is the progress we see now mostly driven by hardware progress, or software progress? Can the same level of performance usually be achieved by widely varying mixtures of hardware and software? Does progress on software depend on progress on hardware?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;It is important to understand the relationship between hardware, software and AI for several reasons. If hardware progress is the main driver of AI progress, then quite different evidence would tell us about AI timelines than if software is the main driver. Thus different research is valuable, and different timelines are likely. Many people base their AI predictions on hardware progress, while others decline to, so it would be broadly useful to know whether one should. We also expect understanding here to be generally useful.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;So we think research in this direction seems valuable. We also think several projects seem tractable. Yet little appears to have been done in this direction. Thus this topic seems a high priority.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1.1 How does AI progress depend qualitatively on hardware and software progress? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;For instance, will human-level AI appear when we have both a certain amount of hardware, and certain developments in software? Or can hardware and software substitute for one another? Substitution seems a natural model of the relationship between hardware and software, since anecdotally many tasks can be done by low quality software and lots of hardware, or by high quality software and less hardware. However the extent of this is unclear. This kind of model is also &amp;lt;a href=&amp;quot;http://aiimpacts.org/how-ai-timelines-are-estimated/&amp;quot; title=&amp;quot;How AI timelines are estimated&amp;quot;&amp;gt;not commonly used&amp;lt;/a&amp;gt; in estimating &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:list_of_analyses_of_time_to_human-level_ai&amp;quot; title=&amp;quot;List of Analyses of Time to Human-Level AI&amp;quot;&amp;gt;AI timelines&amp;lt;/a&amp;gt;, so judging whether it should be might be a useful contribution. Having a good model would also bear on the priority of other research directions. As far as we know, this issue has received almost no attention. It seems moderately tractable.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.1.A Evaluate qualitative models of the relationships between hardware, software and AI&amp;lt;/strong&amp;gt; ∑30 ∆5&amp;lt;br/&amp;gt;
+                 One way to approach the question of qualitative relationships is to assume some model, and work on projects such as those in 1.2 that measure quantitative details of the model, then revise the model if the measurements don’t make sense in it. Before that step, we might spend a short time detailing plausible models, and examining empirical and theoretical evidence we might already have, or could cheaply find. If we were going to follow up with empirical research, we would think about what evidence we would expect the research to reveal, given alternative models.&amp;lt;br/&amp;gt;
+ &amp;lt;span style=&amp;quot;color: #e6e6e6;&amp;quot;&amp;gt;~&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;
+                 For instance, we find the hardware-software &amp;lt;a href=&amp;quot;http://en.wikipedia.org/wiki/Indifference_curve&amp;quot;&amp;gt;indifference curve&amp;lt;/a&amp;gt; model described briefly above (and outlined better in a &amp;lt;a href=&amp;quot;http://aiimpacts.org/how-ai-timelines-are-estimated/&amp;quot; title=&amp;quot;How AI timelines are estimated&amp;quot;&amp;gt;blog post&amp;lt;/a&amp;gt;) plausible. Here are some ways it might be inadequate, that we might consider in evaluating it:&amp;lt;/p&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;‘Hardware’ and ‘software’ are not sufficiently measurable entities for a ‘level’ of each in some domain to produce a stable level of performance.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Performance depends strongly on other factors, e.g. exactly what kind of hardware and software progress you make, unique details of the software being developed, training data available.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Different problem types, and different performance metrics on them, have different kinds of behavior.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;There are ‘indifference curves’ in a sense but they are not sufficiently consistent to be worth reasoning about.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Humanity’s technological progress is not well characterized as an expanding rectangle of feasible hardware and software levels, but rather as a complicated region of feasible combinations.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1.2 How much do marginal hardware and software improvements alter AI performance? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;As mentioned above, this question is key to determining which other investigations are worthwhile. Naturally, it could also change our timelines substantially. This question thus seems important to resolve. We think the projects here are particularly tractable, though not particularly cheap. For all of these projects, we would probably choose a specific set of benchmarks on particular problems to focus on. We might do several of these projects on the same set of benchmarks, to trace a more complete picture.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.A Search for natural experiments combining modern hardware and early software approaches or vice versa. &amp;lt;/strong&amp;gt;∑80 ∆7&amp;lt;br/&amp;gt;
+                 For instance, we might find early projects with very large hardware budgets, or recent projects with &amp;lt;a href=&amp;quot;http://1kchess.an3.es/&amp;quot;&amp;gt;intentionally restricted hardware&amp;lt;/a&amp;gt;. Where these were tested on commonly used benchmarks, we can use them to map out the broad contributions of hardware and software to progress. For instance, if very small chess programs today run better than old chess programs which used similar (but then normal) amounts of hardware, then the difference between them can be attributed to improving software, roughly.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.B Apply a modern understanding of software to early hardware&amp;lt;/strong&amp;gt; ∑2,000 ∆9&amp;lt;br/&amp;gt;
+                 Choose a benchmark problem that people worked on in the past, e.g. in the 1980s. Use a modern understanding of AI to solve the problem again, still using 1980’s hardware. Compare this to how researchers did in the 1980’s. This project requires substantial time from at least one AI researcher. Ideally they would spend a similar amount of effort as the past researchers did, so it may be worth choosing a problem where it is known that an achievable level of effort was applied in the past.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.C Apply early software understanding to modern hardware&amp;lt;/strong&amp;gt; ∑2,000 ∆8&amp;lt;br/&amp;gt;
+                 Using contemporary hardware and a 1970’s or 1980’s understanding of connectionism, observe the extent to which a modern AI researcher (or student) could replicate contemporary performance on benchmark AI problems. This project is relatively expensive, among those we are describing. It requires substantial time from collaborators with a historically accurate minimal understanding of AI. Students may satisfy this role well, if their education is incomplete in the right ways. One might compare to the work of similar students who had also learned about modern methods.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.D Measure marginal effects of hardware and software in existing performance trends &amp;lt;/strong&amp;gt; ∑100 ∆8&amp;lt;br/&amp;gt;
+                 Often the same software can be used with modest changes in hardware, so changes in performance from hardware over small margins can be measured. Improved software is also often written to be run on the same hardware as earlier software, so changes in performance from software alone can be measured over moderate margins. Thus we can often estimate these marginal changes from looking at existing performance measurements.&amp;lt;br/&amp;gt;
+ &amp;lt;span style=&amp;quot;color: #e6e6e6;&amp;quot;&amp;gt;~&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;
+                 We can also look at overall progress over time on some applications, and factor out what we know about hardware or software change, assuming it is close to the marginal values measured by the above methods. For instance, we can see how much individual Go programs improve with more hardware, and then we can look at longer term improvements in computer Go, and guess how much of that improvement came from hardware, given our earlier estimate of marginal improvement from hardware. In general these estimates will be less valid over larger distances, as the impact of hardware or software diverges from their marginal impact, and because arbitrary levels of hardware and software can’t generally be combined without designing the software to make use of the hardware. &amp;lt;a href=&amp;quot;http://intelligence.org/files/AlgorithmicProgress.pdf&amp;quot;&amp;gt;Grace 2013&amp;lt;/a&amp;gt; includes some work on this project.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.E Interview AI researchers on the relative importance of hardware and software in driving the progress they have seen.&amp;lt;/strong&amp;gt; ∑20 ∆7&amp;lt;br/&amp;gt;
+                 AI researchers likely have firsthand experience regarding how hardware and software contribute to overall progress within the vicinity of their own work. This project will probably give relatively noisy estimates, but is very cheap compared to others described here. One could just ask for views on this question, and supporting anecdotes, or devise a more structured questionnaire beforehand.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1.3 How do hardware and software progress interact? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Do hardware and software progress relatively independently, or for instance do advances in hardware encourage advances in software? This might change how we generally expect software progress to proceed, and what combinations of hardware and software we expect to first produce human-level AI. We are likely to get some information about this from other projects looking at historical performance data e.g. 1.2.D. For instance, if overall progress is generally proportional to hardware progress, even as hardware progress varies, then this would be suggestive. Below are further possibilities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3.A Find natural experiments &amp;lt;/strong&amp;gt;∑80 ∆4&amp;lt;br/&amp;gt;
+                 Search for performance data from cases where hardware being used for an application was largely constant then shifted upward at some point. Such cases are probably hard to find, and hard to interpret when found. However, a short search for them may be worthwhile.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3.B Interview researchers&amp;lt;/strong&amp;gt; ∑20 ∆7&amp;lt;br/&amp;gt;
+                 If hardware tends to affect software research, it is likely that researchers notice this, and can talk about it. This seems a cheap and effective method of learning qualitatively about the topic. This project should probably be combined with 1.2.E.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3.C Consider plausible models&amp;lt;/strong&amp;gt; ∑10 ∆5&amp;lt;br/&amp;gt;
+                 This is a short theoretical project that would benefit from being done in concert with 1.3.B (interview researchers), since researchers probably have a relatively good understanding of which models are plausible, and we are likely to ask better questions of them if we have thought about the topic. This project should probably be combined with 1.1.A.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,155 @@
+ ====== Research topic: Hardware, software and AI ======
+ 
+ // Published 19 February, 2015; last updated 10 December, 2020 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This is the first in a sequence of articles outlining research which could help forecast AI development.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ 
+ ===== Interpretation =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Concrete research projects are in boxes. ∑5 ∆8  means we guess the project will take (very) roughly five hours, and we rate its value (very) roughly 8/10.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Most projects could be done to very different degrees of depth, or at very different scales. Our time cost estimates correspond to a size that we would be likely to intend if we were to do the project. Value estimates are merely ordinal indicators of worth, based on our intuitive sense, and unworthy of being taken very seriously.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ 
+ 
+ 
+ ===== 1. How does AI progress depend on hardware and software? =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;At a high level, AI improves when people make better software, when they can run it on better hardware, when they gather bigger, better training sets, etc. This makes present-day hardware and software progress a natural place to look for evidence about when advanced AI will arrive. In order to interpret any such data however, it is important to know how these pieces fit together. For instance, is the progress we see now mostly driven by hardware progress, or software progress? Can the same level of performance usually be achieved by widely varying mixtures of hardware and software? Does progress on software depend on progress on hardware?&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;It is important to understand the relationship between hardware, software and AI for several reasons. If hardware progress is the main driver of AI progress, then quite different evidence would tell us about AI timelines than if software is the main driver. Thus different research is valuable, and different timelines are likely. Many people base their AI predictions on hardware progress, while others decline to, so it would be broadly useful to know whether one should. We also expect understanding here to be generally useful.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;So we think research in this direction seems valuable. We also think several projects seem tractable. Yet little appears to have been done in this direction. Thus this topic seems a high priority.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1.1 How does AI progress depend qualitatively on hardware and software progress? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;For instance, will human-level AI appear when we have both a certain amount of hardware, and certain developments in software? Or can hardware and software substitute for one another? Substitution seems a natural model of the relationship between hardware and software, since anecdotally many tasks can be done by low quality software and lots of hardware, or by high quality software and less hardware. However the extent of this is unclear. This kind of model is also &amp;lt;a href=&amp;quot;http://aiimpacts.org/how-ai-timelines-are-estimated/&amp;quot; title=&amp;quot;How AI timelines are estimated&amp;quot;&amp;gt;not commonly used&amp;lt;/a&amp;gt; in estimating &amp;lt;a href=&amp;quot;/doku.php?id=ai_timelines:list_of_analyses_of_time_to_human-level_ai&amp;quot; title=&amp;quot;List of Analyses of Time to Human-Level AI&amp;quot;&amp;gt;AI timelines&amp;lt;/a&amp;gt;, so judging whether it should be might be a useful contribution. Having a good model would also bear on the priority of other research directions. As far as we know, this issue has received almost no attention. It seems moderately tractable.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.1.A Evaluate qualitative models of the relationships between hardware, software and AI&amp;lt;/strong&amp;gt; ∑30 ∆5&amp;lt;br/&amp;gt;
+                 One way to approach the question of qualitative relationships is to assume some model, and work on projects such as those in 1.2 that measure quantitative details of the model, then revise the model if the measurements don’t make sense in it. Before that step, we might spend a short time detailing plausible models, and examining empirical and theoretical evidence we might already have, or could cheaply find. If we were going to follow up with empirical research, we would think about what evidence we would expect the research to reveal, given alternative models.&amp;lt;br/&amp;gt;
+ &amp;lt;span style=&amp;quot;color: #e6e6e6;&amp;quot;&amp;gt;~&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;
+                 For instance, we find the hardware-software &amp;lt;a href=&amp;quot;http://en.wikipedia.org/wiki/Indifference_curve&amp;quot;&amp;gt;indifference curve&amp;lt;/a&amp;gt; model described briefly above (and outlined better in a &amp;lt;a href=&amp;quot;http://aiimpacts.org/how-ai-timelines-are-estimated/&amp;quot; title=&amp;quot;How AI timelines are estimated&amp;quot;&amp;gt;blog post&amp;lt;/a&amp;gt;) plausible. Here are some ways it might be inadequate, that we might consider in evaluating it:&amp;lt;/p&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;‘Hardware’ and ‘software’ are not sufficiently measurable entities for a ‘level’ of each in some domain to produce a stable level of performance.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Performance depends strongly on other factors, e.g. exactly what kind of hardware and software progress you make, unique details of the software being developed, training data available.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Different problem types, and different performance metrics on them, have different kinds of behavior.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;There are ‘indifference curves’ in a sense but they are not sufficiently consistent to be worth reasoning about.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Humanity’s technological progress is not well characterized by an expanding rectangle of feasible hardware and software levels, but more as a complicated region of feasible combinations.&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1.2 How much do marginal hardware and software improvements alter AI performance? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;As mentioned above, this question is key to determining which other investigations are worthwhile. Naturally, it could also change our timelines substantially. This question thus seems important to resolve. We think the projects here are particularly tractable, though not particularly cheap. For all of these projects, we would probably choose a specific set of benchmarks on particular problems to focus on. We might do several of these projects on the same set of benchmarks, to trace a more complete picture.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.A Search for natural experiments combining modern hardware and early software approaches or vice versa. &amp;lt;/strong&amp;gt;∑80 ∆7&amp;lt;br/&amp;gt;
+                 For instance, we might find early projects with very large hardware budgets, or recent projects with &amp;lt;a href=&amp;quot;http://1kchess.an3.es/&amp;quot;&amp;gt;intentionally restricted hardware&amp;lt;/a&amp;gt;. Where these were tested on commonly used benchmarks, we can use them to map out the broad contributions of hardware and software to progress. For example, if very small chess programs today run better than old chess programs which used similar (but then normal) amounts of hardware, then the difference between them can roughly be attributed to improved software.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.B Apply a modern understanding of software to early hardware&amp;lt;/strong&amp;gt; ∑2,000 ∆9&amp;lt;br/&amp;gt;
+                 Choose a benchmark problem that people worked on in the past, e.g. in the 1980s. Use a modern understanding of AI to solve the problem again, still using 1980s hardware. Compare this to how researchers did in the 1980s. This project requires substantial time from at least one AI researcher. Ideally they would spend a similar amount of effort as the past researchers did, so it may be worth choosing a problem where it is known that an achievable level of effort was applied in the past.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.C Apply early software understanding to modern hardware&amp;lt;/strong&amp;gt; ∑2,000 ∆8&amp;lt;br/&amp;gt;
+                 Using contemporary hardware and a 1970s or 1980s understanding of connectionism, observe the extent to which a modern AI researcher (or student) could replicate contemporary performance on benchmark AI problems. This project is relatively expensive among those we are describing. It requires substantial time from collaborators with a historically accurate minimal understanding of AI. Students may satisfy this role well, if their education is incomplete in the right ways. One might compare to the work of similar students who had also learned about modern methods.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.D Measure marginal effects of hardware and software in existing performance trends &amp;lt;/strong&amp;gt; ∑100 ∆8&amp;lt;br/&amp;gt;
+                 Often the same software can be used with modest changes in hardware, so changes in performance from hardware over small margins can be measured. Improved software is also often written to be run on the same hardware as earlier software, so changes in performance from software alone can be measured over moderate margins. Thus we can often estimate these marginal changes from looking at existing performance measurements.&amp;lt;br/&amp;gt;
+ &amp;lt;span style=&amp;quot;color: #e6e6e6;&amp;quot;&amp;gt;~&amp;lt;/span&amp;gt;&amp;lt;br/&amp;gt;
+                 We can also look at overall progress over time on some applications, and factor out what we know about hardware or software change, assuming it is close to the marginal values measured by the above methods. For instance, we can see how much individual Go programs improve with more hardware, and then we can look at longer-term improvements in computer Go, and guess how much of that improvement came from hardware, given our earlier estimate of marginal improvement from hardware. In general, these estimates will be less valid over larger distances, as the impact of hardware or software diverges from its marginal impact, and because arbitrary combinations of hardware and software can’t generally be combined without designing the software to make use of the hardware. &amp;lt;a href=&amp;quot;http://intelligence.org/files/AlgorithmicProgress.pdf&amp;quot;&amp;gt;Grace 2013&amp;lt;/a&amp;gt; includes some work on this project.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
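+ 
+ The attribution in 1.2.D reduces to simple bookkeeping once the marginal figures are in hand. A minimal sketch of the arithmetic, in which every number is hypothetical and chosen only to show the steps:

```python
# Hypothetical decomposition of long-run progress in a game-playing domain
# into hardware and software shares; all figures are invented for illustration.
elo_per_hw_doubling = 50.0   # marginal gain measured on one program
hw_doublings = 10.0          # hardware growth over the period studied
total_elo_gain = 800.0       # observed long-run improvement

hardware_share = elo_per_hw_doubling * hw_doublings   # 500 Elo
software_share = total_elo_gain - hardware_share      # 300 Elo

print(f"hardware: {hardware_share:.0f} Elo, software: {software_share:.0f} Elo")
```

+ As the text cautions, this is only valid near the margin: the per-doubling figure need not hold across many years of hardware growth.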
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.2.E Interview AI researchers on the relative importance of hardware and software in driving the progress they have seen.&amp;lt;/strong&amp;gt; ∑20 ∆7&amp;lt;br/&amp;gt;
+                 AI researchers likely have firsthand experience regarding how hardware and software contribute to overall progress within the vicinity of their own work. This project will probably give relatively noisy estimates, but is very cheap compared to others described here. One could just ask for views on this question, and supporting anecdotes, or devise a more structured questionnaire beforehand.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== 1.3 How do hardware and software progress interact? ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Do hardware and software progress relatively independently, or for instance do advances in hardware encourage advances in software? This might change how we generally expect software progress to proceed, and what combinations of hardware and software we expect to first produce human-level AI. We are likely to get some information about this from other projects looking at historical performance data e.g. 1.2.D. For instance, if overall progress is generally proportional to hardware progress, even as hardware progress varies, then this would be suggestive. Below are further possibilities.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3.A Find natural experiments &amp;lt;/strong&amp;gt;∑80 ∆4&amp;lt;br/&amp;gt;
+                 Search for performance data from cases where hardware being used for an application was largely constant then shifted upward at some point. Such cases are probably hard to find, and hard to interpret when found. However, a short search for them may be worthwhile.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3.B Interview researchers&amp;lt;/strong&amp;gt; ∑20 ∆7&amp;lt;br/&amp;gt;
+                 If hardware tends to affect software research, it is likely that researchers notice this, and can talk about it. This seems a cheap and effective method of learning qualitatively about the topic. This project should probably be combined with 1.2.E.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;blockquote&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;1.3.C Consider plausible models&amp;lt;/strong&amp;gt; ∑10 ∆5&amp;lt;br/&amp;gt;
+                 This is a short theoretical project that would benefit from being done in concert with 1.3.B (interview researchers), since researchers probably have a relatively good understanding of which models are plausible, and we are likely to ask better questions of them if we have thought about the topic. This project should probably be combined with 1.1.A.&amp;lt;/p&amp;gt;
+ &amp;lt;/blockquote&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Returns to scale in research</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/returns_to_scale_in_research?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/returns_to_scale_in_research?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,233 @@
+ ====== Returns to scale in research ======
+ 
+ // Published 06 July, 2016; last updated 28 September, 2017 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;When universities or university departments produce research outputs—such as published papers—they sometimes experience increasing returns to scale, sometimes constant returns to scale, and sometimes decreasing returns to scale. At the level of nations, however, R&amp;amp;amp;D tends to see increasing returns to scale. These results are preliminary.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Background&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“Returns to scale” refers to the responsiveness of a process’ outputs when all inputs (e.g. researcher hours, equipment) are increased by a certain proportion. If all outputs (e.g. published papers, citations, patents) increase by that same proportion, the process is said to exhibit constant returns to scale. Increasing returns to scale and decreasing returns to scale refer to situations where outputs still increase, but by a higher or lower proportion, respectively.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
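+ 
+ The definition reduces to a one-line comparison: scale every input by a factor t, observe the factor by which output grows, and compare the two. A minimal sketch, using a made-up production function:

```python
# Classify returns to scale for a production function f mapping an input
# list to a single output. The function used below is invented for
# illustration; real studies estimate such relationships from data.
def returns_to_scale(f, inputs, t=2.0, tol=1e-9):
    base = f(inputs)
    scaled = f([t * i for i in inputs])
    ratio = scaled / base            # factor by which output grew
    if ratio > t + tol:
        return "increasing"
    if t > ratio + tol:
        return "decreasing"
    return "constant"

# Cobb-Douglas-style example: exponents sum to 1.3, so doubling both
# inputs multiplies output by 2**1.3, more than double.
f = lambda v: v[0] ** 0.6 * v[1] ** 0.7
print(returns_to_scale(f, [100.0, 50.0]))   # prints "increasing"
```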
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Assessing returns to scale in research may be useful in predicting certain aspects of the development of artificial intelligence, in particular the dynamics of an intelligence explosion.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Results&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The conclusions in this article are drawn from an incomplete review of academic literature assessing research efficiency, presented in Table 1. These papers assess research in terms of its direct outputs such as published papers, citations, and patents. The broader effects of the research are not considered.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Most of the papers listed below use the &amp;lt;a href=&amp;quot;http://www.springer.com/cda/content/document/cda_downloaddocument/9780387332116-c2.pdf&amp;quot;&amp;gt;Data Envelopment Analysis (DEA) technique&amp;lt;/a&amp;gt;, a quantitative method commonly used to assess the efficiency of universities and research activities. It is capable of isolating the scale efficiency of the individual departments, universities or countries being studied.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
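+ 
+ To make the DEA calculation concrete, here is a minimal sketch of the input-oriented CCR (constant-returns) multiplier model, the linear program at the core of DEA. The three units and their single input/output figures are invented for illustration and do not come from any of the studies cited:

```python
# Sketch of the input-oriented CCR (constant-returns) DEA multiplier model.
# The three "universities" and their figures below are invented.
import numpy as np
from scipy.optimize import linprog

x = np.array([[2.0], [4.0], [8.0]])   # inputs  (n_units, n_inputs), e.g. staff
y = np.array([[2.0], [5.0], [7.0]])   # outputs (n_units, n_outputs), e.g. papers

def ccr_efficiency(x, y, o):
    """Efficiency of unit o: maximise u.y_o subject to v.x_o = 1 and
    u.y_j - v.x_j at most 0 for every unit j, with weights u, v nonnegative."""
    n, k = x.shape
    m = y.shape[1]
    c = np.concatenate([-y[o], np.zeros(k)])       # linprog minimises
    A_ub = np.hstack([y, -x])                      # one constraint row per unit
    A_eq = np.concatenate([np.zeros(m), x[o]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None), method="highs")
    return -res.fun

effs = [ccr_efficiency(x, y, o) for o in range(len(x))]
print(effs)   # approximately [0.8, 1.0, 0.7]; unit 1 is on the CRS frontier
```

+ Dividing each unit's constant-returns (CCR) score by its variable-returns (BCC) score is the standard DEA route to the scale efficiency that the studies in Table 1 report.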
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;table&amp;gt;
+ &amp;lt;tbody&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Paper&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Level of comparison&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Activities assessed&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Results pertaining to returns to scale&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0048733306002149&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Wang &amp;amp;amp; Huang 2007&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Countries&amp;lt;/strong&amp;gt;’ overall R&amp;amp;amp;D activities&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Increasing returns to scale&amp;lt;/strong&amp;gt; in research are exhibited by more than two-thirds of the sample&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0038012105000352&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Kocher, Luptacik &amp;amp;amp; Sutter 2006&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Countries&amp;lt;/strong&amp;gt;’ R&amp;amp;amp;D in economics&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Increasing returns to scale&amp;lt;/strong&amp;gt; are found in all countries in the sample except the US&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0048733305000570&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Cherchye &amp;amp;amp; Abeele 2005&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Dutch &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;’ research in Economics and Business Management&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Returns to scale vary between decreasing, constant and increasing depending on each university’s specialization&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.jstor.org/stable/2663642&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Johnes &amp;amp;amp; Johnes 1993&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;UK &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;’ research in economics&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Constant returns to scale&amp;lt;/strong&amp;gt; are found in the sample as a whole&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0038012100000100&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Avkiran 2001&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Australian &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Constant returns to scale&amp;lt;/strong&amp;gt; found in most sampled universities&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/0038012188900080&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Ahn 1988&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;US &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Decreasing returns to scale&amp;lt;/strong&amp;gt; on average&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0272775705000713&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Johnes 2006&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;English &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Close to &amp;lt;strong&amp;gt;constant returns to scale&amp;lt;/strong&amp;gt; exhibited by most universities sampled&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0305048306000211&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Kao &amp;amp;amp; Hung 2008&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Departments&amp;lt;/strong&amp;gt; of a Taiwanese university&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Increasing returns to scale&amp;lt;/strong&amp;gt; exhibited by the five most scale-inefficient departments. However, no aggregate measure of returns to scale within the sample is presented.&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;/tbody&amp;gt;
+ &amp;lt;/table&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;&amp;lt;i&amp;gt;Table 1&amp;lt;/i&amp;gt;&amp;lt;i&amp;gt;:&amp;lt;/i&amp;gt; &amp;lt;i&amp;gt;Sample of studies of research efficiency that assess returns to scale&amp;lt;/i&amp;gt;&amp;lt;/strong&amp;gt;&amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Note: This table only identifies increasing/constant/decreasing returns to scale, rather than the size of this effect. Although DEA can measure the relative size of the effect for individual departments/universities/countries within a sample, such results cannot be readily compared between samples/studies.&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Discussion of results&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Of the studies listed in Table 1, the first four are the most relevant to this article, since they focus solely on research inputs and outputs. While the remaining four include educational inputs and outputs, they can still yield worthwhile insights.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Table 1 implies a difference between country-level and university-level returns to scale in research.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The two studies assessing R&amp;amp;amp;D efficiency at the country level, Wang &amp;amp;amp; Huang (2007) and Kocher, Luptacik &amp;amp;amp; Sutter (2006), both identify increasing returns to scale.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The two university-level studies that assessed the scale efficiency of research alone found mixed results. Concretely, Johnes &amp;amp;amp; Johnes (1993) concluded that returns to scale are constant among UK universities, and Cherchye &amp;amp;amp; Abeele (2005) concluded that returns to scale vary among Dutch universities. This ambiguity is echoed by the remainder of the studies listed above, which assess education and research simultaneously and which find evidence of constant, decreasing and increasing returns to scale in different contexts.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Such differences are consistent with the possibility that scale efficiency may be influenced by scale (size) itself. In this framework, as an organisation increases its size, it may experience increasing returns to scale initially, resulting in increased efficiency. However, the efficiency gains from growth may not continue indefinitely; after passing a certain threshold the organisation may experience decreasing returns to scale. The threshold would represent the point of scale efficiency, at which returns to scale are constant and efficiency is maximized with respect to size.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Under this framework, size will influence whether increasing, constant or decreasing returns to scale are experienced. Applying this to research activities, the observation of different returns to scale between country-level and university-level research may mean that the size of a country’s overall research effort and the typical size of its universities are not determined by similar factors. For example, if increasing returns to scale at the country level and decreasing returns to scale at the university level are observed, this may indicate that the overall number of universities is smaller than needed to achieve scale efficiency, but that most of these universities are individually too large to be scale efficient.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Other factors may also contribute to the differences between university-level and country-level observations.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The country level studies use relatively aggregated data, capturing some of the non-university research and development activities in the countries sampled.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Country level research effort is not necessarily subject to some of the constraints which may cause decreasing returns to scale in large universities, such as excessive bureaucracy.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Results may be arbitrarily influenced by differences in the available input and output metrics at the university versus country level.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Limitations to conclusions drawn&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;One limitation of this article is the small scope of the literature review. A more comprehensive review may reveal a different range of conclusions.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Another limitation is that the research outputs studied—published papers, citations, and patents, inter alia—cannot be assumed to correspond directly to incremental knowledge or productivity. This point is expanded upon under “Topics for further investigation” below.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Further limitations arise due to the DEA technique used by most of the studies in Table 1.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;DEA is sensitive to the choice of inputs and outputs, and to measurement errors.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Statistical hypothesis tests are difficult within the DEA framework, making it more difficult to separate signal from noise in interpreting results.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;DEA identifies relative efficiency (composed of scale efficiency and also “pure technical efficiency”) within the sample, meaning that at least one country, university, or department is always identified as fully efficient (including exhibiting full scale efficiency or constant returns to scale). Of course, in practice, no university, organisation or production process is perfectly efficient. Therefore, conclusions drawn from DEA analysis are likely to be more informative for countries, universities, or departments that are not identified as fully efficient.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;It may be questionable whether such a framework—where an optimal scale of production exists, past which decreasing returns to scale are experienced—is a good reflection of the dynamics of research activities. However, the frequent use of the DEA framework in assessing research activities would suggest that it is appropriate.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Topics for further investigation&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The scope of this article is limited to direct research outputs (such as published papers, citations, and patents). While this is valuable, stronger conclusions could be drawn if this analysis were combined with further work investigating the following:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The impact of other sources of new knowledge apart from universities or official R&amp;amp;amp;D expenditure. For example, innovations in company management discovered through “learning by doing” rather than through formal research may be an important source of improvement in economic productivity.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The translation of research outputs (such as published papers, citations, and patents) into incremental knowledge, and the translation of incremental knowledge into extra productive capacity. Assessment of this may be achievable through consideration of the economic returns to research, or of the value of patents generated by research.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Implications for AI&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The scope for an intelligence explosion is likely to be greater if the returns to scale in research are greater. In particular, an AI system capable of conducting research into the improvement of AI could be scaled up faster and more cheaply than human researchers can be trained, for example through deployment on additional hardware. In addition, in the period before any intelligence explosion, a scaling-up of AI research may be observed, especially if the resultant technology were seen to have commercial applications.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;This review is one component of a larger project to quantitatively model an intelligence explosion. This project, in addition to drawing upon the conclusions in this article, will also consider, inter alia, the effect of intelligence on research productivity, and actual increases in artificial intelligence that are plausible from research efforts.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,233 @@
+ ====== Returns to scale in research ======
+ 
+ // Published 06 July, 2016; last updated 28 September, 2017 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;When universities or university departments produce research outputs—such as published papers—they sometimes experience increasing returns to scale, sometimes constant returns to scale, and sometimes decreasing returns to scale. At the level of nations, however, R&amp;amp;amp;D tends to see increasing returns to scale. These results are preliminary.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Background&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;“Returns to scale” refers to the responsiveness of a process’ outputs when all inputs (e.g. researcher hours, equipment) are increased by a certain proportion. If all outputs (e.g. published papers, citations, patents) increase by that same proportion, the process is said to exhibit constant returns to scale. Increasing returns to scale and decreasing returns to scale refer to situations where outputs still increase, but by a higher or lower proportion, respectively.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
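As a worked illustration of these definitions (not part of the original article; the production function and numbers are invented), a Cobb-Douglas production function makes the three regimes concrete: scaling every input by a factor s multiplies output by s raised to the sum of the exponents, so that sum determines whether returns to scale are increasing, constant, or decreasing.

```python
# Illustration of returns to scale with a hypothetical Cobb-Douglas
# production function f(K, L) = K**a * L**b. Scaling both inputs by s
# multiplies output by s**(a + b), so a + b determines the regime.

def output(k, labor, a, b):
    """Cobb-Douglas production: capital k, labour, exponents a and b."""
    return k**a * labor**b

def returns_to_scale(a, b, s=2.0, k=10.0, labor=10.0):
    """Classify the regime by comparing f(s*K, s*L) against s*f(K, L)."""
    base = output(k, labor, a, b)
    scaled = output(s * k, s * labor, a, b)
    ratio = scaled / (s * base)  # >1 increasing, =1 constant, <1 decreasing
    if abs(ratio - 1.0) < 1e-9:
        return "constant"
    return "increasing" if ratio > 1.0 else "decreasing"

print(returns_to_scale(0.6, 0.6))  # a + b = 1.2 -> increasing
print(returns_to_scale(0.5, 0.5))  # a + b = 1.0 -> constant
print(returns_to_scale(0.3, 0.4))  # a + b = 0.7 -> decreasing
```

Doubling inputs under the first parameterisation raises output by a factor of 2^1.2 rather than 2, which is exactly the "outputs increase by a higher proportion" case described above.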
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Assessing returns to scale in research may be useful in predicting certain aspects of the development of artificial intelligence, in particular the dynamics of an intelligence explosion.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Results&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The conclusions in this article are drawn from an incomplete review of academic literature assessing research efficiency, presented in Table 1. These papers assess research in terms of its direct outputs such as published papers, citations, and patents. The broader effects of the research are not considered.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Most of the papers listed below use the &amp;lt;a href=&amp;quot;http://www.springer.com/cda/content/document/cda_downloaddocument/9780387332116-c2.pdf&amp;quot;&amp;gt;Data Envelopment Analysis (DEA) technique&amp;lt;/a&amp;gt;, which is a quantitative technique commonly used to assess the efficiency of universities and research activities. It is capable of isolating the scale efficiency of the individual departments, universities or countries being studied.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
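For readers unfamiliar with DEA, the following sketch (not from the article; the three decision-making units and their input/output figures are invented, and the multiplier-form linear programs are solved with SciPy) shows how scale efficiency can be isolated: the CCR model assumes constant returns to scale, the BCC model allows variable returns, and the ratio of the two scores is the scale efficiency of each unit.

```python
# Minimal multiplier-form DEA sketch on hypothetical data. CCR assumes
# constant returns to scale; BCC adds a free variable w to allow variable
# returns; scale efficiency = CCR score / BCC score for each DMU.
import numpy as np
from scipy.optimize import linprog

# One input (e.g. research staff) and one output (e.g. papers) per DMU.
inputs = np.array([[2.0], [4.0], [8.0]])
outputs = np.array([[2.0], [5.0], [7.0]])

def dea_efficiency(o, variable_returns=False):
    """Efficiency of DMU o: maximize u.y_o (+ w) subject to v.x_o = 1 and
    u.y_j - v.x_j (+ w) <= 0 for every DMU j; u, v >= 0, w free (BCC)."""
    n, m = outputs.shape
    k = inputs.shape[1]
    extra = 1 if variable_returns else 0
    c = np.zeros(m + k + extra)
    c[:m] = -outputs[o]                   # maximize u.y_o -> minimize -u.y_o
    if variable_returns:
        c[-1] = -1.0                      # ... + w
    A_ub = np.hstack([outputs, -inputs] +
                     ([np.ones((n, 1))] if variable_returns else []))
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + k + extra))
    A_eq[0, m:m + k] = inputs[o]          # normalisation v.x_o = 1
    bounds = [(0, None)] * (m + k) + [(None, None)] * extra
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    return -res.fun

for o in range(3):
    ccr = dea_efficiency(o)                         # constant returns
    bcc = dea_efficiency(o, variable_returns=True)  # variable returns
    print(f"DMU {o}: CCR={ccr:.3f} BCC={bcc:.3f} scale={ccr / bcc:.3f}")
```

In this toy sample the smallest and largest units are efficient under variable returns but scale-inefficient under constant returns, mirroring the idea (discussed below) of units operating below or above an optimal scale.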
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;table&amp;gt;
+ &amp;lt;tbody&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Paper&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Level of comparison&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Activities assessed&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Results pertaining to returns to scale&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0048733306002149&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Wang &amp;amp;amp; Huang 2007&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Countries&amp;lt;/strong&amp;gt;’ overall R&amp;amp;amp;D activities&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Increasing returns to scale&amp;lt;/strong&amp;gt; in research are exhibited by more than two-thirds of the sample&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0038012105000352&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Kocher, Luptacik &amp;amp;amp; Sutter 2006&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Countries&amp;lt;/strong&amp;gt;’ R&amp;amp;amp;D in economics&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Increasing returns to scale&amp;lt;/strong&amp;gt; are found in all countries in the sample except the US&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0048733305000570&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Cherchye &amp;amp;amp; Abeele 2005&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Dutch &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;’ research in Economics and Business Management&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Returns to scale vary between decreasing, constant and increasing depending on each university’s specialization&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.jstor.org/stable/2663642&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Johnes &amp;amp;amp; Johnes 1993&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;UK &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;’ research in economics&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Constant returns to scale&amp;lt;/strong&amp;gt; are found in the sample as a whole&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0038012100000100&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Avkiran 2001&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Australian &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Constant returns to scale&amp;lt;/strong&amp;gt; found in most sampled universities&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/0038012188900080&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Ahn 1988&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;US &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Decreasing returns to scale&amp;lt;/strong&amp;gt; on average&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0272775705000713&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Johnes 2006&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;English &amp;lt;strong&amp;gt;universities&amp;lt;/strong&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Close to &amp;lt;strong&amp;gt;constant returns to scale&amp;lt;/strong&amp;gt; exhibited by most universities sampled&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;
+ &amp;lt;a href=&amp;quot;http://www.sciencedirect.com/science/article/pii/S0305048306000211&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Kao &amp;amp;amp; Hung 2008&amp;lt;/strong&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Departments&amp;lt;/strong&amp;gt; of a Taiwanese university&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Research, education&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;strong&amp;gt;Increasing returns to scale&amp;lt;/strong&amp;gt; exhibited by the five most scale-inefficient departments. However, no aggregate measure of returns to scale within the sample is presented.&amp;lt;/span&amp;gt;&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;/tbody&amp;gt;
+ &amp;lt;/table&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;&amp;lt;i&amp;gt;Table 1&amp;lt;/i&amp;gt;&amp;lt;i&amp;gt;:&amp;lt;/i&amp;gt; &amp;lt;i&amp;gt;Sample of studies of research efficiency that assess returns to scale&amp;lt;/i&amp;gt;&amp;lt;/strong&amp;gt;&amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;&amp;lt;br/&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt; &amp;lt;i&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Note: This table only identifies increasing/constant/decreasing returns to scale, rather than the size of this effect. Although DEA can measure the relative size of the effect for individual departments/universities/countries within a sample, such results cannot be readily compared between samples/studies.&amp;lt;/span&amp;gt;&amp;lt;/i&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Discussion of results&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Of the studies listed in Table 1, the first four are the most relevant to this article, since they focus solely on research inputs and outputs. While the remaining four include educational inputs and outputs, they can still yield worthwhile insights.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Table 1 implies a difference between country-level and university-level returns to scale in research.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The two studies assessing R&amp;amp;amp;D efficiency at the country level, Wang &amp;amp;amp; Huang (2007) and Kocher, Luptacik &amp;amp;amp; Sutter (2006), both identify increasing returns to scale.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The two university-level studies that assessed the scale efficiency of research alone found mixed results. Concretely, Johnes &amp;amp;amp; Johnes (1993) concluded that returns to scale are constant among UK universities, and Cherchye &amp;amp;amp; Abeele (2005) concluded that returns to scale vary among Dutch universities. This ambiguity is echoed by the remainder of the studies listed above, which assess education and research simultaneously and which find evidence of constant, decreasing and increasing returns to scale in different contexts.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Such differences are consistent with the possibility that scale efficiency may be influenced by scale (size) itself. In this framework, as an organisation increases its size, it may experience increasing returns to scale initially, resulting in increased efficiency. However, the efficiency gains from growth may not continue indefinitely; after passing a certain threshold the organisation may experience decreasing returns to scale. The threshold would represent the point of scale efficiency, at which returns to scale are constant and efficiency is maximized with respect to size.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Under this framework, size will influence whether increasing, constant or decreasing returns to scale are experienced. Applying this to research activities, the observation of different returns to scale between country-level and university-level research may mean that the size of a country’s overall research effort and the typical size of its universities are not determined by similar factors. For example, if increasing returns to scale at the country level and decreasing returns to scale at the university level are observed, this may indicate that the overall number of universities is smaller than needed to achieve scale efficiency, but that most of these universities are individually too large to be scale efficient.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Other factors may also contribute to the differences between university-level and country-level observations.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The country level studies use relatively aggregated data, capturing some of the non-university research and development activities in the countries sampled.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Country level research effort is not necessarily subject to some of the constraints which may cause decreasing returns to scale in large universities, such as excessive bureaucracy.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Results may be arbitrarily influenced by differences in the available input and output metrics at the university versus country level.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Limitations to conclusions drawn&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;One limitation of this article is the small scope of the literature review. A more comprehensive review may reveal a different range of conclusions.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Another limitation is that the research outputs studied—published papers, citations, and patents, inter alia—cannot be assumed to correspond directly to incremental knowledge or productivity. This point is expanded upon under “Topics for further investigation” below.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Further limitations arise due to the DEA technique used by most of the studies in Table 1.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;DEA is sensitive to the choice of inputs and outputs, and to measurement errors.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;Statistical hypothesis tests are difficult within the DEA framework, making it more difficult to separate signal from noise in interpreting results.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;DEA identifies relative efficiency (composed of scale efficiency and also “pure technical efficiency”) within the sample, meaning that at least one country, university, or department is always identified as fully efficient (including exhibiting full scale efficiency or constant returns to scale). Of course, in practice, no university, organisation or production process is perfectly efficient. Therefore, conclusions drawn from DEA analysis are likely to be more informative for countries, universities, or departments that are not identified as fully efficient.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;It may be questionable whether such a framework—where an optimal scale of production exists, past which decreasing returns to scale are experienced—is a good reflection of the dynamics of research activities. However, the widespread use of the DEA framework in assessing research activities suggests that many practitioners consider it appropriate.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Topics for further investigation&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The scope of this article is limited to direct research outputs (such as published papers, citations, and patents). While this is valuable, stronger conclusions could be drawn if this analysis were combined with further work investigating the following:&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The impact of other sources of new knowledge apart from universities or official R&amp;amp;amp;D expenditure. For example, innovations in company management discovered through “learning by doing” rather than through formal research may be an important source of improvement in economic productivity.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The translation of research outputs (such as published papers, citations, and patents) into incremental knowledge, and the translation of incremental knowledge into extra productive capacity. Assessment of this may be achievable through consideration of the economic returns to research, or of the value of patents generated by research.&amp;lt;/span&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Implications for AI&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;The scope for an intelligence explosion is likely to be greater if the returns to scale in research are greater. In particular, an AI system capable of conducting research into the improvement of AI could be scaled up faster and more cheaply than human researchers can be trained, for example through deployment on additional hardware. In addition, in the period before any intelligence explosion, a scaling-up of AI research may be observed, especially if the resultant technology were seen to have commercial applications.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;span style=&amp;quot;font-weight: 400&amp;quot;&amp;gt;This review is one component of a larger project to quantitatively model an intelligence explosion. This project, in addition to drawing upon the conclusions in this article, will also consider, inter alia, the effect of intelligence on research productivity, and actual increases in artificial intelligence that are plausible from research efforts.&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Time for AI to cross the human performance range in ImageNet image classification</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/featured_articles/time_for_ai_to_cross_the_human_performance_range_in_imagenet_image_classification?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/featured_articles/time_for_ai_to_cross_the_human_performance_range_in_imagenet_image_classification?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="featured_articles" />
        <content>&lt;pre&gt;
@@ -1 +1,222 @@
+ ====== Time for AI to cross the human performance range in ImageNet image classification ======
+ 
+ // Published 19 October, 2020; last updated 08 March, 2021 //
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Progress in computer image classification performance took:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Over 14 years to reach the level of an untrained human&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;3 years to pass from untrained human level to trained human level&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;5 years to continue from trained human to current performance (2020)&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== Details =====
+ 
+ 
+ ==== Metric ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;ImageNet&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;&amp;amp;lt;strong&amp;amp;gt;ImageNet&amp;amp;lt;/strong&amp;amp;gt;&amp;amp;amp;nbsp;is an image database organized according to the&amp;amp;amp;nbsp;&amp;amp;lt;a rel=&amp;quot;noreferrer noopener&amp;quot; href=&amp;quot;http://wordnet.princeton.edu/&amp;quot; target=&amp;quot;_blank&amp;quot;&amp;amp;gt;WordNet&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. Currently we have an average of over five hundred images per node.&amp;amp;amp;nbsp;&amp;amp;amp;#8220;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;“ImageNet.” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;http://www.image-net.org/&amp;quot;&amp;amp;gt;http://www.image-net.org/&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; is a large collection of images organized into a hierarchy of noun categories. We looked at ‘top-5 accuracy’ in categorizing images. In this task, the player is given an image, and can guess five different categories that the image might represent. It is judged as correct if the image is in fact in any of those five categories.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
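The top-5 scoring rule described above can be made concrete with a short sketch (the image labels here are invented for illustration and are not from the article or from ImageNet itself): an image counts as correct if its true category appears anywhere among the five guesses.

```python
# Top-5 accuracy: a prediction counts as correct if the true category
# appears anywhere among the five guesses offered for that image.

def top5_accuracy(guesses, true_labels):
    """guesses: one list of (up to) five category guesses per image."""
    correct = sum(1 for g, t in zip(guesses, true_labels) if t in g[:5])
    return correct / len(true_labels)

# Hypothetical example: three images, five guesses each.
guesses = [
    ["cat", "dog", "fox", "wolf", "lynx"],
    ["car", "truck", "bus", "van", "tram"],
    ["oak", "pine", "elm", "fir", "ash"],
]
true_labels = ["lynx", "bicycle", "elm"]
print(top5_accuracy(guesses, true_labels))  # 2 of 3 images correct
```

Note that only one of the five guesses needs to match, which is why top-5 error rates (such as the 5.1% human figure below) are much lower than top-1 error rates on the same task.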
+ 
+ 
+ ==== Human performance milestones ====
+ 
+ 
+ === Beginner level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We used Andrej Karpathy’s interface&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-2683&amp;quot; title=&amp;#039;Karpathy, Andrej. “Ilsvrc.” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;quot;&amp;amp;gt;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; for doing the ImageNet top-5 accuracy task ourselves, and asked a few friends to do it. Five people did it; their performances ranged from 74% to 89%, with a median of 81%.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This was not a random sample of people, and conditions for taking the test differed. Most notably, there was no time limit, so time allocated was set by patience for trying to marginally improve guesses.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Trained human-level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;ImageNet categorization is not a popular activity for humans, so we do not know what highly talented and trained human performance would look like. The best relatively high human performance measure we have comes from Russakovsky et al, who report on performance of two ‘expert annotators’, who they say learned many of the categories. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-3-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-3-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8216;Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images&amp;amp;amp;#8217;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; The better performing annotator there had a 5.1% error rate.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-4-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-4-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;Annotator A1 evaluated a total of 1500 test set images. 
The GoogLeNet classification error on this sample was estimated to be 6.8% (recall that the error on full test set of 100,000 images is 6.7%, as shown in Table 7). The human error was estimated to be 5.1%.&amp;amp;amp;#8221;&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Also see Table 9&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== AI achievement of human milestones ====
+ 
+ 
+ === Earliest attempt ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The ImageNet database was released in 2009.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;They presented their database for the first time as a poster at the 2009&amp;amp;amp;nbsp;&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Conference_on_Computer_Vision_and_Pattern_Recognition&amp;quot;&amp;amp;gt;Conference on Computer Vision and Pattern Recognition&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;(CVPR) in Florida.&amp;amp;amp;#8221;&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;“ImageNet.” In &amp;amp;lt;em&amp;amp;gt;Wikipedia&amp;amp;lt;/em&amp;amp;gt;, September 9, 2020. &amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;amp;oldid=977585441&amp;quot;&amp;amp;gt;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;amp;oldid=977585441&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;lt;br&amp;amp;gt;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;. 
An annual contest, the ImageNet Large Scale Visual Recognition Challenge, began in 2010.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;&amp;amp;amp;#8230;The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;In the 2010 contest, the best top-5 classification performance had 28.2% error.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-7-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-7-2683&amp;quot; title=&amp;#039;See table 6.&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;7&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;However, image classification more broadly is older. Pascal VOC, a similar earlier contest, ran from 2005.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-8-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-8-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Everingham, Mark, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. “The Pascal Visual Object Classes (VOC) Challenge.” &amp;amp;lt;em&amp;amp;gt;International Journal of Computer Vision&amp;amp;lt;/em&amp;amp;gt; 88, no. 2 (June 2010): 303–38. &amp;amp;lt;a href=&amp;quot;https://doi.org/10.1007/s11263-009-0275-4&amp;quot;&amp;amp;gt;https://doi.org/10.1007/s11263-009-0275-4&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; We do not know when the first successful image classification systems were developed. 
In a blog post, Amidi &amp;amp;amp; Amidi point to LeNet as pioneering work in image classification&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-9-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-9-2683&amp;quot; title=&amp;#039;See section &amp;amp;amp;#8216;LeNet&amp;amp;amp;#8217;.&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;“The Evolution of Image Classification Explained.” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;quot;&amp;amp;gt;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;9&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;, and it appears to have been developed in 1998.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-10-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-10-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;&amp;amp;lt;strong&amp;amp;gt;LeNet&amp;amp;lt;/strong&amp;amp;gt;&amp;amp;amp;nbsp;is a&amp;amp;amp;nbsp;&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Convolutional_neural_network&amp;quot;&amp;amp;gt;convolutional neural network&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;structure proposed by&amp;amp;amp;nbsp;&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Yann_LeCun&amp;quot;&amp;amp;gt;Yann LeCun&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;et al. in 1998.&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;“LeNet.” In &amp;amp;lt;em&amp;amp;gt;Wikipedia&amp;amp;lt;/em&amp;amp;gt;, June 19, 2020. 
&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;amp;oldid=963418885&amp;quot;&amp;amp;gt;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;amp;oldid=963418885&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Beginner level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The first entrant in the ImageNet contest to perform better than our beginner level benchmark was SuperVision (commonly known as AlexNet) in 2012, with a 15.3% error rate.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-11-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-11-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;We also entered a variant of this model in the&amp;amp;lt;br&amp;amp;gt;ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. “ImageNet Classification with Deep Convolutional Neural Networks.” In &amp;amp;lt;em&amp;amp;gt;Advances in Neural Information Processing Systems 25&amp;amp;lt;/em&amp;amp;gt;, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, 1097–1105. Curran Associates, Inc., 2012. &amp;amp;lt;a href=&amp;quot;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;quot;&amp;amp;gt;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Also, see Table 6 for a list of other entrants: &amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;11&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Superhuman level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;In 2015, He et al. apparently achieved a 4.5% error rate, slightly better than our high human benchmark.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-12-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-12-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;Our 152-layer ResNet has a single-model top-5 validation error of 4.49%.&amp;amp;amp;#8221; &amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Also see Table 4&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep Residual Learning for Image Recognition.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1512.03385 [Cs]&amp;amp;lt;/em&amp;amp;gt;, December 10, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1512.03385&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1512.03385&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;12&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Current level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;According to paperswithcode.com, performance has continued to climb through 2020, though more slowly than before.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-13-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-13-2683&amp;quot; title=&amp;#039;See figure:&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;“Papers with Code &amp;amp;amp;#8211; ImageNet Benchmark (Image Classification).” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;quot;&amp;amp;gt;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;13&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Times for AI to cross human-relative ranges ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Given the above dates, we have:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;figure class=&amp;quot;wp-block-table&amp;quot;&amp;gt;
+ &amp;lt;table&amp;gt;
+ &amp;lt;tbody&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;Range&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;Start&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;End&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;Duration (years)&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;First attempt to beginner level&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;lt;1998&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2012&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;gt;14&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;Beginner to superhuman&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2012&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2015&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;Above superhuman&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2015&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;gt;2020&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;gt;5&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;/tbody&amp;gt;
+ &amp;lt;/table&amp;gt;
+ &amp;lt;/figure&amp;gt;
+ &amp;lt;/HTML&amp;gt;
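
The durations in the table above reduce to simple differences between the milestone years given in the preceding sections. A minimal sketch (variable names are ours; the comments mirror the &amp;amp;lt; and &amp;amp;gt; caveats in the table):

```python
# Milestone years taken from the sections above. 1998 (LeNet) is only an
# upper bound on the date of the earliest attempt, and 2020 is simply the
# most recent year examined, so the first and last durations are lower bounds.
milestones = {
    "first_attempt": 1998,  # LeNet, <= 1998
    "beginner": 2012,       # AlexNet passes the beginner (untrained human) benchmark
    "superhuman": 2015,     # ResNet passes the trained-human benchmark
    "current": 2020,        # performance still improving as of 2020
}

ranges = [
    ("First attempt to beginner level", "first_attempt", "beginner"),
    ("Beginner to superhuman", "beginner", "superhuman"),
    ("Above superhuman", "superhuman", "current"),
]

# Duration of each range in years (end year minus start year).
durations = {
    label: milestones[end] - milestones[start] for label, start, end in ranges
}

for label, years in durations.items():
    print(f"{label}: {years} years")
```

This reproduces the table's figures of &amp;amp;gt;14, 3, and &amp;amp;gt;5 years for the three ranges.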
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Primary author: Rick Korzekwa&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== Notes =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“&amp;lt;strong&amp;gt;ImageNet&amp;lt;/strong&amp;gt; is an image database organized according to the &amp;lt;a href=&amp;quot;http://wordnet.princeton.edu/&amp;quot; rel=&amp;quot;noreferrer noopener&amp;quot; target=&amp;quot;_blank&amp;quot;&amp;gt;WordNet&amp;lt;/a&amp;gt; hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. Currently we have an average of over five hundred images per node. “
+                   &amp;lt;p&amp;gt;“ImageNet.” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;http://www.image-net.org/&amp;quot;&amp;gt;http://www.image-net.org/&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Karpathy, Andrej. “Ilsvrc.” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;quot;&amp;gt;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-3-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;‘Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images’
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-3-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-4-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classification error on this sample was estimated to be 6.8% (recall that the error on full test set of 100,000 images is 6.7%, as shown in Table 7). The human error was estimated to be 5.1%.”&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Also see Table 9
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-4-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“They presented their database for the first time as a poster at the 2009 &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Conference_on_Computer_Vision_and_Pattern_Recognition&amp;quot;&amp;gt;Conference on Computer Vision and Pattern Recognition&amp;lt;/a&amp;gt; (CVPR) in Florida.”&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   “ImageNet.” In &amp;lt;em&amp;gt;Wikipedia&amp;lt;/em&amp;gt;, September 9, 2020. &amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;oldid=977585441&amp;quot;&amp;gt;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;oldid=977585441&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
+ &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-6-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“…The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.”
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-6-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-7-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;See table 6.
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-7-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-8-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.”
+                   &amp;lt;p&amp;gt;Everingham, Mark, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. “The Pascal Visual Object Classes (VOC) Challenge.” &amp;lt;em&amp;gt;International Journal of Computer Vision&amp;lt;/em&amp;gt; 88, no. 2 (June 2010): 303–38. &amp;lt;a href=&amp;quot;https://doi.org/10.1007/s11263-009-0275-4&amp;quot;&amp;gt;https://doi.org/10.1007/s11263-009-0275-4&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-8-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-9-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;See section ‘LeNet’.&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   “The Evolution of Image Classification Explained.” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;quot;&amp;gt;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-9-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-10-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“&amp;lt;strong&amp;gt;LeNet&amp;lt;/strong&amp;gt; is a &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Convolutional_neural_network&amp;quot;&amp;gt;convolutional neural network&amp;lt;/a&amp;gt; structure proposed by &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Yann_LeCun&amp;quot;&amp;gt;Yann LeCun&amp;lt;/a&amp;gt; et al. in 1998.”
+                   &amp;lt;p&amp;gt;“LeNet.” In &amp;lt;em&amp;gt;Wikipedia&amp;lt;/em&amp;gt;, June 19, 2020. &amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;oldid=963418885&amp;quot;&amp;gt;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;oldid=963418885&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-10-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-11-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“We also entered a variant of this model in the&amp;lt;br/&amp;gt;
+                   ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%”
+                   &amp;lt;p&amp;gt;Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. “ImageNet Classification with Deep Convolutional Neural Networks.” In &amp;lt;em&amp;gt;Advances in Neural Information Processing Systems 25&amp;lt;/em&amp;gt;, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, 1097–1105. Curran Associates, Inc., 2012. &amp;lt;a href=&amp;quot;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;quot;&amp;gt;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Also, see Table 6 for a list of other entrants:&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-11-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-12-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“Our 152-layer ResNet has a single-model top-5 validation error of 4.49%.”&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Also see Table 4&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep Residual Learning for Image Recognition.” &amp;lt;em&amp;gt;ArXiv:1512.03385 [Cs]&amp;lt;/em&amp;gt;, December 10, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1512.03385&amp;quot;&amp;gt;http://arxiv.org/abs/1512.03385&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-12-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-13-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;See figure:
+                   &amp;lt;p&amp;gt;“Papers with Code – ImageNet Benchmark (Image Classification).” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;quot;&amp;gt;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-13-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1 +1,222 @@
+ ====== Time for AI to cross the human performance range in ImageNet image classification ======
+ 
+ // Published 19 October, 2020; last updated 08 March, 2021 //
+ 
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Progress in computer image classification performance took:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ul&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;Over 14 years to reach the level of an untrained human&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;3 years to pass from untrained human level to trained human level&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;5 years to continue from trained human to current performance (2020)&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ul&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ 
+ ===== Details =====
+ 
+ 
+ ==== Metric ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;ImageNet&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;&amp;amp;lt;strong&amp;amp;gt;ImageNet&amp;amp;lt;/strong&amp;amp;gt;&amp;amp;amp;nbsp;is an image database organized according to the&amp;amp;amp;nbsp;&amp;amp;lt;a rel=&amp;quot;noreferrer noopener&amp;quot; href=&amp;quot;http://wordnet.princeton.edu/&amp;quot; target=&amp;quot;_blank&amp;quot;&amp;amp;gt;WordNet&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. Currently we have an average of over five hundred images per node.&amp;amp;amp;nbsp;&amp;amp;amp;#8220;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;“ImageNet.” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;http://www.image-net.org/&amp;quot;&amp;amp;gt;http://www.image-net.org/&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; is a large collection of images organized into a hierarchy of noun categories. We looked at ‘top-5 accuracy’ in categorizing images. In this task, the player is given an image, and can guess five different categories that the image might represent. It is judged as correct if the image is in fact in any of those five categories.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Human performance milestones ====
+ 
+ 
+ === Beginner level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;We used Andrej Karpathy’s interface&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-2-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-2-2683&amp;quot; title=&amp;#039;Karpathy, Andrej. “Ilsvrc.” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;quot;&amp;amp;gt;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; for doing the ImageNet top-5 accuracy task ourselves, and asked a few friends to do it. Five people did it, with performances ranging from 74% to 89%, with a median performance of 81%.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This was not a random sample of people, and conditions for taking the test differed. Most notably, there was no time limit, so time allocated was set by patience for trying to marginally improve guesses.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Trained human-level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;ImageNet categorization is not a popular activity for humans, so we do not know what highly talented and trained human performance would look like. The best relatively high human performance measure we have comes from Russakovsky et al, who report on performance of two ‘expert annotators’, who they say learned many of the categories. &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-3-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-3-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8216;Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images&amp;amp;amp;#8217;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; The better performing annotator there had a 5.1% error rate.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-4-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-4-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;Annotator A1 evaluated a total of 1500 test set images. 
The GoogLeNet classification error on this sample was estimated to be 6.8% (recall that the error on full test set of 100,000 images is 6.7%, as shown in Table 7). The human error was estimated to be 5.1%.&amp;amp;amp;#8221;&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Also see Table 9&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== AI achievement of human milestones ====
+ 
+ 
+ === Earliest attempt ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The ImageNet database was released in 2009.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-5-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-5-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;They presented their database for the first time as a poster at the 2009&amp;amp;amp;nbsp;&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Conference_on_Computer_Vision_and_Pattern_Recognition&amp;quot;&amp;amp;gt;Conference on Computer Vision and Pattern Recognition&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;(CVPR) in Florida.&amp;amp;amp;#8221;&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;“ImageNet.” In &amp;amp;lt;em&amp;amp;gt;Wikipedia&amp;amp;lt;/em&amp;amp;gt;, September 9, 2020. &amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;amp;oldid=977585441&amp;quot;&amp;amp;gt;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;amp;oldid=977585441&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;lt;br&amp;amp;gt;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; 
An annual contest, the ImageNet Large Scale Visual Recognition Challenge, began in 2010.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-6-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-6-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;&amp;amp;amp;#8230;The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;In the 2010 contest, the best top-5 classification performance had 28.2% error.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-7-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-7-2683&amp;quot; title=&amp;#039;See table 6.&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;7&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;However, image classification broadly is older. Pascal VOC, a similar earlier contest, ran from 2005.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-8-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-8-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Everingham, Mark, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. “The Pascal Visual Object Classes (VOC) Challenge.” &amp;amp;lt;em&amp;amp;gt;International Journal of Computer Vision&amp;amp;lt;/em&amp;amp;gt; 88, no. 2 (June 2010): 303–38. &amp;amp;lt;a href=&amp;quot;https://doi.org/10.1007/s11263-009-0275-4&amp;quot;&amp;amp;gt;https://doi.org/10.1007/s11263-009-0275-4&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt; We do not know when the first successful image classification systems were developed. 
In a blog post, Amidi &amp;amp;amp; Amidi point to LeNet as pioneering work in image classification&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-9-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-9-2683&amp;quot; title=&amp;#039;See section &amp;amp;amp;#8216;LeNet&amp;amp;amp;#8217;.&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;“The Evolution of Image Classification Explained.” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;quot;&amp;amp;gt;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;9&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;, and it appears to have been developed in 1998.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-10-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-10-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;&amp;amp;lt;strong&amp;amp;gt;LeNet&amp;amp;lt;/strong&amp;amp;gt;&amp;amp;amp;nbsp;is a&amp;amp;amp;nbsp;&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Convolutional_neural_network&amp;quot;&amp;amp;gt;convolutional neural network&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;structure proposed by&amp;amp;amp;nbsp;&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Yann_LeCun&amp;quot;&amp;amp;gt;Yann LeCun&amp;amp;lt;/a&amp;amp;gt;&amp;amp;amp;nbsp;et al. in 1998.&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;“LeNet.” In &amp;amp;lt;em&amp;amp;gt;Wikipedia&amp;amp;lt;/em&amp;amp;gt;, June 19, 2020. 
&amp;amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;amp;oldid=963418885&amp;quot;&amp;amp;gt;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;amp;oldid=963418885&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Beginner level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;The first entrant in the ImageNet contest to perform better than our beginner level benchmark was SuperVision (commonly known as AlexNet) in 2012, with a 15.3% error rate.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-11-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-11-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;We also entered a variant of this model in the&amp;amp;lt;br&amp;amp;gt;ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%&amp;amp;amp;#8221;&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. “ImageNet Classification with Deep Convolutional Neural Networks.” In &amp;amp;lt;em&amp;amp;gt;Advances in Neural Information Processing Systems 25&amp;amp;lt;/em&amp;amp;gt;, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, 1097–1105. Curran Associates, Inc., 2012. &amp;amp;lt;a href=&amp;quot;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;quot;&amp;amp;gt;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;amp;lt;/a&amp;amp;gt;.&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Also, see Table 6 for a list of other entrants: &amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1409.0575 [Cs]&amp;amp;lt;/em&amp;amp;gt;, January 29, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1409.0575&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;11&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Superhuman level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;In 2015, He et al. apparently achieved a 4.5% error rate, slightly better than our high human benchmark.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-12-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-12-2683&amp;quot; title=&amp;#039;&amp;amp;amp;#8220;Our 152-layer ResNet has a single-model top-5 validation error of 4.49%.&amp;amp;amp;#8221; &amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;Also see Table 4&amp;amp;lt;br&amp;amp;gt;&amp;amp;lt;br&amp;amp;gt;He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep Residual Learning for Image Recognition.” &amp;amp;lt;em&amp;amp;gt;ArXiv:1512.03385 [Cs]&amp;amp;lt;/em&amp;amp;gt;, December 10, 2015. &amp;amp;lt;a href=&amp;quot;http://arxiv.org/abs/1512.03385&amp;quot;&amp;amp;gt;http://arxiv.org/abs/1512.03385&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;12&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ === Current level ===
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;According to paperswithcode.com, performance continued to climb through 2020, though more slowly than in earlier years.&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-13-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-13-2683&amp;quot; title=&amp;#039;See figure:&amp;amp;lt;/p&amp;amp;gt; &amp;amp;lt;p&amp;amp;gt;“Papers with Code &amp;amp;amp;#8211; ImageNet Benchmark (Image Classification).” Accessed October 19, 2020. &amp;amp;lt;a href=&amp;quot;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;quot;&amp;amp;gt;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;amp;lt;/a&amp;amp;gt;.&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;13&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ==== Times for AI to cross human-relative ranges ====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;Given the above dates, we have:&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;figure class=&amp;quot;wp-block-table&amp;quot;&amp;gt;
+ &amp;lt;table&amp;gt;
+ &amp;lt;tbody&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;Range&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;Start&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;End&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;Duration (years)&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;First attempt to beginner level&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;lt;1998&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2012&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;gt;14&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;Beginner to superhuman&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2012&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2015&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;tr&amp;gt;
+ &amp;lt;td&amp;gt;Above superhuman&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;2015&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;gt;2020&amp;lt;/td&amp;gt;
+ &amp;lt;td&amp;gt;&amp;amp;gt;5&amp;lt;/td&amp;gt;
+ &amp;lt;/tr&amp;gt;
+ &amp;lt;/tbody&amp;gt;
+ &amp;lt;/table&amp;gt;
+ &amp;lt;/figure&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Primary author: Rick Korzekwa&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== Notes =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol class=&amp;quot;easy-footnotes-wrapper&amp;quot;&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-1-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“&amp;lt;strong&amp;gt;ImageNet&amp;lt;/strong&amp;gt; is an image database organized according to the &amp;lt;a href=&amp;quot;http://wordnet.princeton.edu/&amp;quot; rel=&amp;quot;noreferrer noopener&amp;quot; target=&amp;quot;_blank&amp;quot;&amp;gt;WordNet&amp;lt;/a&amp;gt; hierarchy (currently only the nouns), in which each node of the hierarchy is depicted by hundreds and thousands of images. Currently we have an average of over five hundred images per node.”
+                   &amp;lt;p&amp;gt;“ImageNet.” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;http://www.image-net.org/&amp;quot;&amp;gt;http://www.image-net.org/&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-1-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-2-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;Karpathy, Andrej. “Ilsvrc.” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;quot;&amp;gt;https://cs.stanford.edu/people/karpathy/ilsvrc/&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-2-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-3-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;‘Therefore, in evaluating the human accuracy we relied primarily on expert annotators who learned to recognize a large portion of the 1000 ILSVRC classes. During training, the annotators labeled a few hundred validation images for practice and later switched to the test set images’
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-3-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-4-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“Annotator A1 evaluated a total of 1500 test set images. The GoogLeNet classication error on this sample was estimated to be 6.8% (recall that the error on full test set of 100,000 images is 6.7%, as shown in Table 7). The human error was estimated to be 5.1%.”&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Also see Table 9
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-4-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-5-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“They presented their database for the first time as a poster at the 2009 &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Conference_on_Computer_Vision_and_Pattern_Recognition&amp;quot;&amp;gt;Conference on Computer Vision and Pattern Recognition&amp;lt;/a&amp;gt; (CVPR) in Florida.”&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   “ImageNet.” In &amp;lt;em&amp;gt;Wikipedia&amp;lt;/em&amp;gt;, September 9, 2020. &amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;oldid=977585441&amp;quot;&amp;gt;https://en.wikipedia.org/w/index.php?title=ImageNet&amp;amp;amp;oldid=977585441&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
+ &amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-5-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-6-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“…The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been running annually for five years (since 2010) and has become the standard benchmark for large-scale object recognition.”
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-6-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-7-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;See table 6.
+                   &amp;lt;p&amp;gt;Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-7-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-8-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“The PASCAL Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection.”
+                   &amp;lt;p&amp;gt;Everingham, Mark, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. “The Pascal Visual Object Classes (VOC) Challenge.” &amp;lt;em&amp;gt;International Journal of Computer Vision&amp;lt;/em&amp;gt; 88, no. 2 (June 2010): 303–38. &amp;lt;a href=&amp;quot;https://doi.org/10.1007/s11263-009-0275-4&amp;quot;&amp;gt;https://doi.org/10.1007/s11263-009-0275-4&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-8-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-9-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;See section ‘LeNet’.&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   “The Evolution of Image Classification Explained.” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;quot;&amp;gt;https://stanford.edu/~shervine/blog/evolution-image-classification-explained#lenet&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-9-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-10-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“&amp;lt;strong&amp;gt;LeNet&amp;lt;/strong&amp;gt; is a &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Convolutional_neural_network&amp;quot;&amp;gt;convolutional neural network&amp;lt;/a&amp;gt; structure proposed by &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/Yann_LeCun&amp;quot;&amp;gt;Yann LeCun&amp;lt;/a&amp;gt; et al. in 1998.”
+                   &amp;lt;p&amp;gt;“LeNet.” In &amp;lt;em&amp;gt;Wikipedia&amp;lt;/em&amp;gt;, June 19, 2020. &amp;lt;a href=&amp;quot;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;oldid=963418885&amp;quot;&amp;gt;https://en.wikipedia.org/w/index.php?title=LeNet&amp;amp;amp;oldid=963418885&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-10-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-11-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“We also entered a variant of this model in the&amp;lt;br/&amp;gt;
+                   ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%”
+                   &amp;lt;p&amp;gt;Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E Hinton. “ImageNet Classification with Deep Convolutional Neural Networks.” In &amp;lt;em&amp;gt;Advances in Neural Information Processing Systems 25&amp;lt;/em&amp;gt;, edited by F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, 1097–1105. Curran Associates, Inc., 2012. &amp;lt;a href=&amp;quot;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;quot;&amp;gt;http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf&amp;lt;/a&amp;gt;.&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Also, see Table 6 for a list of other entrants:&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Russakovsky, Olga, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, et al. “ImageNet Large Scale Visual Recognition Challenge.” &amp;lt;em&amp;gt;ArXiv:1409.0575 [Cs]&amp;lt;/em&amp;gt;, January 29, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1409.0575&amp;quot;&amp;gt;http://arxiv.org/abs/1409.0575&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-11-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-12-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;“Our 152-layer ResNet has a single-model top-5 validation error of 4.49%.”&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   Also see Table 4&amp;lt;br/&amp;gt;
+ &amp;lt;br/&amp;gt;
+                   He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. “Deep Residual Learning for Image Recognition.” &amp;lt;em&amp;gt;ArXiv:1512.03385 [Cs]&amp;lt;/em&amp;gt;, December 10, 2015. &amp;lt;a href=&amp;quot;http://arxiv.org/abs/1512.03385&amp;quot;&amp;gt;http://arxiv.org/abs/1512.03385&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-12-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-bottom-13-2683&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;See figure:
+                   &amp;lt;p&amp;gt;“Papers with Code – ImageNet Benchmark (Image Classification).” Accessed October 19, 2020. &amp;lt;a href=&amp;quot;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;quot;&amp;gt;https://paperswithcode.com/sota/image-classification-on-imagenet&amp;lt;/a&amp;gt;.&amp;lt;a class=&amp;quot;easy-footnote-to-top&amp;quot; href=&amp;quot;#easy-footnote-13-2683&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</summary>
    </entry>
</feed>
