<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.aiimpacts.org/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>AI Impacts Wiki uncategorized</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/"/>
    <id>https://wiki.aiimpacts.org/</id>
    <updated>2026-04-29T16:14:51+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://wiki.aiimpacts.org/feed.php" />
    <entry>
        <title>AI labs' statements on governance</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/uncategorized/ai_labs_statements_on_governance?rev=1689361410&amp;do=diff"/>
        <published>2023-07-14T19:03:30+00:00</published>
        <updated>2023-07-14T19:03:30+00:00</updated>
        <id>https://wiki.aiimpacts.org/uncategorized/ai_labs_statements_on_governance?rev=1689361410&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="uncategorized" />
        <content>&lt;pre&gt;
@@ -354,4 +354,6 @@
  &amp;gt;
  &amp;gt;My personal view is that this is such a big thing in its fullness of time. I think it&amp;#039;s bigger than any one corporation or even one nation. I think it needs international cooperation. I&amp;#039;ve often talked in the past about a CERN-like effort for A.G.I., and I quite like to see something like that as we get closer, maybe in many years from now, to an A.G.I. system, where really careful research is done on the safety side of things, understanding what these systems can do, and maybe testing them in controlled conditions, like simulations or games first, like sandboxes, very robust sandboxes with lots of cybersecurity protection around them. I think that would be a good way forward as we get closer towards human-level A.I. systems.
  
  
+ ----
+ //Primary Author: Zach Stein-Perlman//

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Surveys of experts on levels of AI Risk</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/uncategorized/ai_risk_surveys?rev=1723491279&amp;do=diff"/>
        <published>2024-08-12T19:34:39+00:00</published>
        <updated>2024-08-12T19:34:39+00:00</updated>
        <id>https://wiki.aiimpacts.org/uncategorized/ai_risk_surveys?rev=1723491279&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="uncategorized" />
        <content>&lt;pre&gt;
@@ -2,21 +2,19 @@
  EDITOR COMMENTS:
  
  -Harlan: we need to add details about the 2023 survey here, and the generation lab thing
  */
- ====== AI Risk Surveys ======
+ ====== Surveys of experts on levels of AI Risk ======
  
  //Published 9 May 2023; last updated 23 May 2023//
  
  //This page is being updated, and may be low quality.//
  
  We know of six surveys of AI experts and two surveys of AI safety/governance experts on risks from advanced AI.
  
- ===== Details =====
+ ===== Surveys of AI experts =====
  
- ==== Surveys of AI experts ====
- 
- === 2016 Expert Survey on Progress in AI ===
+ ==== 2016 Expert Survey on Progress in AI ====
  
  //(Main article: [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2016_expert_survey_on_progress_in_ai|2016 Expert Survey on Progress in AI]])//
  
  Paper:  [[https://jair.org/index.php/jair/article/view/11222|When Will AI Exceed Human Performance? Evidence from AI Experts]] (Grace et al. 2016, published 2018)
@@ -46,16 +44,24 @@
  
      * Population: authors of papers at ICML or NeurIPS 2015
        * The survey was sent to 1634 people and received 352 responses.
  
- === Zhang et al 2019 ===
+ ==== Zhang et al. 2019 ====
  
    * [[https://arxiv.org/pdf/2206.04132.pdf#page=24|Forecasting AI Progress: Evidence from a Survey of Machine Learning Researchers]] (Zhang et al. 2019, published 2022)
      * Long-run impact of high-level machine intelligence
        * &amp;quot;Our 2019 survey respondents appeared optimistic about how advances in AI/ML will impact humanity. They predicted that HLMI will be net positive for humanity, with the expected value between &amp;#039;on balance good&amp;#039; and neutral. The median AI/ML researcher ascribed a probability of 20% that the long-run impact of HLMI on humanity would be &amp;#039;extremely good (e.g., rapid growth in human flourishing)&amp;#039;, 27% that it would be &amp;#039;on balance good&amp;#039;, 16% that it would be &amp;#039;more or less neutral&amp;#039;, and 10% that it would be &amp;#039;on balance bad&amp;#039;. The median respondent placed a 2% probability on HLMI being [] &amp;#039;extremely bad (e.g., human extinction)&amp;#039;.&amp;quot;
      * Population: authors of papers at ICML or NeurIPS 2018
        * The survey was sent to 2652 people and received 524 responses.
-   * [[https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 Expert Survey on Progress in AI]] (Grace et al. 2022, publication forthcoming in 2023)
+ 
+ ==== 2022 Expert Survey on Progress in AI ====
+ 
+ 
+ //(Main article: [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 Expert Survey on Progress in AI]])//
+ 
+   * [[https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 Expert Survey on Progress in AI]] (Grace et al. 2022)
+ 
+ 
      * Extinction
      * &amp;quot;What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?&amp;quot;
          * Median 5%; 44% at least 10%
        * &amp;quot;What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?&amp;quot;
@@ -79,16 +85,25 @@
        * Respondents were presented with a definition of &amp;quot;AI safety research,&amp;quot; then asked &amp;quot;How much should society prioritize **AI safety research**, relative to how much it is currently prioritized?&amp;quot;
          * 2% &amp;quot;much less&amp;quot;; 9% &amp;quot;less&amp;quot;; 20% &amp;quot;about the same as it is now&amp;quot;; 35% &amp;quot;more&amp;quot;; 33% &amp;quot;much more&amp;quot;
    * Population: authors of papers at ICML or NeurIPS 2021
      * The survey was sent to &amp;quot;approximately 4271&amp;quot; people and received 738 responses.
+ 
+ ==== Michael et al. 2022 ====
+ 
    * [[https://arxiv.org/pdf/2208.12852.pdf#page=10|What Do NLP Researchers Believe? Results of the NLP Community Metasurvey]] (Michael et al. 2022)
      * Nuclear-level catastrophe
        * &amp;quot;**AI decisions could cause nuclear-level catastrophe.** It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.&amp;quot;
          * 36% agree; 64% disagree
      * Population: &amp;quot;researchers who publish at computational linguistics conferences.&amp;quot; See pp. 3–4 for details.
      * &amp;quot;We compute that 6323 people [published at least two papers at computational linguistics conferences] during the survey period according to publication data in the ACL Anthology, meaning we have survey responses from about 5% of the total.&amp;quot;
+ 
+ ==== Generation Lab 2023 ====
+ 
    * [[https://www.generationlab.org/axios-generationlab-syracuse|AI EXPERT SURVEY (n=216 computer science professors)]] (Generation Lab, 2023)
+ 
+ ==== 2023 Expert Survey on Progress in AI ====
+ 
    * [[https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai|2023 Expert Survey on Progress in AI]] (Grace et al. 2023)
  
  === Not currently included on this list ===
    * The informal [[https://wiki.aiimpacts.org/doku.php?id=ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:kruel_ai_interviews|Alexander Kruel interviews]] from 2011–2012.

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>AI Safety Arguments Affected by Chaos</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/uncategorized/ai_safety_arguments_affected_by_chaos?rev=1681338758&amp;do=diff"/>
        <published>2023-04-12T22:32:38+00:00</published>
        <updated>2023-04-12T22:32:38+00:00</updated>
        <id>https://wiki.aiimpacts.org/uncategorized/ai_safety_arguments_affected_by_chaos?rev=1681338758&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="uncategorized" />
        <content>&lt;pre&gt;
@@ -3,5 +3,7 @@
  //Created 31 March, 2023. Last updated 31 March, 2023.//
+ 
+ //This page is under review and may be updated soon.//
  
  Chaos theory allows us to show that some predictions cannot be reliably made, even with arbitrarily great intelligence. Some things about human brains seem to be in that category, which affects how advanced AI might interact with humans.
  
  ===== Details =====

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Cognitive capabilities of insects</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/uncategorized/bugs_cognitive_capabilities?rev=1690146657&amp;do=diff"/>
        <published>2023-07-23T21:10:57+00:00</published>
        <updated>2023-07-23T21:10:57+00:00</updated>
        <id>https://wiki.aiimpacts.org/uncategorized/bugs_cognitive_capabilities?rev=1690146657&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="uncategorized" />
        <content>&lt;pre&gt;
@@ -171,6 +171,6 @@
  Despite the ants&amp;#039; apparent flexibility in object use, none of these studies demonstrate that ants have an understanding of these instruments as tools. It seems fairly likely, for instance, that this flexibility stems from a combination of several hardwired cues such as softness of material, viscosity of liquid, etc., that enable them to choose objects well suited to the task at hand.
  
  //Primary author: Aysja Johnson//
  
- ==== Notes ====
+ ===== Notes =====
  

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Capabilities of state-of-the-art AI, 2024</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/uncategorized/capabilities_of_sota_ai?rev=1733867458&amp;do=diff"/>
        <published>2024-12-10T21:50:58+00:00</published>
        <updated>2024-12-10T21:50:58+00:00</updated>
        <id>https://wiki.aiimpacts.org/uncategorized/capabilities_of_sota_ai?rev=1733867458&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="uncategorized" />
        <content>&lt;pre&gt;
@@ -1,4 +1,18 @@
+ /*Editor&amp;#039;s note:
+ Some things to add to this page, if someone wants to update it at some point:
+ -GPT-4o advanced voice mode
+ -GDM GenCast SOTA weather forecasting
+ -Sora
+ -o1 reasoning abilities
+ -Genie 2 and GameNGen
+ -Hacking milestone from Google&amp;#039;s Big Sleep
+ -Forecasting capabilities https://arxiv.org/abs/2409.19839
+ -METR&amp;#039;s report on automating AI R&amp;amp;D https://x.com/METR_Evals/status/1860061711849652378
+ -Evaluating Neuroscience results https://medicalxpress.com/news/2024-11-ai-neuroscience-results-human-experts.html
+ -Math https://x.com/robertghrist/status/1841462507543949581?t=5zV3VpQI0mbrSU9_QRtfkQ&amp;amp;s=19
+ */
+ 
  ====== Capabilities of state-of-the-art AI, 2024 ======
  
  This is a list of some noteworthy capabilities of current state-of-the-art AI in various categories. Last updated 24 January 2024.
  

&lt;/pre&gt;</content>
    </entry>
    <entry>
        <title>Examples of AI systems producing unconventional solutions</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/uncategorized/examples_of_ai_systems_producing_unconventional_solutions?rev=1663745861&amp;do=diff"/>
        <published>2022-09-21T07:37:41+00:00</published>
        <updated>2022-09-21T07:37:41+00:00</updated>
        <id>https://wiki.aiimpacts.org/uncategorized/examples_of_ai_systems_producing_unconventional_solutions?rev=1663745861&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="uncategorized" />
        <content>&lt;pre&gt;
@@ -1 +1,31 @@
+ ====== Examples of AI systems producing unconventional solutions ======
+ 
+ // Published 11 February 2018 //
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;p&amp;gt;This page lists examples of AI systems producing solutions of an unexpected nature, whether due to goal misspecification or successful optimization.  This list is highly incomplete.&amp;lt;/p&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
+ ===== List =====
+ 
+ 
+ &amp;lt;HTML&amp;gt;
+ &amp;lt;ol&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;https://blog.openai.com/faulty-reward-functions/&amp;quot;&amp;gt;CoastRunners’ burning boat&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;https://www.damninteresting.com/on-the-origin-of-circuits/&amp;quot;&amp;gt;Incomprehensible evolved logic gates&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol&amp;quot;&amp;gt;AlphaGo’s inhuman moves&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;
+ &amp;lt;a href=&amp;quot;https://www.usatoday.com/story/tech/news/2017/12/07/california-fires-navigation-apps-like-waze-sent-commuters-into-flames-drivers/930904001/&amp;quot;&amp;gt;Waze directions into fires&amp;lt;/a&amp;gt;
+ &amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
+ &amp;lt;/ol&amp;gt;
+ &amp;lt;/HTML&amp;gt;
+ 
+ 
  

&lt;/pre&gt;</content>
    </entry>
</feed>
