<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://wiki.aiimpacts.org/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>AI Impacts Wiki</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/"/>
    <id>https://wiki.aiimpacts.org/</id>
    <updated>2026-04-21T08:55:37+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://wiki.aiimpacts.org/feed.php" />
    <entry>
        <title>2023 Expert Survey on Progress in AI</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai?rev=1753813356&amp;do=diff"/>
        <published>2025-07-29T18:22:36+00:00</published>
        <updated>2025-07-29T18:22:36+00:00</updated>
        <id>https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai?rev=1753813356&amp;do=diff</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys" />
        <content>&lt;pre&gt;
@@ -27,9 +27,9 @@
  We collected the names of authors who published in 2022 at a selection of top-tier machine learning conferences (NeurIPS, ICML, ICLR, AAAI, JMLR, and IJCAI), and were able to find email addresses for 20,066 (92%) of them. The resulting list of emails was put into a random order based on a random unique number assigned using Google Sheets&amp;#039; &amp;quot;Randomize range&amp;quot; feature. The first 1,003 emails from the randomly ordered list (about 5% of the total) were assigned to a pilot study group that would receive payment for participating, and the second 1,003 emails from the list were assigned to a pilot study group that would not receive payment for participating. The remainder of the emails were assigned to the main survey group.
  
  The pilot study took place from October 11 to October 15, 2023. Based on the response rates in the paid group versus the unpaid group, we decided to offer payment to all survey participants. A $50 reward was issued through a third-party service. Depending on a participant&amp;#039;s country (as determined by IP address), participants could use the third-party service to choose among a gift card, a pre-paid Mastercard, and a donation to one of 15 charities.
  
- On October 15, the survey was sent to the main survey group. The survey remained open until October 24, 2023. Out of the 20,066 emails we contacted, 1,607 (8%) bounced or failed, leaving 18,459 functioning email addresses. We received 2,778 responses, for a response rate of 15%. 95% of these responses were deemed ‘finished’ by Qualtrics.
+ On October 15, the survey was sent to the main survey group. The survey remained open until October 24, 2023. Out of the 20,066 emails we contacted, 1,607 (8%) bounced or failed, leaving 18,459 functioning email addresses. We received 2,778 responses (where the person answered at least one question), for a response rate of 15%. Of these responses, 95% were deemed ‘finished’ by Qualtrics, which appears to correspond to the participant having seen the last question.
  
  === Changes from 2016 and 2022 ESPAI surveys ===
  
  These are some notable differences from the [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 Expert Survey on Progress in AI]]:

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -27,9 +27,9 @@
  We collected the names of authors who published in 2022 at a selection of top-tier machine learning conferences (NeurIPS, ICML, ICLR, AAAI, JMLR, and IJCAI), and were able to find email addresses for 20,066 (92%) of them. The resulting list of emails was put into a random order based on a random unique number assigned using Google Sheets&amp;#039; &amp;quot;Randomize range&amp;quot; feature. The first 1,003 emails from the randomly ordered list (about 5% of the total) were assigned to a pilot study group that would receive payment for participating, and the second 1,003 emails from the list were assigned to a pilot study group that would not receive payment for participating. The remainder of the emails were assigned to the main survey group.
  
  The pilot study took place from October 11 to October 15, 2023. Based on the response rates in the paid group versus the unpaid group, we decided to offer payment to all survey participants. A $50 reward was issued through a third-party service. Depending on a participant&amp;#039;s country (as determined by IP address), participants could use the third-party service to choose among a gift card, a pre-paid Mastercard, and a donation to one of 15 charities.
  
- On October 15, the survey was sent to the main survey group. The survey remained open until October 24, 2023. Out of the 20,066 emails we contacted, 1,607 (8%) bounced or failed, leaving 18,459 functioning email addresses. We received 2,778 responses, for a response rate of 15%. 95% of these responses were deemed ‘finished’ by Qualtrics.
+ On October 15, the survey was sent to the main survey group. The survey remained open until October 24, 2023. Out of the 20,066 emails we contacted, 1,607 (8%) bounced or failed, leaving 18,459 functioning email addresses. We received 2,778 responses (where the person answered at least one question), for a response rate of 15%. Of these responses, 95% were deemed ‘finished’ by Qualtrics, which appears to correspond to the participant having seen the last question.
  
  === Changes from 2016 and 2022 ESPAI surveys ===
  
  These are some notable differences from the [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2022_expert_survey_on_progress_in_ai|2022 Expert Survey on Progress in AI]]:

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Capabilities of state-of-the-art AI, 2024</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/uncategorized/capabilities_of_sota_ai?rev=1733867458&amp;do=diff"/>
        <published>2024-12-10T21:50:58+00:00</published>
        <updated>2024-12-10T21:50:58+00:00</updated>
        <id>https://wiki.aiimpacts.org/uncategorized/capabilities_of_sota_ai?rev=1733867458&amp;do=diff</id>
        <author>
            <name>harlanstewart</name>
            <email>harlanstewart@undisclosed.example.com</email>
        </author>
        <category  term="uncategorized" />
        <content>&lt;pre&gt;
@@ -1,4 +1,18 @@
+ /*Editor&amp;#039;s note:
+ Some things to add to this page, if someone wants to update it at some point:
+ -GPT-4o advanced voice mode
+ -GDM GenCast SOTA weather forecasting
+ -Sora
+ -o1 reasoning abilities
+ -Genie 2 and GameNGen
+ -Hacking milestone from Google&amp;#039;s Big Sleep
+ -Forecasting capabilities https://arxiv.org/abs/2409.19839
+ -METR&amp;#039;s report on automating AI R&amp;amp;D https://x.com/METR_Evals/status/1860061711849652378
+ -Evaluating Neuroscience results https://medicalxpress.com/news/2024-11-ai-neuroscience-results-human-experts.html
+ -Math https://x.com/robertghrist/status/1841462507543949581?t=5zV3VpQI0mbrSU9_QRtfkQ&amp;amp;s=19
+ */
+ 
  ====== Capabilities of state-of-the-art AI, 2024 ======
  
  This is a list of some noteworthy capabilities of current state-of-the-art AI in various categories. Last updated 1/24/2024
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -1,4 +1,18 @@
+ /*Editor&amp;#039;s note:
+ Some things to add to this page, if someone wants to update it at some point:
+ -GPT-4o advanced voice mode
+ -GDM GenCast SOTA weather forecasting
+ -Sora
+ -o1 reasoning abilities
+ -Genie 2 and GameNGen
+ -Hacking milestone from Google&amp;#039;s Big Sleep
+ -Forecasting capabilities https://arxiv.org/abs/2409.19839
+ -METR&amp;#039;s report on automating AI R&amp;amp;D https://x.com/METR_Evals/status/1860061711849652378
+ -Evaluating Neuroscience results https://medicalxpress.com/news/2024-11-ai-neuroscience-results-human-experts.html
+ -Math https://x.com/robertghrist/status/1841462507543949581?t=5zV3VpQI0mbrSU9_QRtfkQ&amp;amp;s=19
+ */
+ 
  ====== Capabilities of state-of-the-art AI, 2024 ======
  
  This is a list of some noteworthy capabilities of current state-of-the-art AI in various categories. Last updated 1/24/2024
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>How much computing capacity exists in GPUs and TPUs in Q1 2023?</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/ai_timelines/hardware_and_ai_timelines/computing_capacity_of_all_gpus_and_tpus?rev=1733866423&amp;do=diff"/>
        <published>2024-12-10T21:33:43+00:00</published>
        <updated>2024-12-10T21:33:43+00:00</updated>
        <id>https://wiki.aiimpacts.org/ai_timelines/hardware_and_ai_timelines/computing_capacity_of_all_gpus_and_tpus?rev=1733866423&amp;do=diff</id>
        <author>
            <name>harlanstewart</name>
            <email>harlanstewart@undisclosed.example.com</email>
        </author>
        <category  term="ai_timelines:hardware_and_ai_timelines" />
        <content>&lt;pre&gt;
@@ -4,8 +4,10 @@
  */
  ====== How much computing capacity exists in GPUs and TPUs in Q1 2023? ======
  
  //Published 3 April 2023, last updated 3 April 2023//
+ 
+ (**Dec 10 2024 Update:** This analysis did not consider typical versus maximum performance in computing hardware. The data and figures presented here are likely based on maximum performance.)
  
  A back-of-the-envelope calculation based on market size, price-performance, hardware lifespan estimates, and the sizes of Google’s data centers estimates that there is around 3.98 * 10^21 FLOP/s of computing capacity on GPUs and TPUs as of Q1 2023.
  
  ===== Details =====

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -4,8 +4,10 @@
  */
  ====== How much computing capacity exists in GPUs and TPUs in Q1 2023? ======
  
  //Published 3 April 2023, last updated 3 April 2023//
+ 
+ (**Dec 10 2024 Update:** This analysis did not consider typical versus maximum performance in computing hardware. The data and figures presented here are likely based on maximum performance.)
  
  A back-of-the-envelope calculation based on market size, price-performance, hardware lifespan estimates, and the sizes of Google’s data centers estimates that there is around 3.98 * 10^21 FLOP/s of computing capacity on GPUs and TPUs as of Q1 2023.
  
  ===== Details =====

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Argument for AI x-risk from competent non-aligned agents - [Argument] </title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_competent_non-aligned_agents/start?rev=1727368270&amp;do=diff"/>
        <published>2024-09-26T16:31:10+00:00</published>
        <updated>2024-09-26T16:31:10+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_competent_non-aligned_agents/start?rev=1727368270&amp;do=diff</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:argument_for_ai_x-risk_from_competent_non-aligned_agents" />
        <content>&lt;pre&gt;
@@ -12,9 +12,9 @@
  
  **Assumptions**:
  
    - **Superhuman AI**: humanity will at some point develop AI systems at least as capable as any human at approximately all tasks, and substantially better at some tasks—call this ‘superhuman AI’. //(Main article: [[will_superhuman_ai_be_created/start|Will superhuman AI be created?]])//
-   - **Inaction**: no further special action will be taken to mitigate existential risk from superhuman AI systems. (This argument is about the default scenario without such efforts, because it is intended to inform decisions about applying these efforts, not because such efforts are unlikely.)
+   - **Inaction**: no further special action will be taken to mitigate existential risk from superhuman AI systems. (That is, this argument is about the default scenario without such efforts. This is because the argument is intended to inform decisions about applying these efforts, not because such efforts are unlikely.)
  
  ==== I. If superhuman AI is developed, then at least some superhuman AI systems are likely to be goal-directed ====
  
  //(Main article: [[arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:argument_for_ai_x-risk_from_competent_non-aligned_agents:will_advanced_ai_be_agentic:start|Will advanced AI be agentic?]])//

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -12,9 +12,9 @@
  
  **Assumptions**:
  
    - **Superhuman AI**: humanity will at some point develop AI systems at least as capable as any human at approximately all tasks, and substantially better at some tasks—call this ‘superhuman AI’. //(Main article: [[will_superhuman_ai_be_created/start|Will superhuman AI be created?]])//
-   - **Inaction**: no further special action will be taken to mitigate existential risk from superhuman AI systems. (This argument is about the default scenario without such efforts, because it is intended to inform decisions about applying these efforts, not because such efforts are unlikely.)
+   - **Inaction**: no further special action will be taken to mitigate existential risk from superhuman AI systems. (That is, this argument is about the default scenario without such efforts. This is because the argument is intended to inform decisions about applying these efforts, not because such efforts are unlikely.)
  
  ==== I. If superhuman AI is developed, then at least some superhuman AI systems are likely to be goal-directed ====
  
  //(Main article: [[arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:argument_for_ai_x-risk_from_competent_non-aligned_agents:will_advanced_ai_be_agentic:start|Will advanced AI be agentic?]])//

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Will Superhuman AI be created?</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/will_superhuman_ai_be_created/start?rev=1726523766&amp;do=diff"/>
        <published>2024-09-16T21:56:06+00:00</published>
        <updated>2024-09-16T21:56:06+00:00</updated>
        <id>https://wiki.aiimpacts.org/will_superhuman_ai_be_created/start?rev=1726523766&amp;do=diff</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="will_superhuman_ai_be_created" />
        <content>&lt;pre&gt;
@@ -25,9 +25,9 @@
  
  ==== Arguments ====
  
  
- === A. Superhuman AI is very likely to be physically possible ===
+ === A. Superhuman AI is very likely to be physically feasible at some point in time ===
  
  
  == 1. Human brains prove that it is physically possible to create human-level intelligence ==
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -25,9 +25,9 @@
  
  ==== Arguments ====
  
  
- === A. Superhuman AI is very likely to be physically possible ===
+ === A. Superhuman AI is very likely to be physically feasible at some point in time ===
  
  
  == 1. Human brains prove that it is physically possible to create human-level intelligence ==
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>Will advanced AI be agentic?</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_competent_non-aligned_agents/will_advanced_ai_be_agentic/start?rev=1726502072&amp;do=diff"/>
        <published>2024-09-16T15:54:32+00:00</published>
        <updated>2024-09-16T15:54:32+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_competent_non-aligned_agents/will_advanced_ai_be_agentic/start?rev=1726502072&amp;do=diff</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:argument_for_ai_x-risk_from_competent_non-aligned_agents:will_advanced_ai_be_agentic" />
        <content>&lt;pre&gt;
@@ -5,6 +5,6 @@
  Reasons to expect that some superhuman AI systems will be goal-directed include:
  
    - **Some goal-directed behavior is likely to be [[arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:argument_for_ai_x-risk_from_competent_non-aligned_agents:will_advanced_ai_be_agentic:how_large_are_economic_incentives_for_agentic_ai:start|economically valuable to create]]** (i.e. also not replaceable using only non-goal-directed systems). This appears to be true even for [[arguments_for_ai_risk:incentives_to_create_ai_systems_known_to_pose_extinction_risks|apparently x-risky systems]], and will likely [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_dangerous_ai_systems_appear_safe|appear true]] more often than it is.
    - **Goal-directed entities [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:will_advanced_ai_be_agentic:will_mesaoptimization_produce_misalignment|may tend to arise]]** from machine learning training processes not intending to create them.
-   - **‘[[agency:what_do_coherence_arguments_imply_about_the_behavior_of_advanced_ai|Coherence arguments]]‘** may imply that systems with some goal-directedness will **become more strongly goal-directed over time**.
+   - **&amp;#039;[[agency:what_do_coherence_arguments_imply_about_the_behavior_of_advanced_ai|Coherence arguments]]&amp;#039;** may imply that systems with some goal-directedness will **become more strongly goal-directed over time**.
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -5,6 +5,6 @@
  Reasons to expect that some superhuman AI systems will be goal-directed include:
  
    - **Some goal-directed behavior is likely to be [[arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:argument_for_ai_x-risk_from_competent_non-aligned_agents:will_advanced_ai_be_agentic:how_large_are_economic_incentives_for_agentic_ai:start|economically valuable to create]]** (i.e. also not replaceable using only non-goal-directed systems). This appears to be true even for [[arguments_for_ai_risk:incentives_to_create_ai_systems_known_to_pose_extinction_risks|apparently x-risky systems]], and will likely [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_dangerous_ai_systems_appear_safe|appear true]] more often than it is.
    - **Goal-directed entities [[arguments_for_ai_risk:is_ai_an_existential_threat_to_humanity:will_malign_ai_agents_control_the_future:argument_for_ai_x-risk_from_competent_malign_agents:will_advanced_ai_be_agentic:will_mesaoptimization_produce_misalignment|may tend to arise]]** from machine learning training processes not intending to create them.
-   - **‘[[agency:what_do_coherence_arguments_imply_about_the_behavior_of_advanced_ai|Coherence arguments]]‘** may imply that systems with some goal-directedness will **become more strongly goal-directed over time**.
+   - **&amp;#039;[[agency:what_do_coherence_arguments_imply_about_the_behavior_of_advanced_ai|Coherence arguments]]&amp;#039;** may imply that systems with some goal-directedness will **become more strongly goal-directed over time**.
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>List of arguments that AI poses an existential risk - old revision restored (2024/08/13 17:51)</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/start?rev=1723756760&amp;do=diff"/>
        <published>2024-08-15T21:19:20+00:00</published>
        <updated>2024-08-15T21:19:20+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/start?rev=1723756760&amp;do=diff</id>
        <author>
            <name>nathanpmyoung</name>
            <email>nathanpmyoung@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;pre&gt;
@@ -189,10 +189,8 @@
  
  ----
  
  ===== Expert opinion =====
- 
- [{{ :arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:icml.jpg?300|NeurIPS is a machine learning conference attended by many AI researchers. What should our response be to the median view of these people? }}]
  
  Summary:
    - The people best placed to judge the extent of existential risk from AI are AI researchers, forecasting experts, experts on AI risk, relevant social scientists, and some others
    - Median members of these groups frequently put substantial credence (e.g. 5%) on human extinction or similar disempowerment from AI
@@ -201,8 +199,9 @@
  Selected counterarguments:
    * Most of these groups do not have demonstrated skill at forecasting, and to our knowledge none have demonstrated skill at forecasting speculative events more than 5 years into the future
  
  
+ [{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:ce8dfb8d-83ec-453b-8dd2-3609565338b7_2544x1274.png?800|800 randomly selected responses from our [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2023_expert_survey_on_progress_in_ai|2023 Expert Survey on Progress in AI]], showing how good or bad participants expect the long-run impacts of &amp;#039;high level machine intelligence&amp;#039; to be for the future of humanity. Each vertical bar represents one participant&amp;#039;s guess. The black section of each bar is the probability that participant put on &amp;#039;extremely bad (e.g. human extinction)&amp;#039;.}}]
  
  
  ----
  

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -189,10 +189,8 @@
  
  ----
  
  ===== Expert opinion =====
- 
- [{{ :arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:icml.jpg?300|NeurIPS is a machine learning conference attended by many AI researchers. What should our response be to the median view of these people? }}]
  
  Summary:
    - The people best placed to judge the extent of existential risk from AI are AI researchers, forecasting experts, experts on AI risk, relevant social scientists, and some others
    - Median members of these groups frequently put substantial credence (e.g. 5%) on human extinction or similar disempowerment from AI
@@ -201,8 +199,9 @@
  Selected counterarguments:
    * Most of these groups do not have demonstrated skill at forecasting, and to our knowledge none have demonstrated skill at forecasting speculative events more than 5 years into the future
  
  
+ [{{:ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:ce8dfb8d-83ec-453b-8dd2-3609565338b7_2544x1274.png?800|800 randomly selected responses from our [[ai_timelines:predictions_of_human-level_ai_timelines:ai_timeline_surveys:2023_expert_survey_on_progress_in_ai|2023 Expert Survey on Progress in AI]], showing how good or bad participants expect the long-run impacts of &amp;#039;high level machine intelligence&amp;#039; to be for the future of humanity. Each vertical bar represents one participant&amp;#039;s guess. The black section of each bar is the probability that participant put on &amp;#039;extremely bad (e.g. human extinction)&amp;#039;.}}]
  
  
  ----
  

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:emerson_moog.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aemerson_moog.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723755771&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-15T21:02:51+00:00</published>
        <updated>2024-08-15T21:02:51+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aemerson_moog.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723755771&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>nathanpmyoung</name>
            <email>nathanpmyoung@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/emerson_moog.jpg?w=300&amp;h=239&amp;t=1723755771&amp;amp;tok=1ec755&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:emerson_moog.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/emerson_moog.jpg?w=300&amp;h=239&amp;t=1723755771&amp;amp;tok=1ec755&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:emerson_moog.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:icml.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aicml.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723753167&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-15T20:19:27+00:00</published>
        <updated>2024-08-15T20:19:27+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aicml.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723753167&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>nathanpmyoung</name>
            <email>nathanpmyoung@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/icml.jpg?w=300&amp;h=225&amp;t=1723753167&amp;amp;tok=8c1498&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:icml.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/icml.jpg?w=300&amp;h=225&amp;t=1723753167&amp;amp;tok=8c1498&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:icml.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:1893_nina_pinta_santa_maria_replicas.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3A1893_nina_pinta_santa_maria_replicas.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723571373&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-13T17:49:33+00:00</published>
        <updated>2024-08-13T17:49:33+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3A1893_nina_pinta_santa_maria_replicas.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723571373&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/1893_nina_pinta_santa_maria_replicas.jpg?w=300&amp;h=187&amp;t=1723571373&amp;amp;tok=88ee64&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:1893_nina_pinta_santa_maria_replicas.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/1893_nina_pinta_santa_maria_replicas.jpg?w=300&amp;h=187&amp;t=1723571373&amp;amp;tok=88ee64&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:1893_nina_pinta_santa_maria_replicas.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>Argument for AI x-risk from large impacts - [Discussions of this argument elsewhere] </title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_large_impacts?rev=1723567759&amp;do=diff"/>
        <published>2024-08-13T16:49:19+00:00</published>
        <updated>2024-08-13T16:49:19+00:00</updated>
        <id>https://wiki.aiimpacts.org/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/argument_for_ai_x-risk_from_large_impacts?rev=1723567759&amp;do=diff</id>
        <author>
            <name>nathanpmyoung</name>
            <email>nathanpmyoung@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;pre&gt;
@@ -71,9 +71,9 @@
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;80,000 Hours. “How Sure Are We about This AI Stuff?” Accessed September 16, 2020. &amp;lt;a href=&amp;quot;https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/&amp;quot;&amp;gt;https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/&amp;lt;/a&amp;gt;.&amp;lt;/em&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  
  &amp;lt;/ul&amp;gt;
  &amp;lt;/HTML&amp;gt;
- ==== Discussions of this argument elsewhere ====
+ === Discussions of this argument elsewhere ===
  
  Ben Garfinkel ([[https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff#Three_concrete_cases|2019]])
  
  &amp;gt;There are three concepts underpinning this argument:
@@ -86,11 +86,8 @@
  &amp;gt;
  
  &amp;gt;If we&amp;#039;re looking at technologies that are likely to make especially large changes, then AI stands out as especially promising among them.
  &amp;gt;
- 
- 
- 
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;Richard Ngo describes this as follows&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-2661&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-2661&amp;quot; title=&amp;#039;Ngo, Richard. “Thinking Complete: Disentangling Arguments for the Importance of AI Safety.” &amp;amp;lt;em&amp;amp;gt;Thinking Complete&amp;amp;lt;/em&amp;amp;gt; (blog), January 21, 2019. &amp;amp;lt;a href=&amp;quot;http://thinkingcomplete.blogspot.com/2019/01/disentangling-arguments-for-importance.html&amp;quot;&amp;amp;gt;http://thinkingcomplete.blogspot.com/2019/01/disentangling-arguments-for-importance.html&amp;amp;lt;/a&amp;amp;gt;. &amp;amp;lt;br&amp;amp;gt;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;:&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;

&lt;/pre&gt;</content>
        <summary>&lt;pre&gt;
@@ -71,9 +71,9 @@
  &amp;lt;li&amp;gt;&amp;lt;div class=&amp;quot;li&amp;quot;&amp;gt;&amp;lt;em&amp;gt;80,000 Hours. “How Sure Are We about This AI Stuff?” Accessed September 16, 2020. &amp;lt;a href=&amp;quot;https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/&amp;quot;&amp;gt;https://80000hours.org/podcast/episodes/ben-garfinkel-classic-ai-risk-arguments/&amp;lt;/a&amp;gt;.&amp;lt;/em&amp;gt;&amp;lt;/div&amp;gt;&amp;lt;/li&amp;gt;
  
  &amp;lt;/ul&amp;gt;
  &amp;lt;/HTML&amp;gt;
- ==== Discussions of this argument elsewhere ====
+ === Discussions of this argument elsewhere ===
  
  Ben Garfinkel ([[https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff#Three_concrete_cases|2019]])
  
  &amp;gt;There are three concepts underpinning this argument:
@@ -86,11 +86,8 @@
  &amp;gt;
  
  &amp;gt;If we&amp;#039;re looking at technologies that are likely to make especially large changes, then AI stands out as especially promising among them.
  &amp;gt;
- 
- 
- 
  
  &amp;lt;HTML&amp;gt;
  &amp;lt;p&amp;gt;Richard Ngo describes this as follows&amp;lt;span class=&amp;quot;easy-footnote-margin-adjust&amp;quot; id=&amp;quot;easy-footnote-1-2661&amp;quot;&amp;gt;&amp;lt;/span&amp;gt;&amp;lt;span class=&amp;quot;easy-footnote&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#easy-footnote-bottom-1-2661&amp;quot; title=&amp;#039;Ngo, Richard. “Thinking Complete: Disentangling Arguments for the Importance of AI Safety.” &amp;amp;lt;em&amp;amp;gt;Thinking Complete&amp;amp;lt;/em&amp;amp;gt; (blog), January 21, 2019. &amp;amp;lt;a href=&amp;quot;http://thinkingcomplete.blogspot.com/2019/01/disentangling-arguments-for-importance.html&amp;quot;&amp;amp;gt;http://thinkingcomplete.blogspot.com/2019/01/disentangling-arguments-for-importance.html&amp;amp;lt;/a&amp;amp;gt;. &amp;amp;lt;br&amp;amp;gt;&amp;#039;&amp;gt;&amp;lt;sup&amp;gt;1&amp;lt;/sup&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/span&amp;gt;:&amp;lt;/p&amp;gt;
  &amp;lt;/HTML&amp;gt;

&lt;/pre&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:54250873_max.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3A54250873_max.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723080593&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-08T01:29:53+00:00</published>
        <updated>2024-08-08T01:29:53+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3A54250873_max.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1723080593&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/54250873_max.jpg?w=300&amp;h=217&amp;t=1723080593&amp;amp;tok=327a2c&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:54250873_max.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/54250873_max.jpg?w=300&amp;h=217&amp;t=1723080593&amp;amp;tok=327a2c&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:54250873_max.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:learn-rabbits-at-water-hole-11145789_0.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Alearn-rabbits-at-water-hole-11145789_0.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722998483&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-07T02:41:23+00:00</published>
        <updated>2024-08-07T02:41:23+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Alearn-rabbits-at-water-hole-11145789_0.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722998483&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/learn-rabbits-at-water-hole-11145789_0.jpg?w=300&amp;h=250&amp;t=1722998483&amp;amp;tok=3022a1&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:learn-rabbits-at-water-hole-11145789_0.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/learn-rabbits-at-water-hole-11145789_0.jpg?w=300&amp;h=250&amp;t=1722998483&amp;amp;tok=3022a1&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:learn-rabbits-at-water-hole-11145789_0.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:screen_shot_2024-08-05_at_6.20.23_pm.png - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Ascreen_shot_2024-08-05_at_6.20.23_pm.png&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722907326&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-06T01:22:06+00:00</published>
        <updated>2024-08-06T01:22:06+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Ascreen_shot_2024-08-05_at_6.20.23_pm.png&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722907326&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/screen_shot_2024-08-05_at_6.20.23_pm.png?w=300&amp;h=147&amp;t=1722907326&amp;amp;tok=2fe178&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:screen_shot_2024-08-05_at_6.20.23_pm.png&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/screen_shot_2024-08-05_at_6.20.23_pm.png?w=300&amp;h=147&amp;t=1722907326&amp;amp;tok=2fe178&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:screen_shot_2024-08-05_at_6.20.23_pm.png&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:clinical_trial_for_malaria_treatment_49450846413_.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aclinical_trial_for_malaria_treatment_49450846413_.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722889784&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-05T20:29:44+00:00</published>
        <updated>2024-08-05T20:29:44+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aclinical_trial_for_malaria_treatment_49450846413_.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722889784&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/clinical_trial_for_malaria_treatment_49450846413_.jpg?w=300&amp;h=218&amp;t=1722889784&amp;amp;tok=aadc20&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:clinical_trial_for_malaria_treatment_49450846413_.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/clinical_trial_for_malaria_treatment_49450846413_.jpg?w=300&amp;h=218&amp;t=1722889784&amp;amp;tok=aadc20&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:clinical_trial_for_malaria_treatment_49450846413_.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:katjagrace_screenshot_of_starcraft_game_fa146243-4bce-4ef4-82fb-fca5baf19370.png - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Akatjagrace_screenshot_of_starcraft_game_fa146243-4bce-4ef4-82fb-fca5baf19370.png&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722889104&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-05T20:18:24+00:00</published>
        <updated>2024-08-05T20:18:24+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Akatjagrace_screenshot_of_starcraft_game_fa146243-4bce-4ef4-82fb-fca5baf19370.png&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722889104&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/katjagrace_screenshot_of_starcraft_game_fa146243-4bce-4ef4-82fb-fca5baf19370.png?w=300&amp;h=300&amp;t=1722889104&amp;amp;tok=12164c&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:katjagrace_screenshot_of_starcraft_game_fa146243-4bce-4ef4-82fb-fca5baf19370.png&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/katjagrace_screenshot_of_starcraft_game_fa146243-4bce-4ef4-82fb-fca5baf19370.png?w=300&amp;h=300&amp;t=1722889104&amp;amp;tok=12164c&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:katjagrace_screenshot_of_starcraft_game_fa146243-4bce-4ef4-82fb-fca5baf19370.png&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:operation_upshot-knothole_-_badger_001.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aoperation_upshot-knothole_-_badger_001.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722493692&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-01T06:28:12+00:00</published>
        <updated>2024-08-01T06:28:12+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Aoperation_upshot-knothole_-_badger_001.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722493692&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/operation_upshot-knothole_-_badger_001.jpg?w=300&amp;h=255&amp;t=1722493692&amp;amp;tok=9bc8c4&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:operation_upshot-knothole_-_badger_001.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/operation_upshot-knothole_-_badger_001.jpg?w=300&amp;h=255&amp;t=1722493692&amp;amp;tok=9bc8c4&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:operation_upshot-knothole_-_badger_001.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:tetrisjs-gameover.png - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Atetrisjs-gameover.png&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722491935&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-08-01T05:58:55+00:00</published>
        <updated>2024-08-01T05:58:55+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Atetrisjs-gameover.png&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722491935&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/tetrisjs-gameover.png?w=290&amp;h=300&amp;t=1722491935&amp;amp;tok=7bdfb2&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:tetrisjs-gameover.png&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/tetrisjs-gameover.png?w=290&amp;h=300&amp;t=1722491935&amp;amp;tok=7bdfb2&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:tetrisjs-gameover.png&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:detail_from_the_coronation_of_henry_vi.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Adetail_from_the_coronation_of_henry_vi.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722360338&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-07-30T17:25:38+00:00</published>
        <updated>2024-07-30T17:25:38+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Adetail_from_the_coronation_of_henry_vi.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722360338&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/detail_from_the_coronation_of_henry_vi.jpg?w=300&amp;h=278&amp;t=1722360338&amp;amp;tok=befa2a&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:detail_from_the_coronation_of_henry_vi.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/detail_from_the_coronation_of_henry_vi.jpg?w=300&amp;h=278&amp;t=1722360338&amp;amp;tok=befa2a&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:detail_from_the_coronation_of_henry_vi.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
    <entry>
        <title>arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:the_coronation_of_henry_vi.jpg - created</title>
        <link rel="alternate" type="text/html" href="https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Athe_coronation_of_henry_vi.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722359030&amp;tab_details=history&amp;mediado=diff&amp;do=media"/>
        <published>2024-07-30T17:03:50+00:00</published>
        <updated>2024-07-30T17:03:50+00:00</updated>
        <id>https://wiki.aiimpacts.org/?image=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk%3Athe_coronation_of_henry_vi.jpg&amp;ns=arguments_for_ai_risk%3Alist_of_arguments_that_ai_poses_an_xrisk&amp;rev=1722359030&amp;tab_details=history&amp;mediado=diff&amp;do=media</id>
        <author>
            <name>katjagrace</name>
            <email>katjagrace@undisclosed.example.com</email>
        </author>
        <category  term="arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk" />
        <content>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/the_coronation_of_henry_vi.jpg?w=202&amp;h=300&amp;t=1722359030&amp;amp;tok=56238e&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:the_coronation_of_henry_vi.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</content>
        <summary>&lt;table&gt;&lt;tr&gt;&lt;th width=&quot;50%&quot;&gt;&lt;/th&gt;&lt;th width=&quot;50%&quot;&gt;current&lt;/th&gt;&lt;/tr&gt;&lt;tr align=&quot;center&quot;&gt;&lt;td&gt;&lt;img src=&quot;&quot; alt=&quot;&quot; /&gt;&lt;/td&gt;&lt;td&gt;&lt;img src=&quot;https://wiki.aiimpacts.org/_media/arguments_for_ai_risk/list_of_arguments_that_ai_poses_an_xrisk/the_coronation_of_henry_vi.jpg?w=202&amp;h=300&amp;t=1722359030&amp;amp;tok=56238e&quot; alt=&quot;arguments_for_ai_risk:list_of_arguments_that_ai_poses_an_xrisk:the_coronation_of_henry_vi.jpg&quot; /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;</summary>
    </entry>
</feed>
