====== Views of prominent AI developers on risk from AI ======

//This page is in an early draft. It is incomplete and may contain errors.//

People who have worked on creating artificial intelligence hold a variety of views on risk from AI, as well as on its potential benefits and downsides.

//Financial Times [[https://www.ft.com/content/b4baa678-b389-4acf-9438-24ccbcd4f201|interview]], May 2023//

Prof. Bengio signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.
==== Stuart Russell, UC Berkeley, Professor ====

//[[https://www.youtube.com/watch?v=ISkAkiAkK7A|Lecture]], April 5, 2023//

Prof. Russell signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.
==== Gary Marcus, New York University, Professor ====

> I’m not that worried about AI systems independently wanting to eat us for breakfast or turn us into paper clips. It’s not completely impossible, but there’s no real evidence that we’re moving in that direction. There is evidence, though, that we’re giving more and more power to those machines, and that we have no idea how to solve the cybersecurity threats in the near term.

> I don’t completely discount [the AI control problem and recursive improvement], I’m not going to say the probability is zero but the probability of it happening anytime soon is pretty low. There was recently a video circulated of robots opening doorknobs, and that’s about where they are in development.

//[[https://www.google.com/books/edition/_/e4d7DwAAQBAJ|Architects of Intelligence]] pages 330-331, Martin Ford, 2018//

Prof. Marcus signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.
===== AI lab leaders & researchers =====

//[[https://www.youtube.com/watch?v=ebjkD1Om4uw&t=1257s|StrictlyVC Interview]], January 17, 2023//

Mr. Altman signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.

==== Dario Amodei, CEO, Anthropic ====

//[[https://80000hours.org/podcast/episodes/the-world-needs-ai-researchers-heres-how-to-become-one/|80,000 Hours Podcast]], July 2017//

Dr. Amodei signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.
==== Greg Brockman, Co-Founder and President, OpenAI ====

==== Geoffrey Hinton, 2018 Turing Award recipient ====
> If I were advising governments, I would say that there’s a 10 per cent chance these things will wipe out humanity in the next 20 years. I think that would be a reasonable number.

//Financial Times [[https://www.ft.com/content/c64592ac-a62f-4e8e-b99b-08c869c83f4b|interview]], Feb 2024//

> There’s a serious danger that we’ll get things smarter than us fairly soon and that these things might get bad motives and take control.

//NPR [[https://www.npr.org/2023/05/28/1178673070/the-godfather-of-ai-sounds-alarm-about-potential-dangers-of-ai|interview]], May 2023//

> My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals... And if these things get carried away with getting more control, we’re in trouble.

> And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

//[[https://www.technologyreview.com/2023/05/03/1072589/video-geoffrey-hinton-google-ai-risk-ethics/|EmTech Digital 2023]], May 2023//
==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====