====== Views of prominent AI developers on risk from AI ======
+ | |||
+ | //This page is in an early draft. It is incomplete and may contain errors.// | ||
People who have worked on creating artificial intelligence have a variety of views on risk from AI, both for the potential benefits and potential downsides. | People who have worked on creating artificial intelligence have a variety of views on risk from AI, both for the potential benefits and potential downsides. | ||
//
> [Prof. Bengio] noted that disagreement among AI experts was an important signal to the public that science did not have the answers as of yet. “If we disagree it means we don’t know . . . if it could be dangerous. And if we don’t know, it means we must act to protect ourselves,” said Bengio.
>
> “If you want humanity and society to survive these challenges, we can’t have the competition between people, companies, countries — and a very weak international co-ordination,

//Financial Times [[https://

Prof. Bengio signed
==== Stuart Russell, UC Berkeley, Professor ====
//

Prof. Russell signed [[https://

==== Gary Marcus, New York University, Professor ====

> I’m not that worried about AI systems independently wanting to eat us for breakfast or turn us into paper clips. It’s not completely impossible, but there’s no real evidence that we’re moving in that direction. There is evidence, though, that we’re giving more and more power to those machines, and that we have no idea how to solve the cybersecurity threats in the near term.

> I don’t completely discount [the AI control problem and recursive improvement],

//[[https://www.google.com/books/edition/_/

Prof. Marcus signed [[https://
===== AI lab leaders & researchers =====
//

Mr. Altman signed [[https://
==== Dario Amodei, CEO, Anthropic ====
//

Dr. Amodei signed [[https://
==== Greg Brockman, Co-Founder and President, OpenAI ====
==== Geoffrey Hinton, 2018 Turing Award recipient ====
> If I were advising governments,

//Financial Times [[https://

> There'

//NPR [[https://

> My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize

> And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

//
==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====