//Current revision 2024/03/09 18:18 by katjagrace; previous revision 2023/05/23 18:27 by rickkorzekwa.//
====== Views of prominent AI developers on risk from AI ======
//This page is in an early draft. It is incomplete and may contain errors.//
People who have worked on creating artificial intelligence hold a variety of views on risk from AI, covering both its potential benefits and its potential downsides.
Prof. Bengio signed [[https://
==== Stuart Russell, UC Berkeley, Professor ====
Prof. Russell signed [[https://
==== Gary Marcus, New York University, Professor ====

> I’m not that worried about AI systems independently wanting to eat us for breakfast or turn us into paper clips. It’s not completely impossible, but there’s no real evidence that we’re moving in that direction. There is evidence, though, that we’re giving more and more power to those machines, and that we have no idea how to solve the cybersecurity threats in the near term.

> I don’t completely discount [the AI control problem and recursive improvement],

//

Prof. Marcus signed [[https://
===== AI Labs leaders & researchers =====
==== Geoffrey Hinton, 2018 Turing Award recipient ====
> If I were advising governments,

//Financial Times [[https://

> There'

//NPR [[https://

> My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize

> And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

//
==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====