//arguments_for_ai_risk:views_of_ai_developers_on_risk_from_ai — last modified 2024/03/09 18:18 by katjagrace//
====== Views of prominent AI developers on risk from AI ======

//This page is in an early draft. It is incomplete and may contain errors.//

People who have worked on creating artificial intelligence hold a variety of views on risk from AI, covering both its potential benefits and its potential downsides.
==== Geoffrey Hinton, 2018 Turing Award recipient ====
> If I were advising governments,

//Financial Times [[https://

> There'

//NPR [[https://

> My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize

> And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

//
==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====