arguments_for_ai_risk:views_of_ai_developers_on_risk_from_ai [2023/07/18 16:38] harlanstewart
arguments_for_ai_risk:views_of_ai_developers_on_risk_from_ai [2024/03/09 18:18] (current) katjagrace [Geoffrey Hinton, 2018 Turing Award recipient]
==== Geoffrey Hinton, 2018 Turing Award recipient ====

>If I were advising governments,

//Financial Times [[https://

> There'

//NPR [[https://

>My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals... And if these things get carried away with getting more control, we’re in trouble.

>And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

//

==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====