>My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals... And if these things get carried away with getting more control, we’re in trouble.
  
>And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.
  
//[[https://www.technologyreview.com/2023/05/03/1072589/video-geoffrey-hinton-google-ai-risk-ethics/|EmTech Digital 2023]], May 2023//