====== Views of prominent AI developers on risk from AI ======

//This page is in an early draft. It is incomplete and may contain errors.//
  
People who have worked on creating artificial intelligence have a variety of views on risk from AI, regarding both its potential benefits and its potential downsides.
==== Geoffrey Hinton, 2018 Turing Award recipient ====
  
> If I were advising governments, I would say that there’s a 10 per cent chance these things will wipe out humanity in the next 20 years. I think that would be a reasonable number.

//Financial Times [[https://www.ft.com/content/c64592ac-a62f-4e8e-b99b-08c869c83f4b|interview]], Feb 2024//

> There's a serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control.

//NPR [[https://www.npr.org/2023/05/28/1178673070/the-godfather-of-ai-sounds-alarm-about-potential-dangers-of-ai|interview]], May 2023//

> My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals... And if these things get carried away with getting more control, we’re in trouble.
  
> And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.
  
//[[https://www.technologyreview.com/2023/05/03/1072589/video-geoffrey-hinton-google-ai-risk-ethics/|EmTech Digital 2023]], May 2023//
  
==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====