====== Views of prominent AI developers on risk from AI ======

//This page is in an early draft. It is incomplete and may contain errors.//
  
People who have worked on creating artificial intelligence hold a variety of views on risk from AI, covering both its potential benefits and its potential downsides.
> I'm not concerned about [the warnings from people like Elon Musk and Stephen Hawking about an existential threat from super intelligent AI and getting into a recursive improvement loop], I think it's fine that some people study the question. My understanding of the current science as it is now, and as I can foresee it, is that those kinds of scenarios are not realistic. Those kinds of scenarios are not compatible with how we build AI right now.
  
//[[https://www.google.com/books/edition/_/e4d7DwAAQBAJ|Architects of Intelligence]] page 39, Martin Ford, 2018//
  
> [Prof. Bengio] noted that disagreement among AI experts was an important signal to the public that science did not have the answers as of yet. “If we disagree it means we don’t know . . . if it could be dangerous. And if we don’t know, it means we must act to protect ourselves,” said Bengio.

> “If you want humanity and society to survive these challenges, we can’t have the competition between people, companies, countries — and a very weak international co-ordination,” he added.
  
//Financial Times [[https://www.ft.com/content/b4baa678-b389-4acf-9438-24ccbcd4f201|interview]], May 2023//
  
Prof. Bengio signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.
  
==== Stuart Russell, UC Berkeley, Professor ====
> The negative consequences for humans are without limit. The mistake is in the way we have transferred the notion of intelligence, a concept that makes sense for humans, over to machines.
  
//[[https://www.google.com/books/edition/_/e4d7DwAAQBAJ|Architects of Intelligence]] page 69, Martin Ford, 2018//
  
> In the last 10 years or so I've been asking myself what happens if I or if we as a field succeed in what we've been trying to do which is to create AI systems that are at least as general in their intelligence as human beings. And I came to the conclusion that if we did succeed it might not be the best thing in the history of the human race. In fact, it might be the worst.
  
//[[https://www.youtube.com/watch?v=ISkAkiAkK7A|Lecture]], April 5, 2023//

Prof. Russell signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.

==== Gary Marcus, New York University, Professor ====

> I’m not that worried about AI systems independently wanting to eat us for breakfast or turn us into paper clips. It’s not completely impossible, but there’s no real evidence that we’re moving in that direction. There is evidence, though, that we’re giving more and more power to those machines, and that we have no idea how to solve the cybersecurity threats in the near term.

> I don’t completely discount [the AI control problem and recursive improvement], I’m not going to say the probability is zero but the probability of it happening anytime soon is pretty low. There was recently a video circulated of robots opening doorknobs, and that’s about where they are in development.

//[[https://www.google.com/books/edition/_/e4d7DwAAQBAJ|Architects of Intelligence]] pages 330-331, Martin Ford, 2018//

Prof. Marcus signed [[https://futureoflife.org/open-letter/pause-giant-ai-experiments/|the open letter]] in March 2023 calling for a pause in training very large AI systems.
  
===== AI lab leaders & researchers =====
  
//[[https://www.youtube.com/watch?v=ebjkD1Om4uw&t=1257s|StrictlyVC Interview]], January 17, 2023//

Mr. Altman signed [[https://www.safe.ai/statement-on-ai-risk|the statement on AI risk]] in May 2023, which says that mitigating the risk of extinction from AI should be a global priority.
  
==== Dario Amodei, CEO, Anthropic ====
  
//[[https://80000hours.org/podcast/episodes/the-world-needs-ai-researchers-heres-how-to-become-one/|80,000 Hours Podcast]], July 2017//

Dr. Amodei signed [[https://www.safe.ai/statement-on-ai-risk|the statement on AI risk]] in May 2023, which says that mitigating the risk of extinction from AI should be a global priority.
  
==== Greg Brockman, Co-Founder and President, OpenAI ====
> A lot of what Nick Bostrom worries about are the technical questions we have to get right, such as the control problem and the value alignment problem. My view is that on those issues we do need a lot more research because we’ve only just got to the point now where there are systems that can even do anything interesting at all.
  
//[[https://www.google.com/books/edition/_/e4d7DwAAQBAJ|Architects of Intelligence]] page 184, Martin Ford, 2018//
  
From Billy Perrigo at Time:
==== Geoffrey Hinton, 2018 Turing Award recipient ====
  
> If I were advising governments, I would say that there’s a 10 per cent chance these things will wipe out humanity in the next 20 years. I think that would be a reasonable number.

//Financial Times [[https://www.ft.com/content/c64592ac-a62f-4e8e-b99b-08c869c83f4b|interview]], February 2024//

> There's a serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control.

//NPR [[https://www.npr.org/2023/05/28/1178673070/the-godfather-of-ai-sounds-alarm-about-potential-dangers-of-ai|interview]], May 2023//

> My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals... And if these things get carried away with getting more control, we’re in trouble.

> And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

//[[https://www.technologyreview.com/2023/05/03/1072589/video-geoffrey-hinton-google-ai-risk-ethics/|EmTech Digital 2023]], May 2023//
  
==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====
> Let me start with one thing we should not worry about, the Terminator scenario. This idea that somehow we’ll come up with the secret to artificial general intelligence, and that we’ll create a human-level intelligence that will escape our control and all of a sudden robots will want to take over the world. The desire to take over the world is not correlated with intelligence, it’s correlated with testosterone.
  
//[[https://www.google.com/books/edition/_/e4d7DwAAQBAJ|Architects of Intelligence]] page 141, Martin Ford, 2018//
  
> There will be mistakes, no doubt, as with any new technology (early jetliners lost wings, early cars didn't have seat belts, roads didn't have speed limits...).