====== Views of prominent AI developers on risk from AI ======

//This page is in an early draft. It is incomplete and may contain errors.//

People who have worked on creating artificial intelligence hold a variety of views on risk from AI, covering both its potential benefits and its potential downsides.
> I'm not concerned about [the warnings from people like Elon Musk and Stephen Hawking about an existential threat from super intelligent AI and getting into a recursive improvement loop], I think it's fine that some people study the question. My understanding of the current science as it is now, and as I can foresee it, is that those kinds of scenarios are not realistic. Those kinds of scenarios are not compatible with how we build AI right now.

//[[https://

> [Prof. Bengio] noted that disagreement among AI experts was an important signal to the public that science did not have the answers as of yet. “If we disagree it means we don’t know . . . if it could be dangerous. And if we don’t know, it means we must act to protect ourselves,” said Bengio.
>
> “If you want humanity and society to survive these challenges, we can’t have the competition between people, companies, countries — and a very weak international co-ordination,

//Financial Times [[https://

Prof. Bengio signed
==== Stuart Russell, UC Berkeley, Professor ====
> The negative consequences for humans are without limit. The mistake is in the way we have transferred the notion of intelligence,

//[[https://
> In the last 10 years or so I've been asking myself what happens if I or if we as a field succeed in what we've been trying to do which is to create AI systems that are at least as general in their intelligence as human beings. And I came to the conclusion that if we did succeed it might not be the best thing in the history of the human race. In fact, it might be the worst.
//

Prof. Russell signed [[https://

==== Gary Marcus, New York University, Professor ====

> I’m not that worried about AI systems independently wanting to eat us for breakfast or turn us into paper clips. It’s not completely impossible, but there’s no real evidence that we’re moving in that direction. There is evidence, though, that we’re giving more and more power to those machines, and that we have no idea how to solve the cybersecurity threats in the near term.

> I don’t completely discount [the AI control problem and recursive improvement],

//

Prof. Marcus signed [[https://
===== AI Labs leaders & researchers =====
//

Mr. Altman signed [[https://
==== Dario Amodei, CEO, Anthropic ====
//

Dr. Amodei signed [[https://
==== Greg Brockman, Co-Founder and President, OpenAI ====
> A lot of what Nick Bostrom worries about are the technical questions we have to get right, such as the control problem and the value alignment problem. My view is that on those issues we do need a lot more research because we’ve only just got to the point now where there are systems that can even do anything interesting at all.

//[[https://
From Billy Perrigo at Time:
==== Geoffrey Hinton, 2018 Turing Award recipient ====
> If I were advising governments,

//Financial Times [[https://

> There'

//NPR [[https://

> My big worry is, sooner or later someone will wire into them the ability to create their own subgoals... I think it’ll very quickly realize

> And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on... So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

//[[https://
==== Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient ====
> Let me start with one thing we should not worry about, the Terminator scenario. This idea that somehow we’ll come up with the secret to artificial general intelligence,

//[[https://
> There will be mistakes, no doubt, as with any new technology (early jetliners lost wings, early cars didn't have seat belts, roads didn't have speed limits...).