Views of prominent AI developers on risk from AI

This page is a draft and is out of date. It is incomplete and may contain errors.

People who have worked on creating artificial intelligence hold a variety of views on risk from AI, as well as on the technology's potential benefits and downsides.

Background

Evaluating risk from AI is difficult, but people who work professionally in developing AI are likely to have some advantages in making such judgments. While surveys can give us a more representative view of the field as a whole, the views of prominent individuals can be especially helpful. Reasons for this include:

  • They often have the best understanding of the technology
  • They may have better access to the state of the art and to the goings-on in the industry
  • They are often the people making the decisions that drive the behavior of the industry

Quotes cannot reliably give a clear and complete picture of someone's views, since they may lack context, be outdated, or be misrepresentative in some other way. Although we have tried to avoid quotes that we know to be uncharacteristic of someone's views, we expect there to be strong selection bias, both in who is quoted and in which of their words are quoted.

Academic researchers

Yoshua Bengio, Université de Montréal, 2018 Turing Award recipient

I'm not concerned about [the warnings from people like Elon Musk and Stephen Hawking about an existential threat from super intelligent AI and getting into a recursive improvement loop], I think it's fine that some people study the question. My understanding of the current science as it is now, and as I can foresee it, is that those kinds of scenarios are not realistic. Those kinds of scenarios are not compatible with how we build AI right now.

Architects of Intelligence page 39, Martin Ford, 2018

[Prof. Bengio] noted that disagreement among AI experts was an important signal to the public that science did not have the answers as of yet. “If we disagree it means we don’t know . . . if it could be dangerous. And if we don’t know, it means we must act to protect ourselves,” said Bengio.

“If you want humanity and society to survive these challenges, we can’t have the competition between people, companies, countries — and a very weak international co-ordination,” he added.

Financial Times interview, May 2023

Prof. Bengio signed the open letter in March 2023 calling for a pause in training very large AI systems.

Stuart Russell, UC Berkeley, Professor

The problem with that is that if we succeed in creating artificial intelligence and machines with those abilities, then unless their objectives happen to be perfectly aligned with those of humans, then we’ve created something that’s extremely intelligent, but with objectives that are different from ours. And then, if that AI is more intelligent than us, then it’s going to attain its objectives—and we, probably, are not!

The negative consequences for humans are without limit. The mistake is in the way we have transferred the notion of intelligence, a concept that makes sense for humans, over to machines.

Architects of Intelligence page 69, Martin Ford, 2018

In the last 10 years or so I've been asking myself what happens if I or if we as a field succeed in what we've been trying to do which is to create AI systems that are at least as general in their intelligence as human beings. And I came to the conclusion that if we did succeed it might not be the best thing in the history of the human race. In fact, it might be the worst.

Podcast, March 7, 2023

If we pursue [our current approach], then we will eventually lose control over the machines. But, we can take a different route that actually leads to AI systems that are beneficial to humans.

Lecture, April 5, 2023

Prof. Russell signed the open letter in March 2023 calling for a pause in training very large AI systems.

Gary Marcus, New York University, Professor

I’m not that worried about AI systems independently wanting to eat us for breakfast or turn us into paper clips. It’s not completely impossible, but there’s no real evidence that we’re moving in that direction. There is evidence, though, that we’re giving more and more power to those machines, and that we have no idea how to solve the cybersecurity threats in the near term.
I don’t completely discount [the AI control problem and recursive improvement], I’m not going to say the probability is zero but the probability of it happening anytime soon is pretty low. There was recently a video circulated of robots opening doorknobs, and that’s about where they are in development.

Architects of Intelligence pages 330-331, Martin Ford, 2018

Prof. Marcus signed the open letter in March 2023 calling for a pause in training very large AI systems.

AI lab leaders & researchers

Sam Altman, CEO of OpenAI

Sam Altman seems to expect AI to have a large impact, with very large potential upsides, and very large risks.

I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.

New York Times, May 16, 2023

The good case is just so unbelievably good that you sound like a really crazy person to start talking about it. And the bad case—and I think this is important to say—is, like, lights out for all of us. I'm more worried about an accidental misuse case in the short term.

StrictlyVC Interview, January 17, 2023

Mr. Altman did not sign the March 2023 open letter calling for a pause in training very large AI systems, but he did sign the Center for AI Safety's May 2023 statement that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Dario Amodei, CEO, Anthropic

Dr. Amodei has said he thinks there is risk both in developing AI and in not developing it.

every year that passes is a danger that we face and although AI has a number of dangers actually I think if we never built AI, if we don’t build AI for 100 years or 200 years, I’m very worried about whether civilization will actually survive. Of course on the other hand I mean I work on AI safety, and so I’m very concerned that transformative AI is very powerful and that bad things could happen either because of safety or alignment problems or because there’s a concentration of power in the hands of the wrong people, the wrong governments, who control AI. So I think it’s terrifying in all directions but not building AI isn’t an option because I don’t think civilization is safe.

Effective Altruism Global San Francisco, 2017 video | transcript

There’s a long tail of things of varying degrees of badness that could happen. I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.

80,000 Hours Podcast, July 2017

Dr. Amodei did not sign the March 2023 pause letter, but he, like Mr. Altman, signed the Center for AI Safety's May 2023 statement on the risk of extinction from AI.

Greg Brockman, Co-Founder and President, OpenAI

The core danger with AGI is that it has the potential to cause rapid change. This means we could end up in an undesirable environment before we have a chance to realize where we’re even heading. The exact way the post-AGI world will look is hard to predict — that world will likely be more different from today’s world than today’s is from the 1500s. […] We do not yet know how hard it will be to make sure AGIs act according to the values of their operators. Some people believe it will be easy; some people believe it’ll be unimaginably difficult; but no one knows for sure

Testimony of Mr. Greg Brockman: Video | Transcript, June 2018

Demis Hassabis, CEO, DeepMind

My view on [existential risk from AI] is that I’m in the middle. The reason I work on AI is because I think it’s going to be the most beneficial thing to humanity ever. I think it’s going to unlock our potential within science and medicine in all sorts of ways. As with any powerful technology, and AI could be especially powerful because it’s so general, the technology itself is neutral. It depends on how we as humans decide to design and deploy it, what we decide to use it for, and how we decide to distribute the gains.
A lot of what Nick Bostrom worries about are the technical questions we have to get right, such as the control problem and the value alignment problem. My view is that on those issues we do need a lot more research because we’ve only just got to the point now where there are systems that can even do anything interesting at all.

Architects of Intelligence page 184, Martin Ford, 2018

From Billy Perrigo at Time:

He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.

In Hassabis's own words:

I would advocate not moving fast and breaking things.
When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful.
I think we need to make sure that the benefits accrue to as many people as possible—to all of humanity, ideally.
If you’re in a [world of] radical abundance, there should be less room for that inequality and less ways that could come about. So that’s one of the positive consequences of the AGI vision, if it gets realized.

Time, January 12, 2023

Geoffrey Hinton, 2018 Turing Award recipient

If I were advising governments, I would say that there’s a 10 per cent chance these things will wipe out humanity in the next 20 years. I think that would be a reasonable number

Financial Times interview, February 2024

There's a serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control

NPR interview, May 2023

My big worry is, sooner or later someone will wire into them the ability to create their own subgoals… I think it’ll very quickly realize that getting more control is a very good subgoal because it helps you achieve other goals… And if these things get carried away with getting more control, we’re in trouble.
And if [AI models] are much smarter than us, they’ll be very good at manipulating us. You won’t realize what’s going on… So even if they can’t directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.

EmTech Digital 2023, May 2023

Yann LeCun, Meta, Chief AI Scientist, 2018 Turing Award recipient

Let me start with one thing we should not worry about, the Terminator scenario. This idea that somehow we’ll come up with the secret to artificial general intelligence, and that we’ll create a human-level intelligence that will escape our control and all of a sudden robots will want to take over the world. The desire to take over the world is not correlated with intelligence, it’s correlated with testosterone.

Architects of Intelligence page 141, Martin Ford, 2018

There will be mistakes, no doubt, as with any new technology (early jetliners lost wings, early cars didn't have seat belts, roads didn't have speed limits…).
But I disagree that there is a high risk of accidentally building existential threats to humanity.

Facebook discussion with Stuart Russell and Yoshua Bengio, September 2019 Source | Edited version

I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated.

LinkedIn, March 2023

My first reaction to [the letter] is that calling for a delay in research and development smacks me of a new wave of obscurantism. Why slow down the progress of knowledge and science? Then there is the question of products … I’m all for regulating products that get in the hands of people. I don’t see the point of regulating research and development. I don’t think that serves any purpose other than reducing the knowledge that we could use to actually make technology better, safer.

VentureBeat, April 7, 2023

Primary authors: Rick Korzekwa and Harlan Stewart. Thanks to Olivia Jimenez for providing many of the quotes and sources.
