

Views of prominent AI developers on risk from AI

People who have worked on creating artificial intelligence hold a variety of views on risk from AI, regarding both its potential benefits and its potential downsides.

Background

Evaluating risk from AI is difficult, but people who work professionally in developing AI are likely to have some advantages in making such judgments. While surveys can give us a more representative view of the field as a whole, the views of prominent individuals can be especially helpful. Reasons for this include:

  • They will often have the best understanding of the technology
  • They may have better access to the state of the art and the goings-on in the industry
  • They are often the people making decisions driving the behavior of the industry

Quotes cannot reliably give a clear and complete picture of someone's views, since they may lack context, be outdated, or be otherwise unrepresentative. Although we have tried to avoid quotes which we know to be uncharacteristic of someone's views, we expect there to be strong selection bias, both in who is quoted and in which of their words are quoted.

Academic researchers

Stuart Russell, Professor, UC Berkeley

In the last 10 years or so I've been asking myself what happens if I or if we as a field succeed in what we've been trying to do which is to create AI systems that are at least as general in their intelligence as human beings. And I came to the conclusion that if we did succeed it might not be the best thing in the history of the human race. In fact, it might be the worst.

Podcast, March 7, 2023

If we pursue [our current approach], then we will eventually lose control over the machines. But, we can take a different route that actually leads to AI systems that are beneficial to humans.

Lecture, April 5, 2023

AI lab leaders & researchers

Sam Altman, CEO of OpenAI

Sam Altman seems to expect AI to have a large impact, with very large potential upsides, and very large risks.

I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening.

New York Times, May 16, 2023

The good case is just so unbelievably good that you sound like a really crazy person to start talking about it. And the bad case—and I think this is important to say—is, like, lights out for all of us. I'm more worried about an accidental misuse case in the short term.

StrictlyVC Interview, January 17, 2023

Dario Amodei, CEO, Anthropic

Dario has said he thinks there is a risk to developing AI, as well as a risk to not developing it.

every year that passes is a danger that we face and although AI has a number of dangers actually I think if we never built AI, if we don’t build AI for 100 years or 200 years, I’m very worried about whether civilization will actually survive. Of course on the other hand I mean I work on AI safety, and so I’m very concerned that transformative AI is very powerful and that bad things could happen either because of safety or alignment problems or because there’s a concentration of power in the hands of the wrong people, the wrong governments, who control AI. So I think it’s terrifying in all directions but not building AI isn’t an option because I don’t think civilization is safe.

Effective Altruism Global San Francisco, 2017 video | transcript

There’s a long tail of things of varying degrees of badness that could happen. I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.

80,000 Hours Podcast, July 2017

Greg Brockman, Co-Founder and President, OpenAI

The core danger with AGI is that it has the potential to cause rapid change. This means we could end up in an undesirable environment before we have a chance to realize where we’re even heading. The exact way the post-AGI world will look is hard to predict — that world will likely be more different from today’s world than today’s is from the 1500s. […] We do not yet know how hard it will be to make sure AGIs act according to the values of their operators. Some people believe it will be easy; some people believe it’ll be unimaginably difficult; but no one knows for sure.

Testimony of Mr. Greg Brockman: Video | Transcript, June 2018

Demis Hassabis, CEO, DeepMind

From Billy Perrigo at Time:

He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before.

In Demis' own words:

I would advocate not moving fast and breaking things
When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful
I think we need to make sure that the benefits accrue to as many people as possible—to all of humanity, ideally.
If you’re in a [world of] radical abundance, there should be less room for that inequality and less ways that could come about. So that’s one of the positive consequences of the AGI vision, if it gets realized.

Time, January 12, 2023

Yann LeCun, Chief AI Scientist, Meta

There will be mistakes, no doubt, as with any new technology (early jetliners lost wings, early cars didn't have seat belts, roads didn't have speed limits…).
But I disagree that there is a high risk of accidentally building existential threats to humanity.

Facebook discussion with Stuart Russell and Yoshua Bengio Source | Edited version

I think that the magnitude of the AI alignment problem has been ridiculously overblown & our ability to solve it widely underestimated.

LinkedIn, March 2023

My first reaction to [the letter] is that calling for a delay in research and development smacks me of a new wave of obscurantism. Why slow down the progress of knowledge and science? Then there is the question of products … I’m all for regulating products that get in the hands of people. I don’t see the point of regulating research and development. I don’t think that serves any purpose other than reducing the knowledge that we could use to actually make technology better, safer.

VentureBeat, April 7, 2023

Primary authors: Rick Korzekwa and Harlan Stewart. Thanks to Olivia Jimenez for providing many of the quotes and sources.

arguments_for_ai_risk/views_of_ai_developers_on_risk_from_ai.1684790947.txt.gz · Last modified: 2023/05/22 21:29 by rickkorzekwa