Godfather of AI steps down and 'regrets work'

The "Godfather" of AI resigns at Google, citing fears around the danger of AI
05 May 2023

Interview with Josh Cowls, Oxford University


The man widely considered to be “the godfather” of artificial intelligence has decided to leave his job at Google after warning about the growing dangers of AI. Geoffrey Hinton said that he was leaving the tech giant and that he regretted his work around artificial intelligence. I’ve been speaking to Josh Cowls from the Oxford Internet Institute at the University of Oxford about what some of those dangers are, beginning with why they seem to be so poorly defined…

Josh - I think a lot of the problem with communicating the potential risks and harms of AI comes down to the multiple different timescales we can think on. Some people like to talk about the potential of AI to take over humanity, or make us work for it rather than vice versa, versus other scientists - and I would count myself among this group - who think a bit more about the current and near-term potential risks of AI. Now, the reason that gets a bit complicated is that the scientific progress of AI is moving really rapidly, and people like Geoff Hinton talk about being surprised by how quickly the technology has moved on. So for some people, that brings the longer term harms more into focus. But for me, some of the more concrete potential harms of AI that are staring us in the face are also important to look at.

Chris - And what are they?

Josh - So some of them are around bias. AI is trained on large sets of text, often from the internet. That means AI is, in some ways, being trained to reflect the society in which it's participating. And of course, society isn't perfect, and the kinds of biases that come across in text can be relayed into AI. Another concern is around what we call disinformation, which is the risk that AI can spread falsehoods, or things that seem true but aren't actually true. For example, the New York Times recently tested OpenAI's ChatGPT system to see how it responded to a prompt about a conference on the founding of AI in 1958. It found that the response contained numerous falsehoods: it claimed that the New York Times had run a front page story about this AI conference, which wasn't true, and many other things weren't true either. The trouble with AI systems is that we can't necessarily tell the difference between what is true and what is false when it comes out of the text boxes through which we interact with them.

Chris - Yes, indeed. I looked up what ChatGPT thought about The Naked Scientists, and it told me that Patrick Moore, the famous astronomer, used to be in The Naked Scientists and in fact helped found it, which I wish that were true, but it's not. Beyond that though, how do you see this being placed in society? How do you see it being used?

Josh - I think the really significant contribution is the conversational style they adopt. One of the dangers they introduce is that we might be lulled into thinking of these systems as in some way intelligent, or even sentient, beyond their actual capacity. And that might shape, in turn, how we make decisions, because we've long had assisted decision making in things like criminal justice and health; these assistants are increasingly being used, but how to do that within a safe and ethical framework is the really important thing. And I think the danger with these kinds of off the shelf products like ChatGPT is that we start to take what they say as true, and also that we start to incorporate those statements and responses into our day-to-day lives in ways that are quite hard to trace back to the AI system, let alone to how the AI system came to that decision. So what's really interesting about this most recent generation of AI chatbots is their conversational ability: their ability to make us feel like they are listening to us, responding to us, and understanding what we mean, and that they kind of know what they're replying to us, which isn't really the case.

Chris - One of the issues, though, with all of this is that it's not explainable. If you ask people who work in AI how it works, even Google <laugh> say that they don't understand how their system produces some of the outcomes and outputs that it does. It's not so-called explainable. It can't tell you how it reached the conclusion it did. And that makes people inherently uncomfortable, because throughout society and in how we do things, we document things, we take minutes at meetings, and then we explain why we've made the decision that we have. And if we have a black box where we put inputs in and outcomes come out, and we don't know what connects them in the middle, that seems to me to be extremely disconcerting.

Josh - That's right. Explainability is another major challenge for AI, and people are working on the ground to try to make AI systems technically more explainable. But this is also where I think regulation is really going to come into it. The European Union is coming up with its own Artificial Intelligence Act, which creates new safeguards and standards for companies deploying AI systems. And it may well be that those standards are adopted elsewhere as well, which could push the onus back onto the companies developing and deploying these systems. The worry is that when these systems become so embedded in work, in life, and everything else, the genie may be too far out of the bottle, and some of these outputs will have already worked their way into the messiness of human life in a way that's tough to extricate. But I'm sure that governments and others will be looking at ways to respond to that.
