Whom do we trust more, computers or people?

Data provided by a person is regarded as less uncertain than cues that come from a computer...
11 April 2023

Interview with 

Marco Wittmann, UCL

ARTIFICIAL INTELLIGENCE

In the last month, Italy has banned the ChatGPT chatbot on data security grounds, and 2000 technologists and researchers, entrepreneur Elon Musk among them, have signed an open letter calling for a temporary halt to the development of advanced AI platforms while the world - including regulators - catches up with the field. But alongside regulation, another key question is, how do we, as humans, relate and react to information presented to us by non-human sources? Do we trust it to a greater or lesser degree than the same information presented to us by another human being? Speaking with Chris Smith, UCL's Marco Wittmann has been probing some aspects of this by brain scanning volunteers being guided - by either human or inanimate cues - to find a hidden target. Teachers will be reassured to hear that the uncertainty we place on non-human-sourced information appears to be greater than when a person tells us something…

Marco - We were looking at an advice-taking scenario where another person might give you a cue about how to solve a problem. In our case we had a very neutral circle with a dot on it, but you couldn't tell where the dot was unless you took the advice of another person who told you about the location of that dot. We basically ran the same experiment twice: in one case participants were told that the cue about where the target is comes from a person, and in the other case it was a neutral image that gave them that information.

Chris - So effectively it's a bit like I could go and look at a map on a wall or I could ask a person in the street, "how do I get to Hamley's in London?" Either way, I'm gonna get the same information, but it's whether or not I treat it differently when it comes from the person versus the map on the wall?

Marco - Yeah, that is right. So we had participants come in to do the experiment in an MRI scanner. They saw the same person or the same neutral object repeatedly. That person or object always gave them a clue about where on the map something was hidden. And then we checked how people behaved in the experiment and how much they took that advice on board over time.

Chris - How does reliability factor into this? Because if I interact with a human and I know that they're a pathological liar, I'm more likely actually to trust the map! On the other hand, if I know that person's rock solid reliable, and my map reading skills aren't up to scratch, I might make the opposite decision. So how do you control for that?

Marco - That's a very good point, and those are indeed follow-up questions. In our case we were cuing essentially neutral intuitions about a human advisor by not giving participants any specific information about the identity of that person. So you were just told the information comes from a person, but you were not told anything about what that person is like.

Chris - And how do the two scenarios differ? Do people treat the information completely impartially - person versus inanimate object - or do they show a bias in favour of one or the other?

Marco - They do show a bias. To stick with the map example, imagine that you repeatedly get advice from the person or from the non-social cue about the location on the map. That advice will always be off to a certain degree; you know that the advisor is never perfect, at least not in our experiment. So what people should do is draw a circle around the suggested location and say, this is how much I trust the advice. And we found that if the advice comes from a person, that degree of trust is more stable, so it changes less over the course of the experiment.

Chris - Whereas if it comes from an inanimate object, the degree of uncertainty that the participant applies is greater?

Marco - Yeah, that's exactly right. So that means that sometimes with the inanimate object you will trust it more and draw a very small circle, and at other times you will trust it less and draw a very big circle. It is as if, as you say, you're just a bit more uncertain about how good the advice is when it comes from the non-social cue compared to the person.

Chris - And what's going on in the brain that makes the participants behave differently like this?

Marco - We were looking specifically at parts of the brain that are part of the mentalising network. These brain regions really have to do with figuring out someone else's beliefs and intentions. So we were looking at how stable the neural representation of the advisor was over time: whether brain activity in those regions looked very similar when participants saw the same advisor, and whether it looked very different when they saw different advisors. Overall we found that for human advisors the patterns in this brain network were more stable than for the inanimate objects.

Chris - What are the implications of this? The world's been awash with headlines about AI and AI chatbots and inherent trust or distrust of those sorts of things among users. Is that why you were interested in exploring this sort of direction - that increasingly, as we become more closely aligned with what machines are doing and make them more part of our day-to-day life and communication, we need to know how people treat and regard information coming from a machine?

Marco - That's definitely one of the implications that we're now thinking about. But actually, when setting up the study, we were influenced by how humans tick and the idea that we might be better at learning from another person than from direct experience. That might be something that explains how humans work, because we rarely learn things from scratch - take maths in school: it would take all of us many years to figure that out on our own. Instead, we often learn from other people about the world. So there's really the question of whether there are brain mechanisms that enable us to learn particularly well from other humans as compared to direct experience.

Chris - Would it be then that you take what you've identified as the ideal outcome in the brain when you learn or take information from a human and try to make computer interfaces that get the same reaction in a subject, so that people inherently trust and react best to that source? Or is it that, more fundamentally, this says humans make the best teachers and all these efforts to do digital education and self-help courses are probably gonna be less effective in the long run than if there's a real person interacting with the learner?

Marco - I think it does say that if you believe that something's coming from a person, you approach the information with a different attitude, and that's not quite there in the same way for a chatbot. That being said, it would be interesting to repeat the experiment and look at exactly those questions: how you can modify the non-social information source and give it different identities, and whether that could make how you learn from it more similar or more dissimilar to how you learn from other humans.
