Tiny number of supersharers cause 80% of web misinformation

And whether social media sites need a credibility meter...
31 May 2024

Interview with Sander van der Linden, University of Cambridge

In recent years we’ve all become much more familiar with the threat posed by online misinformation. It can undermine confidence in health initiatives like vaccines and skew public opinion around politics. Now, two new papers published in the journal Science shed some light on how potent different sorts of misinformation are at influencing members of the public, and on who is behind their creation. One notable observation to emerge from the research is that information that isn’t wrong, but that audiences could interpret the “wrong way”, is extremely powerful. To quote the summary written by the journal, “factual yet misleading vaccine content was 46 times more effective at driving vaccine hesitancy than flagged misinformation.” Sander van der Linden, from the University of Cambridge, works on how we’re affected by misinformation. He wrote an opinion piece to accompany the new research this week…

Sander - So they wanted to see, A, what is the causal effect of being exposed to misleading information about vaccines on people's intention to get vaccinated? And B, how can you estimate the total damage from that? Can we maybe even estimate how many lives could have been saved if people hadn't been exposed to this type of misinformation?

Chris - What do we call misinformation, though? Because, I put it to you, the Chicago Tribune, during all this, led with a headline that said a doctor died two weeks after having the Covid jab. But it doesn't go on to say whether that was because he was run over, or had a heart attack, or something else that had nothing to do with the vaccine.

Sander - Exactly. That headline was one of the most viral because millions and millions of people were exposed to it. And that's exactly the issue: a healthy doctor did die two weeks after getting the vaccine, right? But the headline falsely implies causation where there's only correlation, and that is a well-known logical fallacy. That's why it's misleading, even though the words are technically true. What they wanted to do in this study is find out how persuasive outright falsehoods are compared with these more subtly misleading statements. And what they found was that this true-but-misleading type of headline was, in a way, 46 times more damaging than outright falsehoods in its effect on people's intentions to get the vaccine.

Chris - Why do you think that is?

Sander - Well, from a technical perspective, the answer is reach. Outright falsehoods aren't shared by that many people, so their reach is fairly limited. If you look at the maths in their paper, what they try to estimate is the percent reduction in your intention to get vaccinated, and then they multiply that by the actual exposure on social media. Outright falsehoods have quite low exposure, whereas mainstream media that's highly misleading has huge exposure. And so when you combine the persuasion effects measured in the lab with millions of people being exposed, that's where you do quite a lot of damage.
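To make that arithmetic concrete, here is a minimal sketch in Python of the kind of estimate Sander describes, where expected damage is roughly the per-exposure drop in vaccination intent multiplied by the number of people exposed. All numbers below are hypothetical placeholders, not figures from the Science papers; the point is simply that a weak effect with huge reach can outweigh a strong effect with tiny reach.

```python
# Hypothetical illustration of the estimate described above:
# damage ~ (per-exposure drop in vaccination intent) x (number of people exposed).
# All numbers are invented for illustration; none come from the studies.

def estimated_damage(effect_per_exposure: float, people_exposed: int) -> float:
    """Expected number of people dissuaded = effect size x reach."""
    return effect_per_exposure * people_exposed

# Outright falsehood: stronger per-exposure effect, but tiny reach.
falsehood = estimated_damage(effect_per_exposure=0.02, people_exposed=100_000)

# 'True but misleading' mainstream headline: weaker effect, huge reach.
misleading = estimated_damage(effect_per_exposure=0.005, people_exposed=50_000_000)

print(f"Falsehood:  ~{falsehood:,.0f} people dissuaded")
print(f"Misleading: ~{misleading:,.0f} people dissuaded")
print(f"Reach makes the misleading headline {misleading / falsehood:.0f}x more damaging here")
```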

Chris - Is the take-home, then, that more responsible reporting is needed? Because it sounds like that was just a shoddy headline that was clickbaity. Is that really what we're talking about here? That we just need more responsible journalism?

Sander - Yes, in a way, I think we do need more responsible journalism. And that means fewer clickbait headlines, less emotional manipulation, and fewer articles with a clear bias or slant. But the other thing we need is maybe to broaden our understanding of what misinformation is. There's some consensus among experts that it shouldn't be just outright falsehoods, because that's a fairly small proportion of what we see on a daily basis. The bigger problem is misleading content, right? There's some nugget of truth in there, but it's otherwise manipulative. And I think that is the bigger problem and the bigger part of the definition. So when we think about misinformation, we shouldn't only think of flat earthers, but also about mainstream content that is highly misleading.

Chris - And the other paper looks at something very relevant right now on both sides of the Atlantic: elections, election bias, and the spreading of misinformation. They say a staggering amount, 80% plus, of the misinformation originated from a tiny fraction of users.

Sander - Yeah, and these are called super spreaders or super sharers. And that's a more general finding: most of the fake news comes from a small group of highly networked, highly influential individuals, supplying up to a quarter of the falsehoods that their followers see in their newsfeeds. So if you are on Twitter, about a quarter of the falsehoods you see might be coming from a super spreader or super sharer that you're following, intentionally or by accident. I'm assuming nobody in your audience is following any super sharers, but if that were the case, that's basically what they found. Now, I will say that the difference with the other study is that they focus on outright fake news. So if you put two and two together, what we haven't looked at is who the super sharers of misleading information are. We know misleading information can be more impactful, but who are the super sharers of generally misleading content? Because they're the ones exacerbating its reach. Coming back to the Chicago Tribune headline: that was actually influential amongst anti-vaccination activists, as you predicted. It was a mainstream outlet that published it, but it was pushed by anti-vaccination groups on Facebook. Similarly, another recent study showed that about 50 physicians in the US were responsible for a disproportionate amount of Covid misinformation. So there are these networks of highly influential actors amplifying fake as well as misleading content. And in fact, if you look at the motives of super spreaders, they're quite diverse: for some it's fame, for others it's financial. To what extent we're able to change their motives is an open question. Maybe it's more realistic to try to intervene somehow, to make sure that their followers aren't exposed to it. Maybe that's a more plausible route than trying to change the super spreaders themselves.

Chris - So, persuading the social network platforms themselves to do something. One person I spoke to did a study looking at whether you could add a button: instead of just 'like' and 'dislike', an 'I trust this' button. Then you are basically pinning your integrity to the message, and people will be a bit more hesitant to pin their integrity on something that might be dubious than simply to like it and share it.

Sander - Exactly. And the idea that misleading content is especially problematic aligns well with the research that we do, which is about helping people spot manipulation in content wherever they may see it. But I was going to say: we ran out of space, so the editors cut my final paragraph, which was about a newer study showing that you can actually try to change the incentives for super spreaders and their followers by implementing some sort of credibility meter. The idea is that if you keep floating untrustworthy content, that's going to hurt a visible, public credibility meter that might be assigned to you. And nobody wants to put their reputation at stake if it actually means they're going to lose points, or lose likes, or the audience might think they're not credible. So assigning trustworthiness ratings to social media accounts could provide a different type of incentive. And I think it's not a bad idea.
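As a rough sketch of how such a credibility meter might work mechanically, here is a minimal Python illustration. The class name, update rule, and constants are assumptions made for illustration, not a design from the study Sander mentions; the essential idea is just that untrustworthy shares visibly cost more reputation than trustworthy ones earn back.

```python
# Hypothetical sketch of a per-account credibility meter.
# The scoring rule and constants are illustrative assumptions,
# not a published design from the research discussed above.

class CredibilityMeter:
    def __init__(self, score: float = 1.0):
        self.score = score  # 1.0 = fully credible, 0.0 = no credibility

    def record_share(self, trustworthy: bool) -> float:
        """Update the account's public score after it shares a piece of content."""
        if trustworthy:
            # Credibility recovers only slowly through trustworthy sharing.
            self.score = min(1.0, self.score + 0.01)
        else:
            # Untrustworthy shares cost far more than trustworthy ones earn back.
            self.score = max(0.0, self.score - 0.10)
        return self.score

meter = CredibilityMeter()
for flagged_untrustworthy in [False, True, True, False]:
    meter.record_share(trustworthy=not flagged_untrustworthy)
print(f"Public credibility score after four shares: {meter.score:.2f}")  # 0.81
```

The asymmetry between the penalty and the recovery rate is the incentive Sander describes: followers can see at a glance that an account has been sharing flagged content, which raises the reputational cost of doing so.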
