Alice Fleerackers, Science in Society editor
It’s March 9, 2021, exactly one year after the first COVID-19-related death was reported in my home province of British Columbia. I wake up to see Canadian Doctors Speak Out trending on Twitter. Curious, I click through. There, I uncover heated streams of tweets, some calling out misinformation, others confidently sharing the facts of COVID. At the heart of the debate is an 11-minute-long video featuring six medical doctors in white lab coats. Armed with impressive-looking charts and figures, all six are promoting misleading or false claims about COVID-19.
This video is just one of many examples of misinformation that circulated during the COVID-19 infodemic. Last summer, 96 per cent of Canadians surveyed reported encountering at least one misleading or inaccurate claim, and new evidence suggests that Facebook communities spreading vaccine misinformation are rapidly growing. What sets the video apart from other examples is the role doctors played in promoting the flawed messages. As Canadians’ most trusted source of COVID-19 information, medical professionals hold incredible power over what we do and don’t believe.
All of this made me wonder: Why do we trust who we trust? What makes us believe some messages over others? To find out, I spoke with three experts about the fascinating science of credibility.
Scienceploitation: The power of perceived expertise
There’s a word for videos like the one I found on Twitter: scienceploitation.
At least, that’s the term Dr. Timothy Caulfield uses for messages that use legitimate science to promote false or misleading claims. A Canada Research Chair in Health Law and Policy, Caulfield studies how science is communicated to the public. He is especially interested in health misinformation and has identified countless examples of messages, like the doctor video, that use science’s cultural authority to promote a sinister agenda.
“Those pushing misinformation will often use hyped or twisted science to push their [messages],” he says. “Physicians use a little bit of science, a little bit of scientific language and then their authority to paint a particular picture.” Scienceploitation, Caulfield explains, is dangerous because scientists and doctors are highly trusted experts: “What using [scientific] language does is give a veneer of credibility,” he says. “It makes it more difficult for a debunker to counter it.”
This idea that expertise lends credibility is supported by a wealth of research. These studies find that including simple cues – like statistics or a white lab coat – in messages can have a profound influence on how trustworthy we perceive those messages to be.
Expertise is in the eye of the beholder
Of course, trusting experts isn’t necessarily a bad thing – just think where we would be today if we had ignored our public health officials back in March 2020. Yet, this strategy can lead us astray when our perceptions of expertise don’t reflect reality.
“Expertise is somewhat in the eye of the beholder,” says Jaigris Hodson, a Tier 2 Canada Research Chair in Digital Communication for the Public Interest. Hodson has been researching how Canadians evaluate and share information about COVID-19 online.
In a new study (currently under review), her team confirmed that people rely heavily on judgments of expertise when deciding whether or not to trust social media content. But the study also added nuance to the picture. While some of the participants’ judgments of expertise were accurate, Hodson notes that others were a little, well, squishier.
“[Participants’ judgments] reflected what they felt was important,” she says. “It might be life experience, or it might be academic training – and that academic training may or may not be health-related.” Simply put, people rely on perceived rather than earned expertise when deciding who to trust.
The peril of the gut check
Hodson’s research suggests expertise isn’t the only factor affecting people’s credibility judgments. Almost equally important are our pre-existing beliefs. “People also indicated that they’re more likely to trust information … when it has a feeling of rightness, you know, deep in their guts,” she says. “It confirms something that they already know to be true.”
This feeling of rightness that Hodson’s study identified aligns with what we know from other research. People tend to pay more attention to information that supports what they already believe and to see information that challenges those beliefs as less credible. This so-called confirmation bias affects how we judge everything from climate change to health news. Confirmation bias may be even more dangerous on social media, where we have to sift through so much bad information and where algorithms are often designed to show us more of what we’ve already watched, read, or liked.
Gut feelings and perceived expertise only tell part of the story, though. Both Hodson and Caulfield stress just how complicated credibility judgments are. Research has found that we’re also easily swayed by information that’s told as a story, communicated in simple language, or frequently repeated. Messages from people we already like are also viewed as more trustworthy, as are those that make us feel emotional or “fill gaps” in our understanding. The most nefarious disinformation campaigns, Caulfield says, bring all of these persuasive elements together into one “slick package” – a message that “becomes so persuasive it is difficult not to be persuaded.”
Separating credibility from pseudo-credibility
If you’re wondering how to tell those slick packages from the real deal, you’re not alone. Journalism scholar Marcelo Träsel has spent a lot of time thinking about exactly this question. In particular, he’s curious about which cues readers can use to distinguish credible journalism from pseudo-journalism, content that mimics credible journalism but spreads inaccurate or distorted information.
One avenue Träsel and his colleagues have explored is journalistic credibility indicators. These indicators include best practices for trustworthy and transparent reporting, such as providing author names and bios and backing up claims with reputable sources. First developed by the international journalism consortium The Trust Project, they’re meant to help readers (and search engines) quickly assess the integrity of online news sources.
In 2019, Träsel and his colleagues examined how many of these credibility indicators made it into Brazilian news stories. The results were not encouraging. Most media outlets the team looked at used fewer than half of the indicators, with even the highest-scoring outlet – BBC Brazil – failing to reach the 70 per cent mark.
While these results are disappointing, Träsel is hopeful that news readers can learn to distinguish credibility from pseudo-credibility. The answer, he believes, is critical thinking. “Just take a minute or two to really reflect,” he says. “Ask yourself if that person you’re listening to can know more about the [topic] than a government official or a doctor.”
Hodson and Caulfield agree. “If you see a headline and it makes you feel really happy or angry or scared, that’s the time to stop,” Hodson says. “Take a look around to really assess that information.”
“You want to be aware of all of those twisting forces, all those cognitive biases,” Caulfield says. “The key is to be aware.”
To learn more about how to detect misinformation, check out the following resources:
- First Draft: Think ‘Sheep’ before you share to avoid getting tricked by online misinformation
- Berkeley QB3: How to spot misleading science reporting
- The Conversation: 6 tips to help you detect fake science news
- Harvard Business Review: Outsmart your own biases
- Health Desk: COVID-19 Vaccine Media Hub