By Katie Compton, Policy & Politics editor
Like many people, I typically scroll through my social media feeds when I’m taking a break from work or connecting with family and friends. I view these online spaces as places where my brain can go to zone out for a little while.
When the Freedom Convoy participants paralyzed downtown Ottawa for several weeks, ostensibly to protest COVID-19 mandates, I felt a reflex-like need to vent my own anger about the situation online. But something prevented me from rage tweeting or launching into a Facebook rant: the specter of misinformation.
As I watched the protests in real time, I felt like I was observing people who lived in a completely different reality. I thought about the role that each of us plays in spreading misinformation online and how crises like the pandemic create opportunities for disinformation to creep into the public discourse. All of this makes it even harder for people to agree on what’s real – let alone what’s right.
Disinformation vs. Misinformation
Disinformation is generally defined as misinformation that is intentional. If your disgruntled uncle shares some false information that he read on Facebook that he believes is true, that’s misinformation. If someone – a person, government, or troll farm – is spreading information that they know is false as part of a specific agenda, that’s disinformation.
However, according to Dr. Ahmed Al-Rawi, Assistant Professor at the School of Communication at Simon Fraser University and founder of The Disinformation Project, the line between misinformation and disinformation can be blurry. It’s often impossible to determine the intent behind an individual piece of false information, and the end result is often the same. “Whether intentional or not, this kind of false information often attempts to disrupt the status quo,” says Dr. Al-Rawi. For this reason, he finds it more useful to examine disruptive information.
Dr. Al-Rawi started The Disinformation Project, which aims to study how disinformation disrupts the Canadian news landscape, in the wake of the 2016 U.S. presidential election, when discussion of “fake news” started to grow. Dr. Al-Rawi studied the use of this term on Twitter and Instagram and watched it become highly politicized. “It has become a buzzword, or a hollow term often used by politicians and interest groups to discredit their opponents,” says Dr. Al-Rawi. “It’s being used as a weapon to attack anyone who would oppose your own ideology or political views.”
What should we watch out for?
When I asked Dr. Al-Rawi about some of the red flags that people should watch out for if they are trying to spot disinformation, he said that divisive topics are ripe for promoting disruptive information, whether it is climate change, the pandemic, or the war in Ukraine. “[The war in Ukraine] is probably the best example of what is going on in terms of disruptive information,” says Dr. Al-Rawi. “On the one hand, [we have] Ukraine and the other Western countries supporting [Ukraine]. And on the other hand, we have Russia and a few other countries supporting [Russia]. And there is an information operation happening between the two parties, each one trying to prove their own point.”
Unfortunately, our brains aren’t equipped to vet the tidal wave of information that we encounter each day, especially when it comes to emotionally charged topics. Stopping to assess the source and reliability of information requires effort, and our minds are wired to take shortcuts. According to Dr. Gordon Pennycook, Assistant Professor of Behavioural Science at the University of Regina’s Hill/Levene Schools of Business, there are two ways that our brains deal with information: through intuition or deliberation. Intuitive thinking enables us to recall certain facts and make decisions quickly, without having to do much cognitive work. By contrast, when we approach new information in a deliberative way, we take time to pause, analyze, and ask questions.
Intuitive thinking is a feature, not a bug, but it makes us vulnerable to disinformation, according to Dr. Pennycook. “Our brain is really good at certain things, but sometimes our intuitions aren’t accurate. Someone can construct something that seems true to people, or at least draws their attention, but that isn’t [true]. And to override that, to figure out whether it’s true or not, you have to stop and reflect.”
In many ways, social media is the optimal platform for disseminating disinformation because it overrides our ability to think about and assess information in a deliberative way. “You’re in kind of a vulnerable position, where someone can take advantage of the fact that you’re not really engaging in a thoughtful way with what you’re seeing,” says Dr. Pennycook.
Is flagging posts and fact checking enough?
I asked both Dr. Al-Rawi and Dr. Pennycook about social media companies’ responsibilities for stemming the spread of disinformation. They both think that these companies need to step up their efforts in this regard.
Dr. Pennycook pointed to evidence showing that simply flagging inaccurate posts can backfire. “After the 2016 [U.S.] election, Facebook put these ‘disputed by third-party fact checker’ warning labels on misinformation,” says Dr. Pennycook. “But [we found that] false headlines that don’t have labels were then viewed as more accurate. People think that if it doesn’t have a label, then it must be true, or it’s been verified as being true.”
Disinformation spreaders have also begun to hijack the concept of fact-checking. Dr. Al-Rawi pointed to the Russian government’s efforts to use fake fact checks on state media and amplify websites like War on Fakes. This site frames itself as a “non-political” group of journalists providing “unbiased” information about what’s happening in Ukraine. However, other sources, such as DW.com and the Digital Forensic Research Lab, claim that the site uses a combination of legitimate debunking of misleading images from social media and fake debunking of credible news reports to spread Russian state propaganda. “That’s really, really troubling because the end result is confusing people and making them feel that they don’t know where to stand or what is real and what is not,” says Dr. Al-Rawi.
Unfortunately, both researchers think there are no real incentives for profit-driven social media companies to tackle the problem. So, we can’t expect them to stem the flow of disinformation for us. “I think it’s a joint effort that we all need to work together on because if we don’t, we will all lose in the end,” says Dr. Al-Rawi.
What might that collective effort look like? In my next post, I’ll highlight some of the emerging evidence for how to best combat disinformation and Canadian initiatives that give people the tools to sort out what is true from what is false.
Feature image: Figuring out what is true and what is false in the news and on social media requires extra mental effort. Image by Anton Melnyk for iStock.