COVID-19 'Misinformation': British Medical Journal Explains The Trouble With Fact-Checkers

By Cameron English — Jun 15, 2021
Social media censorship has exploded since the beginning of the pandemic, in large part thanks to the proliferation of so-called "fact-checkers." While efforts to limit the spread of false information online seem sensible, experts are starting to point out the downsides of tech companies moderating scientific disputes.

In the abstract, fact-checking seems like a noble practice. Objective journalists and social media platforms, informed by experts, take a good, hard look at the claims of politicians and other high-profile figures and evaluate them for accuracy, helping the public separate truth from nonsense.

The reality is far different. Fact-checkers are anything but objective arbiters of truth, an allegation that respected publications like the British Medical Journal (BMJ) are beginning to take seriously. In a May 25 opinion piece in the BMJ, freelance journalist Laurie Clarke went so far as to ask whether social media platforms are qualified to separate fact from falsehood when it comes to complex scientific topics. During the pandemic, she noted,

Facebook has removed 16 million pieces of its content and added warnings to around 167 million. YouTube has removed more than 850,000 videos related to 'dangerous or misleading covid-19 medical information.'

While a portion of that content is likely to be willfully wrongheaded or vindictively misleading, the pandemic is littered with examples of scientific opinion that have been caught in the dragnet—resulting in their removal or de-prioritisation, depending on the platform and context … prompting the bigger question of whether social media platforms … should be tasked with this at all.

I'll go a step further. Fact-checking, at least as it's performed by the press and tech companies, inevitably devolves into an ideological charade that does more harm than good. We'd all be better off without it. 

There are plenty of recent examples. Facebook, for instance, changed its policy on sharing stories about a possible coronavirus lab leak only after the Biden Administration determined the idea was no longer a conspiracy theory. The press called some 2020 political rallies COVID super-spreader events while merely urging participants in other gatherings to be cautious. CNN hyped the benefits of eating organic food, which lack any supporting evidence, while warning readers to avoid coronavirus “misinformation.”

The examples go on, but you get the point. Fact-checking is often tainted by the biases of the fact-checkers. And this means it risks contributing to the problem it's trying to solve.

How do we define 'misinformation'?

Consider this fundamental question: on what basis do fact-checkers operate? Presumably, there's an established body of scientific knowledge that guides their decisions. But in 2020, when our knowledge of the pandemic changed almost weekly, conclusive data of this sort rarely existed. As the BMJ article explained,

... the pandemic has seen a shifting patchwork of criteria employed by these [social media] companies to define the boundaries of misinformation. This has led to some striking U-turns: at the beginning of the pandemic, posts saying that masks helped to prevent the spread of covid-19 were labeled 'false'; now it’s the opposite, reflecting the changing nature of the academic debate and official recommendations.

Reporters and social media companies were forced to pick sides in scientific debates that weren't settled; they had to decide who was credible and who wasn't based on something besides evidence. Health Feedback, one of the many organizations that perform COVID-related fact-checking for Facebook, unintentionally confirmed this when it said “it won’t select scientists to verify claims if they’ve undermined their credibility by 'propagating misinformation, whether intentionally or not.'”

This is viciously circular logic, of course. How do they know some scientists propagated misinformation? Because their fact-checkers confirmed it. Why are those fact-checks reliable? Because their scientists don't propagate misinformation.

Dizziness-inducing fallacies aside, this kind of policy comes with unfortunate side effects, the most important being that it stifles the mechanism for filtering out bad ideas—open debate. Science has a long (albeit messy) history of eventually getting things right. But, as a 2016 American Biology Teacher article explained,

… contrary evidence does not magically and conveniently appear on its own. Errors rarely announce themselves. They can go completely unnoticed without the appropriate perceptual filters. Or they can be dismissed as artifacts or unusual exceptions. Alternative ideas must take hold, allowing one to identify possible blind spots and where, precisely, relevant new evidence might be telling.

In the context of the pandemic, these blind spots were arguably reinforced because scientists can build careers and earn notoriety by pushing back against bad information, as Nature noted in June 2020. That's all well and good, but it becomes a problem when academics attack each other in public forums, as they did throughout last year, and there are rewards for being on the “right” side of an issue.

“[T]his has fed a perverse incentive for scientists to label each other’s positions misinformation or disinformation,” Clarke wrote. Much as the former president dismissed his critics as peddlers of fake news, she noted, scientists attacked the character of other scientists instead of engaging in scholarly, evidence-based debate. Anti-GMO groups and their allies in the press tried the same ploy six years ago when they smeared biotechnology researchers as academic mercenaries for Monsanto. The point was to disqualify certain scientists, providing the perfect excuse to ignore the quality of their work.

Long-term consequences

With high-profile journals like the BMJ getting involved, maybe this incendiary situation will calm down. Then again, if governments continue pressuring social media platforms to censor “misinformation” about topics beyond COVID-19, and academics benefit from helping them, we may be in for trouble. Science will become little more than a political weapon wielded by partisans to advance their agendas. Quoting Jevin West, associate professor in the Information School at the University of Washington in Seattle, Clarke concluded that the public's trust in science might be the ultimate victim:

There are concerns that this approach could ultimately undermine trust in public health. In the US, says West, trust in the government and media is falling. He explains, 'Science is still one of the more trusted institutions, but if you start tagging and shutting down conversation within science, to me that’s even worse than the actual posting of these individual articles.'
