The War on Misinformation and The Elephants on the Battlefield

Views are totally my own.

War on Misinformation

In the health comms section of my blog, you’ll notice that I write a lot about the war on misinformation and have been doing so since the beginning of the pandemic. Fueled by the 2016 election and COVID-19, there’s a massive, panicked movement to purge the digital world of misinformation. If you scroll through Twitter long enough, you’ll see the phrase “Misinformation Kills” at least once, probably twice. Some view misinformation as an infectious disease that spreads from one brain to the next. Armies of “fact checkers” have bravely deployed to websites to fight the enemy that is misinformation, and social media sites have tried everything from blocking and deletion to disclaimers.

I’ve done a lot of work in scientific communications, and the word “misinformation” is often used. I’ve observed “social listening” tools in action, used to collect trending misinformation and identify top misinformers. I’ve watched recommendations get made in real time in an attempt to stop misinformation, and I’ve seen people’s accounts get labeled by health officials, even though these people have no idea they are receiving digital scarlet letters. And to be honest, it all felt a little sloppy, unscientific, McCarthyist, and sleazy to me. I only speak for myself here, but I’ll briefly explain some of the Elephants I’ve seen on the battlefield:

I have yet to find an agreed-upon, objective, evidence-driven definition for misinformation: 

If you’re going to fight something, especially in a way that may limit someone’s speech, you should at least be very clear about what it is you’re fighting. Sometimes it’s simple to point to information and call it misinformation, but other times it’s more difficult. For example, is politically motivated information misinformation? Is a religious belief misinformation? Is a personal anecdote misinformation? Is a simplification misinformation? Is a challenge to a consensus misinformation? During an international conflict, who is spreading misinformation? Which “side” is wrong? (For example, think about what the US government said before we invaded Iraq or started a ten-year war in Vietnam. Or think about the Israeli-Palestinian conflict.) Further, what is the objective test for distinguishing dissent from misinformation? Even the surgeon general’s report on misinformation stated that not everyone agrees on how to define it. In its references section on page 17, it states:

“Note: Defining “misinformation” is a challenging task, and any definition has limitations. One key issue is whether there can be an objective benchmark for whether something qualifies as misinformation. Some researchers argue that for something to be considered misinformation, it has to go against “scientific consensus” (e.g., Chou, Gaysynsky, & Cappella (2020)). Others consider misinformation to be information that is contrary to the “best available evidence” (e.g., Johns Hopkins Center for Health Security (2021)). Both approaches recognize that what counts as misinformation can change over time with new evidence and scientific consensus. This Advisory prefers the “best available evidence” benchmark since claims can be highly misleading and harmful even if the science on an issue isn’t yet settled. At the same time, it is important to be careful and avoid conflating controversial or unorthodox claims with misinformation.”

Misinformation is almost always portrayed as an online problem, but that only captures a sliver of a human’s information diet: 

All of the social listening reports I’ve seen focus only on online information. That makes sense, because online data is what you can capture. But you can’t capture all of the other conversations and exposures to information happening in a person’s life, nor can you weigh them in terms of power and influence. In short, social listening tools can patrol social media sites and screenshot “misinformation” and “misinformers,” but they can’t screenshot your conversations at the dinner table, in a classroom, at the gym, or at the local bar. Most of the data on misinformation comes from the digital exchange of information, which, in my opinion, is just a sliver of the information pie. Folks will counter with, “Well, wrong information spreads like wildfire online!” Even if it does, can you prove that it significantly changes a person’s beliefs or behavior? I haven’t seen those studies. And unless you can show me that online misinformation has significantly more influence over a person’s choices or behaviors than, say, family dinner-table chatter, a classroom rumor, a church homily, subway conversations, a conversation in a doctor’s office, or what they see on TV, I remain hesitant to spend millions of dollars on campaigns against online misinformation, particularly when they may infringe on a person’s freedom of speech and when we could be spending that money on causes with more predictable and measurable benefits. Further, as a public health professional, I can tell you that there is never enough funding in public health, so if you have limited funds to begin with, at least make smart, surgical decisions with your moolah.

“Misinformation Kills”? Okay, Show Me How You Are Measuring That.

This might read like a controversial statement, but I don’t mean it to be. I mean, show me that it kills: show me that the war on misinformation is justified because a significant number of people who are exposed to misinformation take action on that information and end up dead. I’ve seen provocative anecdotes suggesting as much online, for example, the story about the guy who drank bleach after Trump said something about bleach and COVID, but I have yet to see this conclusion presented in a scientific, systematic way that didn’t grossly simplify how humans interact with information. I feel like the bleach-drinker would have ended up with a Darwin Award for myriad reasons; the nail in the coffin just happened to be taking Trump’s bleach comment to heart. (Do gullible people get killed more often? Perhaps something to look into.)

I also think it’s really difficult to prove a causal relationship between a specific bit of information and someone’s death while controlling for all confounders (such as an underlying, fixed belief, a baseline of distrust, and other sources of information in a person’s life). “Misinformation Kills” is a popular slogan, often said in good faith by folks who want to “follow the science” and save the world, but does the statement itself follow the science? How solid is the measuring tool behind it? It’s also intriguing to me how misinformation plays the scary, trending “killer” when one could easily have cast other things in the role, particularly from a public health point of view. For example, things that “kill” people include: a for-profit health system that is unaffordable for many, an underlying trust in someone or something that may be harmful, obesity, sitting too much, peer pressure, bullying, chronic stress, the epidemic of loneliness, air pollution, and highly processed carbohydrates. My point is that there are a lot of killers out there. Heck, the slogan could be “(insert here) Kills.” “Smoking Kills” was a popular slogan for a while, but of course, there is far more objective evidence for that statement.

Exposure to Information Does Not Translate to Belief or Behavior:

In all of the social listening reports I’ve seen, misinformation data-gathering focuses only on social media sites, the top accounts that spread misinformation on those sites, and numbers of likes, views, and shares. I have yet to see a social listening report that includes whether anyone took action in real life based on the online virality of a specific piece of misinformation. If misinformation is truly dangerous, then show me that exposure to a specific piece of information leads to a change in belief or behavior. Show me that likes and shares equal a change in belief or behavior, and that the behavior is not, in fact, linked to an underlying belief system. And while anecdotes are nice, show me that it does this on a significant scale, without confounders or flaky measuring tools being a significant issue. Until I see that, I’ll have a difficult time supporting the allocation of limited funds and resources to the war on misinformation.

A bit outside the “public health” realm, I remember the passionate campaign to wipe Russian disinformation from the internet for interfering in our 2016 election. Yet this 2023 study in Nature found that Russian disinformation did not significantly impact individual attitudes or voting behavior. Now what?

Treating Misinformation like an Infectious Disease Denies Our Ability to Reason:

Some sci comms folks who work in public health refer to the spread of misinformation as an “infodemic.” Yet when a person is confronted with information, he or she thinks about it, processes it, and talks it over with other people. Exposure to information does not equal planting beliefs in someone’s head. There is no “cellular takeover” like there is with an actual infectious disease. Like, I can’t reason COVID-19 out of my cells, but I can reason my way out of believing that the vaccines are actually microchips meant to track us.

Also, humans are complicated. Even if a person shares misinformation, there’s no way to tell whether he or she actually believes it, is going along with a party line, is attempting to win favor with a particular person or group, or just wants to feel part of something. I’ll share a story about online sharing just to show how bizarre it can be. One time, a West Point classmate of mine took a picture of my cat Awol dressed in a cadet uniform (originally designed for a teddy bear) that was posted on my Facebook page, uploaded it to his own page, shared it with his followers, and wrote, “Doesn’t my cat look cute?” It was the strangest thing, since Awol was MY cat, and I still have no idea why he did it. Like, he never even met my cat or talked to me much. All I can say is… we’re weird… humans are weird. Often our motivation for doing things is not so obvious.

Of Course Online Misinformation Is an Issue; We Gave Everyone the Internet:

Sometimes I chuckle when people are surprised or shocked by misinformation online. Um, hello? We gave everyone the internet. Look around you. Think about alllll the people out there. Now, tell me again why you’re shocked? And please think about the massive effort it would take to control what everyone posts and/or sees on an endless, ever-expanding digital landscape. Is that the best use of precious time and resources? Is there a more cost-effective approach that doesn’t entail an endless, unwinnable war?

Putting the Government in Charge of Misinformation May Do More Harm than Good:

I often hear that “public health people” should have done more to combat misinformation. By “do more,” I assume they mean regulate it online, and by “public health people,” I assume folks are referring to national, state, and local government employees. The problem here is that a large portion of the population distrusts the government, particularly in the United States. Public health addresses the entire population, not just folks who share similar beliefs and political views, so you need to “hold your bias” and take other views and reactions into consideration. You need to be a communication artist, not a communication shamer, and find a way to connect with all populations.

Public health is often intertwined with the government, so it can feel like a married couple showing up to a party where folks can tolerate one but despise the other. And, let’s face it, there is reason to not fully trust governments, particularly the US government. Our government lied to us about weapons of mass destruction, which got us into a long war that killed a lot of people; ran a hepatitis vaccination campaign in Pakistan with the sole goal of gaining intel on Bin Laden, which consequently led to widespread vaccine hesitancy in Pakistan; and was instrumental in propagating the opioid epidemic by not executing its duties as a regulator. Historically, some of the government lies told to Black people were horrific. Then there is the corporate money flowing into our government that certainly influences our elected officials and national policies. In short, it’s not difficult to find examples of why people do not trust the government. And in that sense, putting the government (or even government dollars) in charge of regulating online misinformation is a bit like putting the boy who cried wolf in charge of verifying wolf sightings: they might get it right some of the time, but there’s a history there…

Take-Home Points:

The bolded points above are just some of the Elephants I see on the misinformation battlefield. They are observations I’ve made through my own work in sci comms. And for the record, I’m not implying that misinformation isn’t a problem or can’t get someone killed; I’m just saying that the science of what misinformation is and what impact it has should perhaps evolve before we go full-throttle on expensive countermeasures, especially policies that have the potential to limit a person’s freedom of speech. The potential backfire of such policies in a democratic society may do more harm than good, and I can’t imagine such policies would do anything good for building trust.

“But there is a lot of wrong information online, Eeks!” 

That’s true. There’s also a lot of misinformation exchanged during in-person conversations. For example, I recently recorded a podcast with a researcher who studies the Amish community (a population that traditionally does not embrace digital devices or the internet) and who talked about their problem with misinformation. My two cents: instead of trying to control the exposure, which I believe is a losing battle given the scope of a person’s informational landscape, provide people with tips on how to assess information. I have no idea if anyone will find this helpful, but here is my cheat sheet on how I assess information and decide whether or not to believe it, think about it some more, and/or share it.

Thanks for reading this piece on the war on misinformation.

Feel free to check out my Causes or Cures Podcast too! Here’s the link. :)

 
