Misinformation and disinformation both refer to the spread of false information, but the difference lies in intent.
- Misinformation: False, inaccurate or incomplete information shared because people believe it to be true, not because of malicious intent.
- Disinformation: False, inaccurate or incomplete information shared with the intent to deceive and manipulate.
In other words, misinformation spreads unintentionally, but disinformation is often the result of coordinated campaigns by actors who know the information is false and have something to gain by spreading it.
Both misinformation and disinformation spread easily on social media because the algorithms that power platforms such as Facebook are designed to amplify controversial or sensational content, creating an environment where false information thrives. These platforms also sustain echo chambers in which users deeply trust information shared within networks of like-minded individuals. If a friend shares a piece of misinformation, many users trust it because they trust the friend who shared it, never digging deeper to find the original source, or they ignore news that, while true, contradicts their existing beliefs.
Journalists, especially audience engagement editors, face a myriad of challenges when trying to debunk viral falsehoods as they crop up. The sheer volume and speed of misinformation on social media make it difficult to address every instance. Beyond that, rigorous fact-checking is time intensive, so a falsehood has often spread far and wide before a newsroom can tackle it, and the resulting fact-check typically reaches a smaller audience than the falsehood did.
That’s not to say fact-checking is a futile process. It’s a vital bulwark against the erosion of democracy. Here are some tips on how to do it well.
- Pre-bunk: Work to debunk false information before it spreads by collaborating with reporters to point out common sources of misinformation and disinformation and provide context around issues in your community that often fall prey to falsehoods.
- Explain your process: In social posts, newsletters or short-form videos, always try to squeeze in a line or two about how the fact-checking was done. Answer questions like “What sources did reporters consult? How many of them? And in what order?”
- Use accessible language: Misinformation and disinformation are deliberately spread in easily digestible, clickbait-esque formats. Avoid using jargon in your social posts and keep the wording as easy to follow as the falsehood that spread in the first place.
- Don’t repeat the exact falsehood: Doing so only reinforces the falsehood in audiences’ heads. Because readers have likely been exposed to the falsehood longer than to your fact-check, they will be more likely to remember the false information than the correction.
Guides & Best Practices
First Draft News
“How journalists can responsibly report on manipulated pictures and video” by Victoria Kwan
This article was adapted from a larger guide on how to responsibly report and disseminate information in an era marked by viral falsehoods. This article is specifically helpful for engagement reporters and social newsgathering teams looking for a refresher on how to flag manipulated audiovisual content — and what to keep in mind as you debunk it.
First Draft News
“A guide to pre-bunking, a promising way to inoculate against misinformation” by Victoria Kwan
This step-by-step guide provides a comprehensive framework for how to pre-empt misinformation, including notes on when and how to do so, plus ways to increase a pre-bunk’s reach. This is an excellent starting point for engagement journalists who want to start strategizing across their newsrooms for how to systematically approach misinformation.
First Draft News
“How journalists can avoid amplifying misinformation in their stories with overlays”
Written in tandem with a primer on when to deploy visual misinformation cues, this guide explains how to design accessible fact-checking overlays to debunk viral visual falsehoods on social media. This is a good starting point for social media editors looking to build templates for posting election content.
“12 principles designers should follow for labeling manipulated media” by Emily Saltz, Tommy Shane, Victoria Kwan, Claire Leibowicz, and Claire Wardle
Though created with tech developers and user interface designers in mind, engagement professionals and SEO producers can borrow these principles for how to classify misinformation and disinformation debunks across platforms. This guide covers how to tease more information for audience members who want it and how to create a consistent — but flexible — labeling system.
Center for Democracy and Technology
“How to spot and counter online voter suppression” by Emma Llansó and Ben Horton
This guide outlines the different types of virtual voter suppression techniques alongside a checklist for social media and engagement journalists to use while creating proactive and reactive voter participation content. This is a great resource for journalists seeking ways to tamp down voter suppression efforts as they crop up or ways to empower consistent civic participation.
“What’s the best way to deal with a flood of misinformation? Maybe it’s time for some deliberate ignorance” by Joshua Benton
This article argues that news organizations can better fight misinformation by ignoring low-level or seemingly innocuous falsehoods, and instead dedicating those resources to covering credible threats. It includes an infographic that can be adapted by engagement journalists, reporters and editors to use when developing criteria for what misinformation or disinformation is newsworthy enough to cover.
Assistance & Training
The Knight Center for Journalism in the Americas at University of Texas at Austin offers Information and Elections in the Digital Era, a free five-week, self-paced online training that covers how journalists can best address hate speech, misinformation and disinformation during election cycles. It focuses on both preventative and corrective measures.
The Poynter Institute offers a suite of free or low-cost, self-paced online trainings in fact-checking and social verification for editors and reporters, covering accuracy in the digital age and how misinformation spreads on Spanish-language social media in the U.S. These trainings can be useful for engagement editors looking to refresh their staffs’ skills or begin brainstorming bilingual election strategies.
First Draft News sunsetted in January 2023 when its team moved to launch the Information Futures Lab at Brown University, but its website remains a repository of guides, virtual simulations and trainings on how to track, verify and report on misinformation and disinformation across platforms.
The Trust Project is a global network of news organizations that came together to produce the Trust Indicators, a set of eight criteria used to vet the credibility and reliability of articles from different news websites. Trust Indicators can be useful for social media editors looking to synthesize content from outlets beyond their media market for X threads, newsletters and other formats about issues impacting their community.
Election Integrity Partnership for Nieman Lab
“What makes an election rumor go viral? Look at these 10 factors” by Kate Starbird, Mike Caulfield, Renée DiResta, Emma Spiro, Madeline Jalbert and Michael Grass
“Journalists are often used in AI-generated misinfo, but cheaper tricks may still be more troubling” by Ren LaForme
“People share misinformation because of social media’s incentives — but those can be changed” by Ian Anderson, Gizem Ceylan and Wendy Wood
“Psychological inoculation improves resilience against misinformation on social media” by Jon Roozenbeek, Sander van der Linden, Beth Goldberg, Steve Rathje and Stephan Lewandowsky
Journal of Applied Communication Research
“Fake news by any other name: phrases for false content and effects on public perceptions of U.S. news media” by Jessica R. Collier and Emily Van Duyn
“You Are Fake News! Factors Impacting Journalists’ Debunking Behaviors on Social Media” by Magdalena Saldaña and Hong Tien Vu