Center for American Progress

RELEASE: With COVID-19 Mis- and Disinformation Running Rampant, New CAP Report Outlines Preventive Recommendations for Platforms To Combat Deceptive Content
Press Release

Washington, D.C. — From misleading information about coronavirus symptoms to full-scale, conspiracy-theory-style videos, misinformation and disinformation about the coronavirus have run rampant on social media platforms in recent months. To protect public health and prevent malicious actors from capitalizing on fear and confusion, the Center for American Progress today released a new report outlining preventive recommendations for platforms to combat coronavirus mis- and disinformation. CAP’s report notes that while long-term regulatory action will be needed, in the near term, platforms must do more to reduce the harm they facilitate, particularly regarding the coronavirus.

“Reactive efforts are not enough. Social media companies can—and must—change product features that create an environment where disinformation thrives,” said Erin Simpson, associate director of Technology Policy at CAP and co-author of the report. “Everything should be on the table. Product changes that add context and friction could reduce the creation and spread of harmful information, and virality circuit breakers could help minimize damage once it’s live.”

“On a day when the sequel to ‘Plandemic’ is set to be released, it is clear that merely reacting to disinformation once it has gone viral is not enough,” said Adam Conner, vice president for Technology Policy at CAP and co-author of the report. “Adding friction may horrify a generation of product managers and designers, but COVID-19 mis- and disinformation shows that there are far worse things online than a little user friction.”

CAP’s report outlines product-level changes social media platforms could make, focused on increased context and friction, to proactively slow the tide of pandemic mis- and disinformation. Product-level changes adjust the content users see and the interfaces they use to interact with that content, such as the interface for sharing a post or how the information in that post is presented. These recommendations include:

Introducing front- and back-end friction

  • Paralleling financial market circuit breakers, platforms should develop virality circuit breakers. Trending coronavirus posts that show indicators of mis/disinformation should trigger rapid review by content moderation teams and be prioritized within fact-checking processes.
  • Fast-growing coronavirus content should trigger an internal circuit breaker that temporarily prevents the content from algorithmic amplification in newsfeeds, appearing in trending topics, or other algorithmically aggregated and promoted avenues while still allowing individual posting or messaging.
  • Fast-growing coronavirus content that is unchecked should cause a generic warning to pop up, such as “This content is spreading rapidly, but it has not yet been fact-checked,” until the content can be reviewed by platforms or third-party fact-checkers.
  • Test multiple iterations, observing short- and long-term effects, to ensure interventions do not generate unintended or backfire effects.
  • Retool video autoplay queues to play only authoritative videos regarding the coronavirus.
  • Serial producers or sharers of coronavirus mis/disinformation should be removed from recommendation algorithms for accounts to follow/friend and as groups to join.
  • If serial producers/sharers who have been notified of coronavirus mis- or disinformation continue to violate terms over time, their existing members or followers should be notified of the repeated violations and asked to choose whether to stay/follow or leave/unfollow.
  • Platform distribution algorithms should also factor in the sharing of content later found to be mis- or disinformation when determining future distribution, notifying accounts with a demonstrated history of repeatedly spreading mis/disinformation and docking their future distribution.
  • Develop scan-and-suggest systems to proactively discourage coronavirus mis- or disinformation. For draft, prepublication content that appears to violate terms around known areas of important or harmful coronavirus mis- or disinformation, alert users of potential violations and ask if they’d like to revise their post before publication. Such a strategy could scan information in text-based posts, as well as captions for photo or video content.
  • For draft content that appears to violate terms around known areas of important or harmful coronavirus mis- or disinformation, alert users to credible fact-checking resources relevant to the topic.
  • For accounts that frequently distribute coronavirus misinformation, implement an “Are you sure you’re not spreading false information about COVID-19?” click-through cue before a user can post, share, forward, or publish content. This intervention could appear progressively more frequently for accounts that continue to share mis- or disinformation.
  • For platforms with direct-messaging capabilities, limit the number of chats to which a message can be forwarded at once.
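
The virality circuit breaker described above can be sketched in code. This is an illustrative sketch only, not an implementation from the report or any platform; the class, function, and threshold are all hypothetical.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Post:
    """Minimal stand-in for a platform post; fields are illustrative."""
    id: str
    topic: str
    shares_last_hour: int = 0
    fact_checked: bool = False
    eligible_for_amplification: bool = True
    warning: Optional[str] = None


# Hypothetical threshold: posts spreading faster than this trip the breaker.
VIRALITY_THRESHOLD = 10_000


def apply_circuit_breaker(post: Post, review_queue: list) -> Post:
    """Pause algorithmic amplification of fast-growing, unchecked
    coronavirus content and queue it for prioritized human review."""
    if post.topic == "coronavirus" and post.shares_last_hour >= VIRALITY_THRESHOLD:
        if not post.fact_checked:
            # Removed from algorithmic feeds and trending topics, but
            # individual posting/messaging remains possible.
            post.eligible_for_amplification = False
            post.warning = ("This content is spreading rapidly, "
                            "but it has not yet been fact-checked.")
            review_queue.append(post)  # rapid review by moderation teams
    return post
```

As in financial markets, the breaker does not judge the content itself; it only pauses acceleration until a slower, more careful process catches up.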

Providing better and deeper context

  • For posts on topics related to the coronavirus, automatically append links to information sources or dedicated information centers.
  • For posts on topics related to coronavirus mis- or disinformation, build in side-by-side displays of an appropriate fact check.
  • Provide basic contextual information about the poster, including details such as location, relationship to the reader, duration on the platform, institutional affiliations, history on the platform, and verified expertise in key areas of public interest.
  • Provide contextual information about the source of third-party content, such as content publication dates, host sites, details inferred from top-level and second-level domains, on-platform details or information from verification programs about the third party, whether the URL is frequently fact-checked on a platform, or off-platform information drawn from groups with strong verification processes such as Wikipedia.
  • For in-feed content on topics trending with coronavirus mis/disinformation, take a “truth sandwich” approach, pairing potentially harmful posts with quality in-feed sources before and after.
  • Label accounts that repeatedly share false or misleading information about COVID-19.
  • Label posts that link to outside sites whose content is repeatedly fact-checked as false or misleading on a platform.
  • Carry over any warning labels or fact checks when harmful content is shared across platforms.
  • Provide contextual clues about how the content has been shared or promoted on-platform or across platforms.
  • Label accounts of look-alike media sites that falsely present themselves as legitimate journalistic outlets.
  • Notify publishing accounts that post COVID-19 mis/disinformation of the offending content and the relevant fact check.
  • Notify users who view or interact with COVID-19 mis/disinformation about the specifics of their interaction, the offending content, its relationship to platform terms, and a relevant fact check.
  • For high-reach accounts, platforms should provide information about the account’s provenance, history, credentials, and/or follower composition.
  • For direct messages, label any forwarded message as forwarded.
  • For posts on coronavirus mis/disinformation that do not merit removal under terms or standards, provide labels on the post and the account, in addition to side-by-side fact-checking suggestions.
  • For offsite video content—or all video content—include pre- or post-roll credible content.
  • For videos on key coronavirus mis/disinformation topics, include a TV news-style ticker warning of frequent mis- or disinformation on this topic and link out to relevant fact checks.
  • For verified experts in key domains, provide domain-specific verification labels.
  • Major social media platforms should compensate any independent entities whose work is used to help provide quality, contextual information, including fact-checking organizations, Wikipedia, and independent media groups—even if licensing allows free use.
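
The contextual details described above can be imagined as a small data structure a platform surfaces beside a post. This is a sketch under assumptions, not any platform’s actual API; all field and function names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class PosterContext:
    """Contextual details a platform could display alongside a post.
    Field names are illustrative, not drawn from any real platform."""
    location: Optional[str]
    years_on_platform: float
    institutional_affiliation: Optional[str]
    verified_expertise: List[str]       # e.g., ["epidemiology"]
    repeat_misinfo_sharer: bool         # drives a repeat-sharer label


def context_labels(ctx: PosterContext) -> List[str]:
    """Turn poster context into short labels shown beside a post."""
    labels = []
    if ctx.verified_expertise:
        labels.append("Verified expert: " + ", ".join(ctx.verified_expertise))
    if ctx.repeat_misinfo_sharer:
        labels.append("Repeatedly shared false or misleading COVID-19 information")
    labels.append(f"On platform for {ctx.years_on_platform:.0f} years")
    return labels
```

The point of such a structure is that context travels with the post: whichever surface renders the content can render the same labels, rather than each surface inferring credibility on its own.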

Click here to read “Fighting Coronavirus Misinformation and Disinformation: Preventive Product Recommendations for Social Media Platforms” by Erin Simpson and Adam Conner.

For more information on this topic or to speak with an expert, contact Allison Preiss at [email protected].