Washington, D.C. — Social media platforms must address the threat they pose to postelection legitimacy and public safety and do more to prevent their products from contributing to chaos after the polls close, says a new issue brief released today by the Center for American Progress. CAP’s brief outlines specific recommendations for platforms—including YouTube, TikTok, Nextdoor, Facebook, Reddit, Pinterest, Snapchat, and Twitter—to consider as they develop policies for the period between Election Day and Inauguration Day.
“With more Americans than ever voting by mail in this year’s election, the counting of votes and declaration of a winner in the presidential contest or other hotly contested races could take days or even weeks,” said Adam Conner, vice president for Technology Policy at CAP and co-author of the brief. “This potentially long period of uncertainty almost guarantees that bad actors, whether foreign or domestic, will try to use a wide variety of social media platforms to sow discord or violence—or attempt to delegitimize the results. Platforms have a responsibility to affirm democratic legitimacy and protect public safety in the period following the election until Inauguration Day.”
“Platforms need to have a plan in place to confront potential confusion and disinformation in what could be a lengthy period of uncertainty after the polls close,” said Erin Simpson, associate director for Technology Policy at CAP and co-author of the report. “Platforms must quickly remove content that baselessly attempts to delegitimize the election, stop disinformation about election results from going viral, and prevent their platforms from being used to threaten public safety.”
CAP’s brief notes that with the election swiftly approaching, few platforms have sufficient standards for grappling with election delegitimization attempts and postelection conflict. The authors contend that platforms, which may have acted more conservatively on election matters in the lead-up to the election to avoid the appearance of partisanship, should reevaluate their risk calculus for the postelection period, when votes can no longer be cast and more aggressive action is warranted. Platforms will have to not only remove content and accounts that incite or inflict violence but also act proactively to detect and disrupt activity that could lead to violence.
The brief outlines the following potential policies for platforms to consider as they develop policies for this crucial time in the American democratic process:
- Remove posts that baselessly delegitimize the election: Social media platforms need to remove information that baselessly delegitimizes the election; labeling alone is insufficient to prevent platform affordances from being used to destabilize it. To mitigate delegitimization attempts effectively, platforms must develop careful standards for what constitutes delegitimization, preferably in coordination with one another and with advance input from democracy experts and representatives of civil society.
- Develop consistent, collaborative standards for determining election results: To moderate disinformation about election results effectively, platforms should develop a standard, public methodology, potentially in collaboration with one another and with relevant experts. The standard should appropriately weigh primary sources, including initial public vote counts from election officials as well as media outlets with specialized election expertise. Ideally, platforms would develop and publish these standards in advance to minimize public confusion during the postelection period.
- Fact-check election result claims: Platforms with fact-checking programs should fact-check all claims about election results, regardless of source, according to the standards outlined above. Moreover, because candidates are increasingly willing to decry the outcome of the electoral process even when the process is fair, it is critical that platforms provide additional context where possible: in clear and plain terms, in multiple languages, and in formats accessible to screen readers and other accessibility aids. Additionally, the fact-checking of results claims published on social media sites should include opinion pieces and, especially, content from politicians.
- Build viral circuit breakers: CAP previously proposed a viral circuit breaker for disinformation around the coronavirus crisis: Social media platforms would program a pause in the algorithmic amplification of fast-growing content about the coronavirus in order to facilitate effective review. Platforms would prioritize the content in internal human review and fact-checking, display a warning that the content has yet to be verified, and suspend amplification in recommendation algorithms, while allowing individual posting and message sharing to continue, until it is reviewed.
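The circuit-breaker logic described above could be sketched roughly as follows. This is an illustrative outline only; the class names, growth thresholds, and queue mechanics are assumptions for the sketch, not any platform's actual system.

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

# Hypothetical tuning parameters (illustrative, not real platform values).
GROWTH_THRESHOLD = 2.0   # shares must at least double between windows
MIN_SHARES = 1000        # ignore low-reach content

@dataclass
class Post:
    post_id: str
    shares_prev_window: int
    shares_this_window: int
    amplification_paused: bool = False
    label: Optional[str] = None

# Queue of fast-growing posts awaiting prioritized human review.
review_queue: "deque[Post]" = deque()

def check_circuit_breaker(post: Post) -> Post:
    """Trip the breaker on fast-growing content: pause algorithmic
    amplification, attach an 'unverified' warning, and queue the post
    for human review. Individual posting and sharing stay enabled."""
    if post.shares_this_window >= MIN_SHARES and post.shares_prev_window > 0:
        growth = post.shares_this_window / post.shares_prev_window
        if growth >= GROWTH_THRESHOLD:
            post.amplification_paused = True           # dropped from recommendations
            post.label = "Unverified: pending review"  # warning shown to viewers
            review_queue.append(post)                  # prioritized for fact-checking
    return post
```

Note that the breaker only suspends recommendation-driven amplification; it does not delete the post or block person-to-person sharing, matching the brief's emphasis on review rather than removal.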
- Take steps to prevent violence on and off the platform: Platforms should take swift and proactive action to remove accounts, groups, networks, or events associated with acts of violence generally, not just violent content posted to their platforms specifically. For example, if an individual or group makes a call to arms in a video or podcast, the social media accounts of those involved should be removed, even before such content is reposted to the platforms themselves.
- Build shutoff switches for product features that may contribute to violence: As a last resort reserved for a worst-case scenario, platforms should begin building “shutoff switches” that could temporarily pause product features, such as Facebook’s group recommendations, that could be used to organize violent action and/or attempts to baselessly contest the election.
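A shutoff switch of the kind described above resembles a centrally controlled feature flag: product surfaces check the flag before running, so a feature can be paused platform-wide without a code deployment. The sketch below is a hypothetical illustration of that pattern; the feature names and registry design are assumptions, not Facebook's actual infrastructure.

```python
from typing import Dict, List

class FeatureFlags:
    """Hypothetical central registry of pausable product features."""

    def __init__(self) -> None:
        self._enabled: Dict[str, bool] = {
            "group_recommendations": True,  # e.g., Facebook-style group suggestions
            "event_creation": True,
        }

    def shutoff(self, feature: str) -> None:
        """Temporarily disable a feature platform-wide (last-resort action)."""
        if feature in self._enabled:
            self._enabled[feature] = False

    def restore(self, feature: str) -> None:
        if feature in self._enabled:
            self._enabled[feature] = True

    def is_enabled(self, feature: str) -> bool:
        return self._enabled.get(feature, False)

def recommend_groups(user_id: str, flags: FeatureFlags) -> List[str]:
    # Every recommendation surface consults the flag before running.
    if not flags.is_enabled("group_recommendations"):
        return []  # feature paused: recommend nothing rather than amplify
    return ["placeholder-group"]  # stand-in for a real ranking model
```

The design choice worth noting is that the switch lives outside the feature code itself, so it can be flipped quickly in a crisis and reversed once the risk passes.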
Click here to read “Results Not Found: Addressing Social Media’s Threat to Democratic Legitimacy and Public Safety After Election Day” by Adam Conner and Erin Simpson.
For more information or to speak with an expert, please contact Allison Preiss at email@example.com.