Protecting Democracy Online in 2024 and Beyond

A series of high-profile global elections in 2024 will require social media platforms and generative AI developers to meet the moment amid an evolving and uncertain technology landscape.

Introduction and summary

In 2024, more than 2 billion voters across 50 countries—including in the United States, the European Union, and India—will head to the polls in a record-breaking number of elections around the world.1 Nearly a decade after social media was first weaponized to influence election outcomes, and with today’s technological advancements, such as generative artificial intelligence, poised to worsen existing problems or create new ones, it is more urgent than ever that technology platforms and governments do everything in their power to safeguard elections and uphold democratic values online. The reality of today’s technology and social media landscape paints a stark picture: Platforms are underprepared for the year ahead, facing novel challenges alongside known threats. Meanwhile, the prominent parent companies of many major social media platforms, known colloquially as Big Tech, have retreated from the election protection measures they put in place in 20202 and initiated layoffs that have affected trust and safety teams across the industry,3 leaving them less prepared for a year of back-to-back, high-profile elections than perhaps ever before.

The Center for American Progress has previously published reports identifying major threats to digital democracy and recommending steps that social media companies should take to mitigate them—most recently in 2022, with “Social Media and the 2022 Midterm Elections: Anticipating Online Threats to Democratic Legitimacy.”4 And earlier, in 2020, the report “Results Not Found: Addressing Social Media’s Threat to Democratic Legitimacy and Public Safety After Election Day”5 anticipated post-election delegitimization and real-world violence, suggesting product approaches to reduce harm.

This new report specifically anticipates risks to and from the major social media platforms in the 2024 elections, continuing CAP’s work to promote election integrity online and ensure free and fair elections globally. The report’s recommendations incorporate learnings from past elections and introduce new ideas to encourage technology platforms to safeguard democratic processes and mitigate election threats. In a world without standardized global social media regulation, ensuring elections are safe, accessible, and protected online and offline will require key actions to be taken ahead of any votes being cast—both in 2024 and beyond.

Glossary: Key election time periods

In this report, references to democratic process and elections encompass the following time periods:

  • Election cycle: The period of time beginning the day after certification of the previous general election for a given office and ending on the date of the next general election for that office.6
  • Voting periods: The period of time including early in-person and vote-by-mail periods leading up to Election Day and day-of voting at polling places.
  • Ballot processing: The period of time when election officials aggregate all ballots cast, such as mail-in ballots, Election Day ballots, and early in-person ballots; pre-process mail-in ballots; process provisional ballots; verify signatures; count valid ballots; and carefully double-check that every valid ballot cast has been counted.7
  • Majority of news and official sources: The period of time when elections, especially ones that are not close, are called by news organizations with dedicated election decision desks, and official election administration sources announce uncertified results prior to official results being certified. This can also be when close races are called by news organizations with dedicated election decision desks up to several days later but still before a canvass or certification is completed.
  • Certification of results: The period of time during which election officials ratify ballot counts and officially declare winners. In most U.S. elections, that is the end of the process. However, in the U.S. presidential election, after states certify their election results, they appoint electors in accordance with the results. The electors meet in mid-December to cast votes for president and vice president, which are then sent to Congress. Congress meets to tally the Electoral College votes and officially declare the winner.8
  • Transfer of power: The period of time between when winners are declared and are sworn into office. For the U.S. presidency, the process is guided by the Presidential Transition Act of 1963, which requires the orderly transfer of executive power in connection with the expiration of the term of office of a president and the inauguration of a new president.9

It is important to acknowledge that mitigating and addressing these threats to digital democracy is not solely the responsibility of large technology platforms and social media companies. Yet some political leaders have chosen to embrace election denial, promote violence, and shirk their own responsibility to democracy. While resources such as the Integrity Institute’s “Election integrity best practices”10 offer a general guide for how platforms can responsibly support elections online, this is a broad, society-wide issue that requires governmental and private sector cooperation to address.

To this end, the recommendations below, organized across five categories, seek to help companies guard against harms to their users, their reputations, and democratic processes generally, while ensuring destabilizing events—such as those of January 6, 2021, in the United States and January 8, 2023, in Brazil—are not repeated.

In recent months, artificial intelligence (AI) has emerged as a new vector for potential harms to democratic and fair elections. It has significant potential to exacerbate existing threats such as bots, harassment, and disinformation, and it makes it exponentially more difficult to accurately detect manipulated media, also known as deepfakes.11 Perhaps most worrisome is the impact this rapidly advancing technology may have on threats that have yet to come to light, including those related to 2024 elections; AI companies appear underprepared to anticipate how their tools may be used to influence elections, though some are now hiring staff to address these concerns.12

This report begins to address some AI-related concerns, but CAP will delve into this topic in greater depth and provide further policy recommendations in a future report ahead of 2024 elections. Additional recommendations on how the U.S. government can meet the AI moment can be found in CAP’s “Priorities for a National AI Strategy” framework.13

Platforms often prioritize resources based on region or market size, but given the high volume of elections in 2024,14 it is prudent to consider these elections chronologically and, to the extent possible, apply protections to all of them. Lessons learned should be shared from one election to the next, ultimately positioning platforms to be most prepared for the U.S. general election in November 2024.

The 3 Ps: Policy, process, and protocol

Perhaps the most important set of recommendations to help platforms uphold democratic processes involves shoring up the policies, processes, and protocols—known as the three Ps—that platforms rely on in the periods before, during, and after an election. These systems protect users against damaging harms, elevate accurate and relevant election information, and enable critical emergency mitigations should a need arise.

Policy

  • Develop and deploy emergency mitigations responsibly: Platforms should develop and articulate clear and defensible break-glass criteria for deploying election risk protection mitigations in emergency situations. These mitigations may include kill switches for entire product surfaces such as Instagram reels, YouTube video recommendations, or Facebook’s “Popular Near You” content; product policy exceptions; adjustments to algorithmic ranking; or other significant changes. Platforms should ensure these criteria extend beyond poll closure for as long as is necessary to protect against the given election harm. Prior to deployment, they should verify that the criteria set a precedent that can be defended externally, explained publicly, and repeated if necessary, and they should rehearse deployments to ensure timely and safe rollouts. Platforms should clearly document these decisions for later public release and analysis. A minimal sketch of what such a break-glass mechanism might look like follows the policy example below.
Policy example

The entire Facebook recommendations module, “Groups You Should Join,” should be removed for all users in an election period to prevent groups from rapidly growing to unconnected audiences and to reduce the potential spread of viral election-related misinformation content within them.
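
To make this concrete, below is a minimal Python sketch of what a pre-approved break-glass mitigation with documented criteria and an audit log might look like, using the “Groups You Should Join” kill switch from the policy example above. All names, fields, and thresholds are hypothetical; a real platform would build this into its own feature-flag and experimentation infrastructure.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BreakGlassMitigation:
    """A pre-approved emergency mitigation with documented, defensible criteria."""
    name: str          # e.g., "disable_groups_you_should_join" (hypothetical)
    surface: str       # the product surface the kill switch applies to
    criteria: str      # externally defensible conditions for deployment
    active: bool = False
    audit_log: list = field(default_factory=list)

    def _log(self, action: str, justification: str) -> None:
        # Every activation and rollback is recorded for later public release.
        self.audit_log.append({
            "action": action,
            "justification": justification,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def deploy(self, justification: str) -> None:
        self.active = True
        self._log("deploy", justification)

    def roll_back(self, justification: str) -> None:
        self.active = False
        self._log("roll_back", justification)

# The "Groups You Should Join" kill switch from the policy example above.
groups_recs = BreakGlassMitigation(
    name="disable_groups_you_should_join",
    surface="facebook_group_recommendations",
    criteria=("Deploy when fact-checked election misinformation in groups "
              "exceeds an agreed threshold during a voting period."),
)
groups_recs.deploy("Voting period began; misinformation threshold met.")
```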

  • Consistently apply civic policies: Platforms should consistently apply civic integrity policies covering content and/or behaviors to all content formats—such as those on YouTube15 and across Meta16 products—as a means of combating election disinformation.
Policy example

Civic integrity policies should be extended to live video, long- and short-form video surfaces, and any new product surfaces that are publicly available to any number of users, such as Instagram reels.

  • Review reports on civic content and entities against all policy areas: Platforms should ensure reports on content or entities originating from designated civic accounts—those identified by a platform as representing a candidate, politician, party, elected official, or government account—are routed to a specific review queue staffed by dedicated reviewers with specialized expertise and cultural context. This helps ensure immediate, real-time review of potential violations against all policies. Notably, this recommendation does not call for different rules to apply to certain accounts; political candidates should not be exempt from policies nor allowed to break rules. A minimal sketch of this routing logic follows the policy example below.
Policy example

Reports against a YouTube channel for a U.S. congressional candidate should be reviewed by a knowledgeable, trained reviewer in a dedicated queue and in a timely manner.
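
As a minimal sketch of the routing logic described above, the hypothetical Python snippet below sends reports on designated civic accounts to a dedicated priority queue; the account IDs and queue names are illustrative only.

```python
# Hypothetical civic graph: account IDs a platform has designated as candidates,
# politicians, parties, elected officials, or government accounts.
CIVIC_GRAPH = {"yt_channel_candidate_123", "fb_page_governor_456"}

def route_report(reported_account_id: str) -> str:
    """Return the review queue for a user report on an account's content.

    Reports on designated civic accounts go to a dedicated queue staffed by
    reviewers with specialized expertise and cultural context. The same rules
    apply to everyone; only the queue and reviewer expertise differ.
    """
    if reported_account_id in CIVIC_GRAPH:
        return "civic_priority_queue"
    return "general_queue"

assert route_report("yt_channel_candidate_123") == "civic_priority_queue"
assert route_report("random_user_789") == "general_queue"
```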

  • Act on violative manipulated media: Platforms should ensure manipulated media policies17 adequately cover AI-generated material such as deepfakes, include public figure carve-outs, and can withstand election-related misinformation, including a requirement to debunk content within a certain time frame of it being reported. This should include violative organic content and advertisements produced by generative AI (GAI), including large language models (LLMs). Platforms should have a range of available interventions—including removal, context, labels, friction, and more—to mitigate harms caused by this content and should consider proximity to an election when reviewing it.
Policy example

If a deepfake of a presidential candidate claiming to withdraw their campaign circulates on social media, platforms should urgently prioritize review of the media against manipulated media policies and, if found violating, remove all copies, with the exception of counterspeech,18 and prevent future uploads.

  • Prohibit prior and future election delegitimization: Platforms should uphold all past election denial policies that prohibit delegitimization of prior elections. Specifically, they should make content delegitimizing past or future elections ineligible for monetization and recommendation and should disallow paid advertisements or promoted posts that promote election denial.
Policy example

Policies addressing the 2020 “big lie” should not be rolled back further, as YouTube most recently did in June 2023.19

  • Provide sufficient notice for advertisement restriction periods: Ahead of plans to enact a restriction period for political ads and/or disable the creation of new political ads in an election period, platforms should provide advertisers and users sufficient notice of any restrictions.
Policy example

In November 2022, Meta implemented a restriction period for ads about social issues, elections, or politics in the United States for the week leading up to Election Day.20

Political ads

For the purposes of this report, the term “political ads” encompasses all social issue, election, and politics advertisements placed on social media platforms. Ads about social issues, elections or politics are:21

  • Made by, on behalf of, or about a candidate for public office, a political figure, a political party, or a political action committee, or that advocate for the outcome of an election to public office; or
  • About any election, referendum, or ballot initiative, including get-out-the-vote or election campaigns; or
  • About social issues in any place where the ad is being placed; or
  • Regulated as political advertising.

Social issues are sensitive topics that are heavily debated, may influence the outcome of an election, or result in/relate to existing or proposed legislation.22

Process

  • Define and standardize election periods and their risk levels: Platforms should establish clear and defensible high-, medium-, and low-risk periods that articulate the degree and severity of risk mitigation each platform is willing and able to deploy, including against trade-offs of user risk, reputational risk, and risk to democratic process. Companies should employ strategies for the pre-election period and Election Day, as well as the time periods following poll closures but before winners have been declared, after winners have been declared by news outlets, and following election certification and the transfer of power. Each of these time periods has unique risks and potential harms to the outcome of elections, and therefore, platforms should employ appropriate mitigations to address them.

Example risk categories that may warrant certain threat mitigations are outlined below.

  • Election cycle: Low (the specific risk to the integrity of the next election is low across the cycle as a whole but rises to medium or high during the periods below)
  • Voting periods: Medium to high
  • Ballot processing: High
  • Majority of news and official sources: High
  • Certification of results: Medium
  • Transfer of power: Medium
  • Anticipate democratic destabilization and apply learnings from 2021 and 2023: Platforms should build and fortify processes in advance to be ready should another attempted insurrection occur, such as those of January 6, 2021, in the United States and January 8, 2023, in Brazil. These processes should specifically incorporate plans on enhanced content moderation, rapid-response action to reduce misinformation virality, and harm reduction to users in election time periods.
  • Expand external partnerships with election authorities: Platforms should expand third-party fact-checker and trusted flagger programs to include election authorities in all high-risk 2024 election markets.23 This enables election authorities to quickly escalate content that may be illegal or otherwise pose informational integrity risks to users during sensitive election periods. While these programs have been under fire from some political actors and elected officials,24 they remain essential and require clear guidelines and transparency to maximize public trust while minimizing bad faith attacks.
  • Audit machine learning content classifiers for political bias: Platforms should audit automated content classifiers in high-risk election markets for political bias and publish a broad analysis of results, along with commitments to correct any biases, at least one month before Election Day in each market. They should also commit to publishing the analysis in a standardized format shared across platforms. A minimal sketch of such an audit follows this list.
  • Audit the civic graph: At least six months prior to an election, platforms should conduct a comprehensive audit of the civic graph—the list of accounts a company considers political in nature, such as public figures, candidates, elected officials, political institutions, and governmental bodies. They should ensure there are no duplicates, fake accounts, or incorrect inclusions in any high-risk market and should cooperate to share those authoritative sources with other platforms and/or the public. This is especially critical for maintaining public trust given the recent devaluation of X’s (formerly known as Twitter) verification program.25
  • Audit civic corpuses: Platforms should audit legacy civic corpuses—lists of words used to identify content to block, make ineligible for recommendation, remove from ads, and so on—that were used in past elections, for example as block lists and deny lists, across all high-risk market languages. They should ensure civic corpuses are up to date, adhere to current civic policies, and are accompanied by appropriate operational guidelines.
  • Review political ads and enforce ad policies in a timely manner: Given the time-sensitive nature of election periods, it is critical that platforms review political advertisements on a high-priority, rolling basis to prevent critical delays and, conversely, to keep inappropriate advertisements from being posted.
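
As referenced in the classifier audit recommendation above, below is a minimal Python sketch of one way such a bias audit might be scored: comparing false positive rates across groups in a human-adjudicated sample. The audit set, group labels, and field names are hypothetical.

```python
from collections import defaultdict

def false_positive_rates(samples):
    """Per-group false positive rates for an automated content classifier.

    `samples` is a hypothetical, human-adjudicated audit set. Each item records
    a group (e.g., the political leaning of the content), the classifier's
    prediction, and the adjudicated ground truth.
    """
    flagged = defaultdict(int)  # non-violating items wrongly flagged, per group
    total = defaultdict(int)    # non-violating items overall, per group
    for s in samples:
        if not s["truth_violating"]:
            total[s["group"]] += 1
            if s["predicted_violating"]:
                flagged[s["group"]] += 1
    return {g: flagged[g] / total[g] for g in total if total[g]}

# Toy audit set; a real audit would use thousands of adjudicated samples per
# high-risk market and language.
audit_set = [
    {"group": "left-leaning", "predicted_violating": True, "truth_violating": False},
    {"group": "left-leaning", "predicted_violating": False, "truth_violating": False},
    {"group": "right-leaning", "predicted_violating": False, "truth_violating": False},
    {"group": "right-leaning", "predicted_violating": False, "truth_violating": False},
]
print(false_positive_rates(audit_set))  # {'left-leaning': 0.5, 'right-leaning': 0.0}
# A persistent gap between groups is evidence of political bias that should be
# corrected and publicly disclosed at least one month before Election Day.
```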

Protocol

  • Fact-check electoral content: Platforms should prioritize the fact-checking of electoral content, including political advertisements and content originating from public officials, utilizing a mixture of tools such as third-party fact-checking programs and community notes.26
  • Expand and enhance accurate information workstreams: Platforms should bolster accurate information workstreams on all election-related content, including election information centers with polling place data. Moreover, they should actively update and maintain all accurate information workstreams within the Civic Information API (maintained by Google).27
  • Anticipate brigading or attempts to take over content surfaces: Platforms should establish protocols for coordinated brigading and other planned informational attacks on hashtags and other content aggregation surfaces. They should also ensure they have the appropriate technical tooling and escalation pathways to respond in the event of an incident of this nature.
  • Reinforce the Oversight Board’s mission and expand capacity: The Oversight Board, which was created by Meta to address some of the content on some of its platforms,28 should aggressively press to expand its mandate and jurisdiction over Meta properties and products. It should prioritize election-related cases and allocate additional resources to reviewing them in a timely manner so that lessons and policy updates can be applied to subsequent 2024 elections. The board should also demand explicit clarity from Meta that it has oversight over Threads and should be granted the ability to examine advertising policies and advertisements on each of Meta’s platforms.
A note on Meta’s newest platform: Threads

With more than 100 million sign-ups in less than a week, Meta’s newest platform, Threads, has been touted as the fastest-growing social media platform in history.29 Statistics highlighting its record-breaking expansion have popped up all over the internet, even juxtaposing its growth with that of other rapidly developing consumer technologies, such as GAI.

Even as this rapid expansion has cooled, Meta still has a profound responsibility to uphold legacy trust and safety values, protect users from harm, and ensure Threads is a respectful and positive space online. As Meta grapples with these responsibilities, it should also consider the role Threads will play in election-related discourse next year.30 While it may not currently be the platform to discuss social issues, politics, and democracy, it may evolve between now and the end of next year based on shifts in user behaviors, public discourse, and the competitive landscape.

Below are Threads-specific questions for Meta to consider, noting it has already started to address these publicly, as it builds additional features and expands its availability to more countries globally.

Policy application:

  • What existing Instagram content and platform policies will apply to Threads content?
  • How will certain policies be changed or enhanced to cover text-specific violations?
  • Are there opportunities to utilize legacy policies from Facebook Blue, which have generally had to cover text-based posts in ways that Instagram has not, in new ways for Threads?
  • Which policies are explicitly not applicable to Threads use cases?

The civic graph:

  • Have Threads accounts been mapped to the civic graph?
  • Will these accounts be similarly cross-checked or receive additional review when reported?

Emergency election mitigations:

  • Does Meta need to build bespoke election mitigations for Threads?
  • What will and will not carry over from Instagram?
  • Have these been tested in advance of election periods?

Advertisements, including political ads:

  • While Threads does not have ads yet, what will the future state of ads, including political and social issue ads, look like?
  • Will Facebook, Instagram, or some other policies apply?

Third-party fact-checking and trusted flagger programs:

  • Do these exist for Threads today? Is the infrastructure to support fact checks in place?
  • Does Threads support information overlays and friction to warn users?

Election hub:

  • Will Threads have a dedicated elections hub?
  • Will references to authoritative information redirect to a different Meta elections hub?

Oversight Board:

  • Will the Oversight Board hear Threads cases?
  • If not, should Meta expand the board’s mandate to include Threads?

Transparency:

  • Will Threads content actions be incorporated into Meta transparency reports?
  • Will it have a dedicated report?

Staffing:

  • Are there new teams being stood up to review Threads content?
  • Are existing operations teams and reviewers also responsible for Threads?
  • How does policy training differ between the two?

Classifier performance gaps:

  • Are legacy machine learning models trained on Facebook and Instagram content performing similarly on Threads content?
  • If not, are there plans to improve precision and recall and specifically train on Threads-style short-form text?
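
To illustrate how such a performance-gap check might be run, here is a minimal Python sketch comparing a classifier’s precision and recall on legacy content versus Threads-style short-form text. All data here are toy values for illustration.

```python
def precision_recall(preds, labels):
    """Precision and recall for a binary violation classifier."""
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(l and not p for p, l in zip(preds, labels))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy labeled sets; real evaluations would use large, human-adjudicated samples
# per surface and language.
labels_instagram = [True, False, True, False]
preds_instagram  = [True, False, True, True]
labels_threads   = [True, False, True, False]
preds_threads    = [False, False, True, True]

print(precision_recall(preds_instagram, labels_instagram))  # (0.666..., 1.0)
print(precision_recall(preds_threads, labels_threads))      # (0.5, 0.5)
# A drop like this on Threads-style text would signal that a legacy model needs
# retraining on short-form text before election periods.
```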

Transparency

CAP recognizes that significant transparency efforts are already underway and recommends that platforms continue to build on and expand them, and that they ensure appropriate dedicated staffing heading into 2024, particularly given the high volume of elections in a single calendar year. Online platforms should invest heavily in updated, expanded, and localized election-related transparency efforts to maintain user trust and promote accurate information, including by taking the following steps:

  • Enhance existing transparency reporting: Platforms should increase the frequency of transparency report31 publication before, during, and after key elections, with additional data points on election risks and content, including dedicated reports on elections-related mitigations and content takedowns. These reports might include political advertisements by volume and engagement, high-risk civic escalations, and more.
  • Publish election post-mortem analysis: Platforms should publish first-party, and commission third-party, risk analysis post-mortems of high-profile election escalations that surface unmitigated threats to democracy and voters, alongside commitments to plug these gaps for future elections. These should be published following each high-risk election in 2024, and lessons should be directly applied to subsequent elections.
  • Offer greater insight into civic ranking adjustments: Platforms should be transparent about any active or planned use cases for turning political or social issue content classification machine learning models on or off on ranked surfaces, including feeds, recommendations, and immersive video browsers. Such transparency is especially important during emergency and high-risk scenarios that threaten free and fair democratic processes.
  • Highlight AI use cases: Platforms should publish at least two AI reports detailing how AI is being used in content moderation and election risk mitigation efforts or AI content issues that were encountered. One of these reports should be published before elections, and the other should be published in late 2024 or early 2025 upon completion of the high-risk election cycle.
  • Standardize transparency reports: Platforms should standardize report formatting across companies, countries, languages, and more, while offering a clear user interface that lets users query the data and download the raw data for analysis. A sketch of one possible standardized record format follows this list.
  • Continue investment in political ads transparency: Platforms should offer transparency into political advertisements by continuing investment in ad libraries and building data accessibility features for users with clear interfaces and data download functionality. Platforms should universally uphold the requirements from the state of Washington’s campaign finance transparency law,32 including by maintaining campaign ad records and making them available to the public. For instance, information related to the cost of the ad, the sponsor of the ad, and targeting and reach of the ad should be made available.
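
As referenced in the standardization recommendation above, below is a minimal Python sketch of what one standardized, machine-readable transparency record might look like across companies, countries, and languages. The schema and all values are hypothetical.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ElectionTransparencyRecord:
    """One entry in a standardized, cross-platform transparency report.

    The field names and values are hypothetical; the point is a shared,
    machine-readable schema that researchers can query and download.
    """
    platform: str
    country: str
    language: str
    reporting_period: str            # e.g., "2024-Q1"
    political_ads_count: int
    political_ads_engagement: int
    election_content_removals: int
    high_risk_civic_escalations: int

record = ElectionTransparencyRecord(
    platform="ExamplePlatform",
    country="US",
    language="en",
    reporting_period="2024-Q1",
    political_ads_count=12500,
    political_ads_engagement=4800000,
    election_content_removals=3100,
    high_risk_civic_escalations=42,
)
print(json.dumps(asdict(record), indent=2))  # raw data a researcher could download
```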

Staffing and personnel

Given the significant and widespread layoffs throughout the technology sector in recent years,33 it is more important than ever that platforms allocate appropriate staffing toward election-related issues starting in 2023 and continuing through the high-risk elections of 2024. Companies should use this high-profile election season to demonstrate that previous commitments to democratic safety were not merely a byproduct of a thriving global economy and excess hiring capacity. Specifically, they should take the following steps:

  • Increase the volume of election support staff: Platforms should fortify staffing on election integrity teams, especially with language support capacity for global, non-English-language markets with 2024 elections.
  • Create or maintain horizontal election teams: Given the high volume of elections in 2024, platforms should establish a dedicated unit to oversee elections globally, ensuring language-based coverage and a clear escalation pathway to company leadership should the need arise. These teams should be charged with carrying knowledge, lessons learned, and road maps for improvement from one election to the next.
  • Prevent additional restructuring: Platforms should refrain from further downsizing elections or election-adjacent teams—such as those focusing on misinformation, health, and integrity—in companywide layoffs or other organizational restructuring events.
  • Prioritize language support: Platforms should ensure dedicated and sufficient language support for content moderation in the languages of high-risk election markets, as defined by each platform.
  • Be transparent about AI use in content moderation: Platforms should disclose to content moderators internally and via AI transparency reporting externally how AI is used in content moderation, specifically on how it complements human review of content and advertisements.

External product changes

A perennial but nonetheless crucial component of election risk mitigation work involves platforms shipping user-facing product changes and enhancements directly within the user interface that clearly and simply educate users, promote civic engagement, and reduce exposure to risky or harmful content. While seemingly easy to implement, the following steps have significant potential for positive impact while enhancing stakeholder and user trust:

  • Enhance reshare friction: Platforms should introduce or maintain resharing friction to reduce the distribution of content containing election-related, fact-checked misinformation.
  • Provide additional context on election content: Platforms should employ context-adding information labels on election-related content, with links to their respective election hubs—centralized, up-to-date election information centers—accurate voting information, and more.
An informational label adds context about mail-in ballots below a YouTube video.34

  • Uprank authoritative information: To dampen the proliferation of misinformation and ensure users are receiving accurate election-related information, platforms should uprank accurate, authoritative, and localized election and voting information from trusted sources in elections-related search results.
  • Further develop civic engagement products: Platforms should continue to create and promote stickers and other ways for users to share that they voted as part of an effort to encourage civic engagement on platforms.
  • Extend warning screens to embedded surfaces: Platforms should ensure external features such as warning screens, additional context, or fact-check overlays are present on embedded applications—for instance, YouTube videos displayed on another website—in addition to the native site.

Researcher access

Providing researchers with access to platform data is a crucial component to ensuring platforms are held accountable throughout an election cycle. These unbiased third parties can further trust and safety initiatives by scrutinizing platforms, offering recommendations and solutions, and making their findings publicly available. Specifically, platforms should take the following steps:

  • Deepen research partnerships: Platforms should partner with prominent research organizations and white-hat hackers35 to harness company data to proactively surface election risks and commit resources to mitigate these risks in a timely manner.
  • Provide access in real time: Platforms should provide real-time access of social media data to external researchers and watchdogs.
  • Maintain access to analytic products such as CrowdTangle: Platforms should prioritize support and continuity of open-access data products such as CrowdTangle.36
  • Make APIs free and open: Platforms should ensure application programming interfaces (APIs) are free and accessible to researchers.37
  • Prioritize Digital Services Act compliance: Platforms should comply with researcher access requirements as stipulated in the EU Digital Services Act.
  • Conduct elections-specific research: Platforms should start or expand efforts to conduct elections-specific research into platform impacts on election outcomes, polarization, and user behaviors, such as Meta’s research to understand Facebook’s and Instagram’s role in the 2020 U.S. election.38 Platforms should engage in this type of research more frequently and sooner after the conclusion of an election.

Recommendations for AI companies around elections

The development and deployment of AI, including GAI and LLMs, is moving swiftly and may outpace any other technological advancements to date. Companies building this technology, which vastly differs from legacy social media, must also do their part to mitigate elections-related threats and safeguard democracy online.

Below are specific recommendations for these platforms:

  • Create general usage and elections-specific policies: These policies, such as OpenAI’s usage policies,39 will be critical to upholding trust and safety values and to ensuring users are not misled or harmed and do not abuse systems to propagate harmful or dangerous behaviors. Election policies should simultaneously promote accurate information and prohibit election misinformation.
  • Be transparent about enforcement of usage policies: In addition to publishing these policies, platforms must enumerate how they will enforce them and by what mechanisms. Companies should be transparent about actions taken and mitigations deployed in pursuit of these policies via a quarterly transparency report.
  • Clearly outline first- and third-party enforcement: Usage policies should articulate differences between enforcement in first- and third-party use cases of GAI models, and companies should be transparent about data-sharing between the two—for instance, whether OpenAI sees content from a third-party company utilizing ChatGPT.
  • Democratize LLMs for content moderation: GAI platforms should make their content moderation models40 available for free or, at minimum, at a reduced cost, especially when used to moderate content generated by their own AI models. A minimal sketch of LLM-based moderation follows.
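
To illustrate the policy-driven LLM moderation approach described in the work cited above,40 here is a minimal Python sketch assuming the OpenAI Python SDK and an API key in the environment. The policy text, model choice, and labels are illustrative, not any platform’s actual system.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical election policy text; real deployments would use a platform's
# full, versioned policy documents.
ELECTION_POLICY = (
    "Label the user content VIOLATING if it misrepresents voting dates, "
    "methods, or eligibility, or delegitimizes a certified election result; "
    "otherwise label it NON-VIOLATING. Reply with one label and a one-line "
    "rationale."
)

def moderate(content: str) -> str:
    """Ask an LLM to judge content against a written election policy."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; platforms would pick or tune their own model
        messages=[
            {"role": "system", "content": ELECTION_POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content

print(moderate("New this year: you can vote by text message until midnight."))
```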

Conclusion

To ensure billions of people across the globe can engage in the democratic process without interference or undue risk in 2024, social media companies and GAI developers must do their part and act responsibly to uphold democratic values and protect users and systems from abuse and harm. In combination with effective local laws, regulations, election infrastructure protections, and increased digital literacy, these recommendations underscore the importance of the role of the private sector in maintaining free and fair elections as so much of our democracy moves online.

The rise of political actors who undermine and refuse to accept the outcomes of elections has contributed to the degradation of trust in elections across the world. While the private sector is not solely responsible for protecting democracy, the concentration of users on a few social media platforms gives the companies that run those platforms vast influence and great responsibility for the protection of digital democracy. Similarly, the introduction of new GAI tools to hundreds of millions of users with few guardrails for use or abuse means those new companies and new technologies must quickly put in place ways to protect elections. The recommendations in this report are a starting point for that conversation, and action must begin immediately as the elections of 2024 are almost here.

Acknowledgments

The author would like to thank Adam Conner, Sydney Bryant, Rebecca Mears, Tom Moore, Greta Bedekovics, Ben Olinsky, Katie Harbath, Jesse Lehrich, Belle Torek, Chester Hawkins, Steve Bonitatibus, and Shanée Simhoni for their contributions to this report.

Appendix: Checklist for platforms

The 3 Ps: Policy, process, and protocol

Policy

  • Develop and deploy emergency mitigations responsibly
  • Consistently apply civic policies
  • Review reports on civic content and entities against all policy areas
  • Act on violative manipulated media
  • Prohibit prior and future election delegitimization
  • Provide sufficient notice for advertisement restriction periods

Process

  • Define and standardize election periods and their risk levels
  • Anticipate democratic destabilization and apply learnings from 2021 and 2023
  • Expand external partnerships with election authorities
  • Audit machine learning content classifiers for political bias
  • Audit the civic graph
  • Audit civic corpuses
  • Review political ads and enforce ad policies in a timely manner

Protocol

  • Fact-check electoral content
  • Expand and enhance accurate information workstreams
  • Anticipate brigading or attempts to take over content surfaces
  • Reinforce the Oversight Board’s mission and expand capacity

Transparency

  • Enhance existing transparency reporting
  • Publish election post-mortem analysis
  • Offer greater insight into civic ranking adjustments
  • Highlight AI use cases
  • Standardize transparency reports
  • Continue investment in political ads transparency

Staffing and personnel

  • Increase the volume of election support staff
  • Create or maintain horizontal election teams
  • Prevent additional restructuring
  • Prioritize language support
  • Be transparent about AI use in content moderation

External product changes

  • Enhance reshare friction
  • Provide additional context on election content
  • Uprank authoritative information
  • Further develop civic engagement products
  • Extend warning screens to embedded surfaces

Researcher access

  • Deepen research partnerships
  • Provide access in real time
  • Maintain access to analytic products such as CrowdTangle
  • Make APIs free and open
  • Prioritize Digital Services Act compliance
  • Conduct elections-specific research

GAI development

  • Create general usage and elections-specific policies
  • Be transparent about enforcement of usage policies
  • Clearly outline first- and third-party enforcement
  • Democratize LLMs for content moderation

Endnotes

  1. Katie Harbath and Ana Khizanishvili, “Election Cycle Tracker,” Anchor Change, available at https://www.anchorchange.com/election-cycle-calendar (last accessed September 2023); International Foundation for Electoral Systems, “ElectionGuide,” available at https://www.electionguide.org/ (last accessed September 2023).
  2. YouTube, “An update on our approach to US election misinformation,” June 2, 2023, available at https://blog.youtube/inside-youtube/us-election-misinformation-update-2023/.
  3. Alyssa Stringer, “A comprehensive list of 2023 tech layoffs,” TechCrunch, available at https://techcrunch.com/2023/08/24/tech-industry-layoffs-2023/ (last accessed September 2023).
  4. Erin Simpson, Adam Conner, and Ashleigh Maciolek, “Social Media and the 2022 Midterm Elections: Anticipating Online Threats to Democratic Legitimacy” (Washington: Center for American Progress, 2022), available at https://www.americanprogress.org/article/social-media-and-the-2022-midterm-elections-anticipating-online-threats-to-democratic-legitimacy/.
  5. Erin Simpson and Adam Conner, “Results Not Found: Addressing Social Media’s Threat to Democratic Legitimacy and Public Safety After Election Day” (Washington: Center for American Progress, 2020), available at https://www.americanprogress.org/article/results-not-found-addressing-social-medias-threat-democratic-legitimacy-public-safety-election-day/.
  6. Federal Election Commission, “How to Report: Election cycle and aggregation,” available at https://www.fec.gov/help-candidates-and-committees/filing-reports/election-cycle-aggregation/ (last accessed August 2023).
  7. U.S. Election Assistance Commission, “Chapter 13: Canvassing and Certifying an Election,” in “Election Management Guidelines” (Washington: 2010), available at https://www.eac.gov/sites/default/files/eac_assets/1/6/EMG_chapt_13_august_26_2010.pdf.
  8. Public Law 80-771 of 1948, 80th Cong., 2nd sess. (June 25, 1948), available at https://advance.lexis.com/r/documentprovider/-ssyk/attachment/data?attachmentid=V1,215,34443,62stat672,1&attachmenttype=PDF&attachmentname=View%20this%20document%20in%20PDF%20format%20(805%20KB)&origination=&sequencenumber=&ishotdoc=false&docTitle=U.%20S.%20Code%2C%20title%203.%2C%2062%20Stat.%20672&pdmfid=1000516&#page=.
  9. Presidential Transition Act of 1963, Public Law 88-277, 88th Cong., 1st sess. (March 7, 1964), available at https://www.govinfo.gov/content/pkg/COMPS-1612/pdf/COMPS-1612.pdf.
  10. Integrity Institute, “Election integrity best practices” (Beltsville, MD: 2023), available at https://static1.squarespace.com/static/614cbb3258c5c87026497577/t/646e288938c652250ac8ce1f/1684940939541/%5BFinal%5D+Elections+Best+Practices+Guide+Part+1_2023-05-24.pdf.
  11. Thor Benson, “Brace Yourself for the 2024 Deepfake Election,” Wired, April 27, 2023, available at https://www.wired.com/story/chatgpt-generative-ai-deepfake-2024-us-presidential-election/.
  12. OpenAI, “Elections Lead, Global Affairs” available at https://www.linkedin.com/jobs/view/elections-lead-global-affairs-at-openai-3675320389/ (last accessed August 2023).
  13. Adam Conner, “The Needed Executive Actions to Address the Challenges of Artificial Intelligence” (Washington: Center for American Progress, 2023), available at https://www.americanprogress.org/article/the-needed-executive-actions-to-address-the-challenges-of-artificial-intelligence/; Adam Conner, “White House Must Take More Action To Address AI Concerns,” Center for American Progress, May 4, 2023, available at https://www.americanprogress.org/article/white-house-must-take-more-action-to-address-ai-concerns/; Megan Shahi and Adam Conner, “Priorities for a National AI Strategy” (Washington: Center for American Progress, 2023), available at https://www.americanprogress.org/article/priorities-for-a-national-ai-strategy/.  
  14. Katie Harbath, “Taking the Long View,” Anchor Change, August 17, 2023, available at https://anchorchange.substack.com/p/taking-the-long-view-8172023.
  15. Google, “Elections misinformation policies,” June 2, 2023, available at https://support.google.com/youtube/answer/10835034?hl=en.
  16. Meta, “Our approach to elections,” October 4, 2022, available at https://transparency.fb.com/features/approach-to-elections/.
  17. X, “Synthetic and manipulated media policy,” April 2023, available at https://help.twitter.com/en/rules-and-policies/manipulated-media; Meta, “Manipulated Media,” available at https://transparency.fb.com/policies/community-standards/manipulated-media/ (last accessed August 2023); Google, “Manipulated media,” available at https://support.google.com/publisherpolicies/answer/11185657?hl=en (last accessed August 2023).
  18. Dangerous Speech Project, “Counterspeech,” available at https://dangerousspeech.org/counterspeech/#:~:text=Counterspeech%20is%20any%20direct%20response,favorably%20influence%20discourse%20through%20counterspeech (last accessed September 2023).
  19. YouTube, “An update on our approach to US election misinformation.”
  20. Meta, “Preparing for the 2022 Restriction Period for Ads About Social Issues, Elections or Politics in the United States,” November 8, 2022, available at https://www.facebook.com/business/m/one-sheeters/us-ad-restriction-period-guidance-2022.
  21. Meta, “About ads about social issues, elections or politics,” available at https://www.facebook.com/business/help/167836590566506?id=288762101909005 (last accessed August 2023).
  22. Meta, “About social issues,” available at https://www.facebook.com/business/help/214754279118974?id=288762101909005 (last accessed August 2023).
  23. Tremau, “New Role Of Trusted Flaggers In The EU,” May 25, 2022, available at https://tremau.com/digital-services-act-trusted-flagger-organisations.
  24. Issac Stanley-Becker and Elizabeth Dwoskin, “Trump allies, largely unconstrained by Facebook’s rules against repeated falsehoods, cement pre-election dominance,” The Washington Post, November 1, 2020, available at https://www.washingtonpost.com/technology/2020/11/01/facebook-election-misinformation/.
  25. Joshua Stein, “The Life and Death of the Blue Check Mark,” Slate, April 29, 2023, available at https://slate.com/technology/2023/04/blue-check-funeral.html.
  26. X, “About Community Notes on Twitter,” available at https://help.twitter.com/en/using-twitter/community-notes (last accessed August 2023).
  27. Google, “What is the Civic Information API?”, available at https://developers.google.com/civic-information (last accessed August 2023).
  28. Oversight Board, “Ensuring respect for free expression, through independent judgment,” available at https://www.oversightboard.com/ (last accessed September 2023).
  29. Maria Diaz, “Threads becomes fastest-growing app ever, with 100 million users in under a week,” ZDNET, July 10, 2023, available at https://www.zdnet.com/article/threads-hit-100-million-users-in-under-a-week-breaking-chatgpts-record/.
  30. Center for Technology and Society, “Six Things ADL is Watching Following Meta’s Threads Launch,” Anti-Defamation League, July 13, 2023, available at https://www.adl.org/resources/blog/six-things-adl-watching-following-metas-threads-launch.
  31. Meta, “Transparency reports,” available at https://transparency.fb.com/reports/ (last accessed August 2023); Google, “Transparency Report,” available at https://transparencyreport.google.com/?hl=en (last accessed August 2023); X, “Transparency,” available at https://transparency.twitter.com/ (last accessed August 2023).
  32. Washington State Office of the Attorney General, “AG Ferguson seeks maximum $24.6M penalty against Facebook parent Meta,” Press release, October 13, 2022, available at https://www.atg.wa.gov/news/news-releases/ag-ferguson-seeks-maximum-246m-penalty-against-facebook-parent-meta.
  33. Stringer, “A comprehensive list of 2023 tech layoffs.”
  34. CBN News, “2020 Election: Evidence of Voter Fraud: SET IT STRAIGHT,” YouTube, November 12, 2020, available at https://youtu.be/nEsMOKNqzk8?si=pHJvqZ1UXO543eo2.
  35. Andrew Froehlich and Madelyn Bacon, “white hat hacker,” TechTarget, December 2021, available at https://www.techtarget.com/searchsecurity/definition/white-hat.
  36. Richard Lawler, “Meta reportedly plans to shut down CrowdTangle, its tool that tracks popular social media posts,” The Verge, June 23, 2022, available at https://www.theverge.com/2022/6/23/23180357/meta-crowdtangle-shut-down-facebook-misinformation-viral-news-tracker.
  37. Jon Porter, “Twitter announces new API pricing, posing a challenge for small developers,” The Verge, March 30, 2023, available at https://www.theverge.com/2023/3/30/23662832/twitter-api-tiers-free-bot-novelty-accounts-basic-enterprice-monthly-price.
  38. Meta, “Research partnership to understand Facebook and Instagram’s role in the U.S. 2020 election,” available at https://research.facebook.com/2020-election-research/ (last accessed August 2023).
  39. OpenAI, “Usage policies,” March 23, 2023, available at https://openai.com/policies/usage-policies.
  40. Lilian Weng, Vik Goel, and Andrea Vallone, “Using GPT-4 for content moderation,” OpenAI, August 15, 2023, available at https://openai.com/blog/using-gpt-4-for-content-moderation.

Author

Megan Shahi

Director, Technology Policy
