Introduction and summary
In 2024, more than 2 billion voters across 50 countries—including in the United States, the European Union, and India—will head to the polls in a record-breaking number of elections around the world.1 Nearly a decade after social media was weaponized to influence election outcomes, and with the technological advancements of today, such as generative artificial intelligence, poised to worsen existing problems or cause new ones, it is more important than ever that technology platforms and governments do everything in their power to safeguard elections and uphold democratic values online. The reality of today’s technology and social media landscape paints a stark picture of platforms underprepared for the year ahead, facing novel, unforeseen challenges alongside known threats. Meanwhile, the prominent parent companies of many major social media platforms, known colloquially as Big Tech, have retreated from the election protection measures put in place in 20202 and initiated layoffs that have affected trust and safety teams across the industry,3 leaving them less prepared for a year of back-to-back, high-profile elections than perhaps ever before.
The Center for American Progress has previously published reports identifying major threats to digital democracy and recommending steps that social media companies should take to mitigate them—most recently in 2022, with “Social Media and the 2022 Midterm Elections: Anticipating Online Threats to Democratic Legitimacy.”4 And earlier, in 2020, the report “Results Not Found: Addressing Social Media’s Threat to Democratic Legitimacy and Public Safety After Election Day”5 anticipated post-election delegitimization and real-world violence, suggesting product approaches to reduce harm.
This new report specifically anticipates risks to and from the major social media platforms in the 2024 elections, continuing CAP’s work to promote election integrity online and ensure free and fair elections globally. The report’s recommendations incorporate learnings from past elections and introduce new ideas to encourage technology platforms to safeguard democratic processes and mitigate election threats. In a world without standardized global social media regulation, ensuring elections are safe, accessible, and protected online and offline will require key actions to be taken ahead of any votes being cast—both in 2024 and beyond.
Glossary: Key election time periods
In this report, references to democratic process and elections encompass the following time periods:
- Election cycle: The period of time beginning the day after certification of the previous general election for a given office and ending on the date of the next general election for that office.6
- Voting periods: The period of time including early in-person and vote-by-mail periods leading up to Election Day and day-of voting at polling places.
- Ballot processing: The period of time when election officials aggregate all ballots cast, such as mail-in ballots, Election Day ballots, and early in-person ballots; pre-process mail-in ballots; process provisional ballots; verify signatures; count valid ballots; and carefully double-check that every valid ballot cast has been counted.7
- Majority of news and official sources: The period of time when elections, especially ones that are not close, are called by news organizations with dedicated election decision desks, and official election administration sources announce uncertified results prior to official results being certified. This can also be when close races are called by news organizations with dedicated election decision desks up to several days later but still before a canvass or certification is completed.
- Certification of results: The period of time during which election officials ratify ballot counts and officially declare winners. In most U.S. elections, that is the end of the process. However, in the U.S. presidential election, after states certify their election results, they appoint electors in accordance with the results. The electors meet in mid-December to cast votes for president and vice president, which are then sent to Congress. Congress meets to tally the Electoral College votes and officially declare the winner.8
- Transfer of power: The period of time between when winners are declared and are sworn into office. For the U.S. presidency, the process is guided by the Presidential Transition Act of 1963, which requires the orderly transfer of executive power in connection with the expiration of the term of office of a president and the inauguration of a new president.9
It is important to acknowledge that mitigating and addressing these threats to digital democracy is not solely the responsibility of large technology platforms and social media companies. Yet some political leaders have chosen to embrace election denial, promote violence, and remain uncooperative in upholding their own responsibility to democracy. While resources such as the Integrity Institute’s “Election integrity best practices”10 offer a general guide for how platforms can responsibly support elections online, this is a broad, society-wide issue that requires governmental and private sector cooperation to address.
To this end, the recommendations below, grouped into five categories, seek to help companies prepare for harms to their users, their reputations, and democratic processes more broadly, while ensuring destabilizing events—such as those of January 6, 2021, in the United States and January 8, 2023, in Brazil—are not repeated.
In recent months, artificial intelligence (AI) has emerged as a new vector for potential harms to democratic and fair elections. It has significant potential to exacerbate existing threats such as bots, harassment, and disinformation, and it makes it exponentially more difficult to accurately detect manipulated media, also known as deepfake content.11 Perhaps most worrisome are the impact this rapidly advancing technology may have on threats that have yet to come to light, including those related to 2024 elections, and AI companies’ apparent lack of preparation in anticipating how their tools may be used to influence elections—though they are now hiring staff to address these concerns.12
This report begins to address some AI-related concerns, but CAP will delve into this topic in greater depth and provide further policy recommendations in a future report ahead of 2024 elections. Additional recommendations on how the U.S. government can meet the AI moment can be found in CAP’s “Priorities for a National AI Strategy” framework.13
Platforms are often focused on prioritizing resources based on region or size of market, but given the high volume of elections in 2024,14 it is prudent to consider these elections chronologically and apply protections for all of them, to the extent possible. Lessons learned should be shared from one election to the next, ultimately positioning platforms to be most prepared for the U.S. general election in November 2024.
The 3 Ps: Policy, process, and protocol
Perhaps the most important set of recommendations to help platforms uphold democratic processes involves shoring up the policies, processes, and protocols—known as the three Ps—that platforms rely on in the periods before, during, and after an election. These systems protect users against harm, elevate accurate and relevant election information, and enable critical emergency mitigations should a need arise.
Policy
- Develop and deploy emergency mitigations responsibly: Platforms should develop and articulate clear and defensible break-glass criteria for deploying election risk protection mitigations in emergency situations. These mitigations may include kill switches for entire product surfaces such as Instagram reels, YouTube video recommendations, or Facebook’s “Popular Near You” content; product policy exceptions; adjustments to algorithmic ranking; or other significant changes. Platforms should ensure these criteria extend beyond poll closure for as long as is necessary to protect against the given election harm. Prior to deployment, they should verify that the criteria set a precedent that can be defended externally, explained publicly, and repeated if necessary, and they should practice deployments to ensure timely and safe rollouts. Platforms should clearly document these decisions for later public release and analysis.
Policy example
The entire Facebook recommendations module, “Groups You Should Join,” should be removed for all users in an election period to prevent groups from rapidly growing to unconnected audiences and to reduce the potential spread of viral election-related misinformation content within them.
- Consistently apply civic policies: Platforms should consistently apply civic integrity policies covering content and/or behaviors to all content formats—such as those on YouTube15 and across Meta16 products—as a means of combating election disinformation.
Policy example
Civic integrity policies should be extended to live video, long- and short-form video surfaces, and any new product surfaces that are publicly available to any number of users, such as Instagram reels.
- Review reports on civic content and entities against all policy areas: Platforms should ensure reports on content or entities originating from designated civic accounts—those identified by a platform as representing a candidate, politician, party, elected official, or government account—are routed to a specific review queue staffed by dedicated reviewers with specialized expertise and cultural context. This can help ensure immediate, real-time review of any potential violations against all policies. Notably, this recommendation does not call for different rules to apply to certain accounts; political candidates should not be exempt from policies nor allowed to break rules.
Policy example
Reports against a YouTube channel for a U.S. congressional candidate should be reviewed by a knowledgeable, trained reviewer in a dedicated queue and in a timely manner.
- Act on violative manipulated media: Platforms should ensure manipulated media policies17 adequately cover material generated by AI, such as deepfakes; include public figure carve-outs; and can withstand election-related misinformation, with a requirement to debunk content within a certain time frame of its being reported. This should include violative organic content and advertisements produced by generative AI (GAI), including large language models (LLMs). Platforms should have a range of available interventions—including removal, context, labels, friction, and more—to mitigate harms caused by this content and should consider proximity to an election in reviewing the content.
Policy example
If a deepfake of a presidential candidate claiming to withdraw their campaign circulates on social media, platforms should urgently prioritize review of the media against manipulated media policies and, if it is found in violation, remove all copies, with the exception of counterspeech,18 and prevent future uploads.
- Prohibit prior and future election delegitimization: Platforms should uphold all past election denial policies that prohibit delegitimization of prior elections. Specifically, they should prohibit monetization and recommendation eligibility of past or future election delegitimization content and disallow paid advertisements or promoted posts that promote election denial.
Policy example
2020 “big lie” policies should not be reversed further, as was most recently done by YouTube in June 2023.19
- Provide sufficient notice for advertisement restriction periods: Ahead of plans to enact a restriction period for political ads and/or disable the creation of new political ads in an election period, platforms should provide advertisers and users sufficient notice of any restrictions.
Policy example
In November 2022, Meta implemented a restriction period for ads about social issues, elections, or politics in the United States for the week leading up to Election Day.20
Political ads
For the purposes of this report, the terminology “political ads” encompasses all social issues, elections, and politics advertisements placed on social media platforms. Ads about social issues, elections, or politics are:21
- Made by, on behalf of, or about a candidate for public office, a political figure, a political party, a political action committee, or advocates for the outcome of an election to public office; or
- About any election, referendum, or ballot initiative, including get-out-the-vote or election campaigns; or
- About social issues in any place where the ad is being placed; or
- Regulated as political advertising.
Social issues are sensitive topics that are heavily debated, may influence the outcome of an election, or result in/relate to existing or proposed legislation.22
Process
- Define and standardize election periods and their risk levels: Platforms should establish clear and defensible high-, medium-, and low-risk periods that articulate the degree and severity of risk mitigation each platform is willing and able to deploy, including against trade-offs of user risk, reputational risk, and risk to democratic process. Companies should employ strategies for the pre-election period and Election Day, as well as the time periods following poll closures but before winners have been declared, after winners have been declared by news outlets, and following election certification and the transfer of power. Each of these time periods has unique risks and potential harms to the outcome of elections, and therefore, platforms should employ appropriate mitigations to address them.
Example risk categories that may warrant certain threat mitigations are outlined below.
- Election cycle: Low (the specific risk to the integrity of the next election is low across the whole cycle, but medium and high during the other periods)
- Voting periods: Medium to high
- Ballot processing: High
- Majority of news and official sources: High
- Certification of results: Medium
- Transfer of power: Medium
- Anticipate democratic destabilization and apply learnings from 2021 and 2023: Platforms should build and fortify processes in advance to be ready should another attempted insurrection occur, such as those of January 6, 2021, in the United States and January 8, 2023, in Brazil. These processes should specifically incorporate plans on enhanced content moderation, rapid-response action to reduce misinformation virality, and harm reduction to users in election time periods.
- Expand external partnerships with election authorities: Platforms should expand third-party fact-checker and trusted flagger programs to include election authorities in all high-risk 2024 election markets.23 This enables election authorities to quickly escalate content that may be illegal or otherwise pose informational integrity risks to users during sensitive election periods. While these programs have been under fire from some political actors and elected officials,24 they remain essential and require clear guidelines and transparency to maximize public trust while minimizing bad faith attacks.
- Audit machine learning content classifiers for political bias: Platforms should audit automated content classifiers in high-risk election markets for political bias and publish a broad analysis of the results, along with commitments to correct any biases, at least one month before election day in each market. They should also commit to publishing the analysis in a standardized format shared across platforms. (A minimal sketch of what such an audit could measure follows this list.)
- Audit the civic graph: At least six months prior to the election, platforms should conduct a comprehensive audit of the civic graph, or the list of accounts companies consider to be political in nature, such as public figures, candidates, elected officials, political institutions, and governmental bodies. They should ensure there are no duplicates, fake accounts, or incorrect inclusions in any high-risk market and cooperate to share those authoritative sources with other platforms and/or the public. This is especially critical to maintaining public trust given the recent devaluation of X’s (formerly known as Twitter) verification program.25
- Audit civic corpuses: Platforms should audit legacy civic corpuses—lists of words used to identify content to block, make ineligible for content recommendation, remove from ads, etc.—that were used in past elections, for example as block lists and deny lists, in all high-risk market languages. They should also ensure civic corpuses are updated, adhere to current civic policies, and are accompanied by appropriate operational guidelines.
- Review political ads and enforce ad policies in a timely manner: Given the time-sensitive nature of election periods, it is critical that platforms review political advertisements on a high-priority and rolling basis to prevent critical delays and, conversely, to keep inappropriate advertisements from being posted.
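The classifier bias audit recommended above could be operationalized in many ways. Below is a minimal sketch, in Python, of one possible measurement: comparing false positive rates across the political leanings of content sources on a human-reviewed sample. The sample records, field names, and categories are hypothetical and for illustration only; they are not drawn from any platform’s actual tooling.

```python
"""Minimal sketch of a political-bias audit for a content classifier.

Assumptions (not from the report): a human-reviewed evaluation sample where
each record holds the political leaning of the content's source, a ground-truth
label, and whether the classifier flagged the item. All values are hypothetical.
"""
from collections import defaultdict

# Hypothetical records: (political_leaning, human_label, classifier_flagged)
sample = [
    ("left", "benign", True),
    ("left", "violating", True),
    ("right", "benign", False),
    ("right", "benign", True),
    ("center", "violating", True),
    # ... in practice, thousands of reviewed items per market and language
]

def false_positive_rates(records):
    """False positive rate (benign content wrongly flagged) per political leaning."""
    flagged_benign = defaultdict(int)
    total_benign = defaultdict(int)
    for leaning, label, flagged in records:
        if label == "benign":
            total_benign[leaning] += 1
            if flagged:
                flagged_benign[leaning] += 1
    return {l: flagged_benign[l] / total_benign[l] for l in total_benign if total_benign[l]}

print(false_positive_rates(sample))
# A large gap in false positive rates across leanings would be one signal of
# political bias worth publishing and correcting ahead of an election.
```

A published audit would likely pair several such metrics (false positive rate, false negative rate, appeal overturn rate) with the standardized reporting format recommended above.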
Protocol
- Fact-check electoral content: Platforms should prioritize the fact-checking of electoral content, including political advertisements and content originating from public officials, utilizing a mixture of tools such as third-party fact-checking programs and community notes.26
- Expand and enhance accurate information workstreams: Platforms should bolster accurate information workstreams for all election-related content, including election information centers with polling place data. Moreover, they should actively update and maintain all accurate information workstreams that draw on the Civic Information API (maintained by Google).27 (A brief sketch of querying that API appears after this list.)
- Anticipate brigading or attempts to take over content surfaces: Platforms should establish protocols for coordinated brigading or other planned informational attacks on hashtags and other content aggregation surfaces. They should also ensure they have the appropriate technical tooling and escalation pathways to respond in the event of an incident of this nature.
- Reinforce the Oversight Board’s mission and expand capacity: The Oversight Board, which was created by Meta to address some of the content on some of its platforms,28 should aggressively press to expand its mandate and jurisdiction over Meta properties and products. The Oversight Board should prioritize election-related cases and allocate additional resources to reviewing these cases in a timely manner so that lessons and policy updates can be passed on to subsequent 2024 elections. The Oversight Board should demand explicit clarity from Meta that it has oversight over Threads and should be granted the ability to examine advertising policies and advertisements on each of Meta’s platforms.
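As an illustration of the kind of authoritative voting data the Civic Information API exposes, below is a minimal sketch of retrieving polling locations for a voter’s address. The endpoint and response fields follow Google’s public documentation for the API, but the API key and address are placeholders, and the details should be verified against current documentation rather than treated as definitive.

```python
"""Sketch: pulling polling place data from Google's Civic Information API.

Endpoint and field names follow the API's public documentation, but treat
them as assumptions to verify; the API key and address are placeholders.
"""
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential
BASE_URL = "https://www.googleapis.com/civicinfo/v2/voterinfo"

def polling_locations(address: str) -> list[dict]:
    """Return polling locations for a voter's registered address, if available."""
    resp = requests.get(BASE_URL, params={"key": API_KEY, "address": address})
    resp.raise_for_status()
    return resp.json().get("pollingLocations", [])

if __name__ == "__main__":
    for location in polling_locations("1600 Pennsylvania Ave NW, Washington, DC"):
        print(location.get("address"), location.get("pollingHours"))
```

An election hub could surface this kind of localized data directly alongside election-related content, rather than relying on users to seek it out.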
A note on Meta’s newest platform: Threads
With more than 100 million downloads in less than a week, the successful launch of Meta’s newest platform, Threads, has been touted as the fastest-growing social media platform in history.29 Statistics are popping up all over the internet highlighting its record-breaking expansion, even juxtaposing its growth with other rapidly developing consumer technologies of today, such as GAI.
Even as this rapid expansion has cooled, Meta still has a profound responsibility to uphold legacy trust and safety values, protect users from harm, and ensure Threads is a respectful and positive space online. As Meta grapples with these responsibilities, it should also consider the role Threads will play in election-related discourse next year.30 While it may not currently be the platform to discuss social issues, politics, and democracy, it may evolve between now and the end of next year based on shifts in user behaviors, public discourse, and the competitive landscape.
Below are Threads-specific questions for Meta to consider, noting it has already started to address these publicly, as it builds additional features and expands its availability to more countries globally.
Policy application:
- What existing Instagram content and platform policies will apply to Threads content?
- How will certain policies be changed or enhanced to cover text-specific violations?
- Are there opportunities to utilize legacy policies from Facebook Blue, which have generally had to cover text-based posts in ways that Instagram has not, in new ways for Threads?
- Which policies are explicitly not applicable to Threads use cases?
The civic graph:
- Have Threads accounts been mapped to the civic graph?
- Will these accounts be similarly cross-checked or receive additional review when reported?
Emergency election mitigations:
- Does Meta need to build bespoke election mitigations for Threads?
- What will and will not carry over from Instagram?
- Have these been tested in advance of election periods?
Advertisements, including political ads:
- While Threads does not have ads yet, what will the future state of ads, including political and social issue ads, look like?
- Will Facebook, Instagram, or some other policies apply?
Third-party fact-checking and trusted flagger programs:
- Do these exist for Threads today? Is the infrastructure to support fact checks in place?
- Does Threads support information overlays and friction to warn users?
Election hub:
- Will Threads have a dedicated elections hub?
- Will references to authoritative information redirect to a different Meta elections hub?
Oversight Board:
- Will the Oversight Board hear Threads cases?
- If not, should Meta expand the board’s mandate to include Threads?
Transparency:
- Will Threads content actions be incorporated into Meta transparency reports?
- Will it have a dedicated report?
Staffing:
- Are there new teams being stood up to review Threads content?
- Are existing operations teams and reviewers also responsible for Threads?
- How does policy training differ between the two?
Classifier performance gaps:
- Are legacy machine learning models trained on Facebook and Instagram content performing similarly on Threads content?
- If not, are there plans to improve precision and recall and specifically train on Threads-style short-form text?
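The last question above concerns precision and recall on Threads-style text. As a minimal, hypothetical sketch of how such a gap could be measured, the snippet below compares a classifier’s precision and recall on human-labeled samples from two surfaces; the labels and predictions shown are toy values for illustration only.

```python
"""Sketch: measuring a classifier's precision/recall gap on a new surface.

The labels and predictions are hypothetical toy values; in practice the inputs
would be human-reviewed samples drawn separately from each product surface.
"""
from sklearn.metrics import precision_score, recall_score

def evaluate(y_true: list[int], y_pred: list[int], surface: str) -> None:
    """Print precision and recall for one surface's labeled sample."""
    print(f"{surface}: precision={precision_score(y_true, y_pred):.2f}, "
          f"recall={recall_score(y_true, y_pred):.2f}")

# 1 = violating, 0 = benign; toy values for illustration only
facebook_true, facebook_pred = [1, 0, 1, 0, 1, 0], [1, 0, 1, 0, 1, 1]
threads_true, threads_pred = [1, 0, 1, 0, 1, 0], [0, 0, 1, 1, 0, 1]

evaluate(facebook_true, facebook_pred, "Facebook-trained surface")
evaluate(threads_true, threads_pred, "Threads short-form text")
# A materially lower score on Threads would suggest the model needs retraining
# or fine-tuning on Threads-style short-form posts before election periods.
```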
Transparency
CAP recognizes that significant transparency efforts are already underway and recommends that platforms continue to build on and expand them, as well as ensure appropriate staffing is dedicated to this work heading into 2024, particularly given the high volume of elections in a single calendar year. Online platforms should invest heavily in updated and expanded localized election-related transparency efforts to maintain user trust and promote accurate information, including by taking the following steps:
- Enhance existing transparency reporting: Platforms should increase the frequency with which they publish transparency reports31 before, during, and after key elections, with additional data points on election risks and content, including dedicated reports on elections-related mitigations and content takedowns. These could include political advertisements by volume and engagement, high-risk civic escalations, and more.
- Publish election post-mortem analysis: Platforms should publish first-party risk analysis post-mortems, and commission third-party ones, of high-profile election escalations that surfaced unmitigated threats to democracy and voters, alongside commitments to plug these gaps in future elections. These should be published following each high-risk election in 2024, and lessons should be directly applied to subsequent elections.
- Offer greater insight into civic ranking adjustments: Platforms should be transparent about any active or planned use cases for turning political or social issue content classification machine learning models on or off on ranked surfaces, including feed, recommendations, and immersive video browsers. This is especially important during emergency and high-risk scenarios that threaten free and fair democratic processes.
- Highlight AI use cases: Platforms should publish at least two AI reports detailing how AI is being used in content moderation and election risk mitigation efforts or AI content issues that were encountered. One of these reports should be published before elections, and the other should be published in late 2024 or early 2025 upon completion of the high-risk election cycle.
- Standardize transparency reports: Platforms should standardize reporting formatting across companies, countries, languages, and more, while offering a clear user interface for users to query the data and download the raw data for analysis.
- Continue investment in political ads transparency: Platforms should offer transparency into political advertisements by continuing investment in ad libraries and building data accessibility features for users with clear interfaces and data download functionality. Platforms should universally uphold the requirements from the state of Washington’s campaign finance transparency law,32 including by maintaining campaign ad records and making them available to the public. For instance, information related to the cost of the ad, the sponsor of the ad, and targeting and reach of the ad should be made available.
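As one illustration of the ad library transparency recommended above, Meta’s Ad Library API already exposes political and issue ads programmatically. The sketch below queries it for basic sponsor, spend, and reach fields; the endpoint, parameters, and field names follow Meta’s public documentation at the time of writing, but the API version, field list, and access token are assumptions and placeholders to verify.

```python
"""Sketch: querying Meta's Ad Library API for political and issue ads.

The ads_archive endpoint, parameters, and fields reflect Meta's public
documentation at the time of writing; treat the API version, the field list,
and the access token as assumptions and placeholders to verify.
"""
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; requires Ad Library API access
URL = "https://graph.facebook.com/v18.0/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["US"]',
    "search_terms": "election",
    "fields": "page_name,spend,impressions,ad_delivery_start_time",
    "limit": 25,
}

resp = requests.get(URL, params=params)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    # Sponsor, spend range, and reach are the kinds of records the report
    # recommends keeping publicly accessible.
    print(ad.get("page_name"), ad.get("spend"), ad.get("impressions"))
```

A clear user interface and a bulk download option on top of this kind of endpoint would satisfy the data accessibility features recommended above.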
Staffing and personnel
Given the significant and widespread layoffs throughout the technology sector in recent years,33 it is more important than ever that platforms allocate appropriate staffing toward election-related issues starting in 2023 and through the high-risk elections of 2024. Companies should use this high-profile election season to demonstrate that previous commitments to democratic safety were not merely a byproduct of a thriving global economy and excess hiring capacity. Specifically, they should take the following steps:
- Increase the volume of election support staff: Platforms should fortify staffing on election integrity teams, especially with language support capacity for global, non-English-language markets with 2024 elections.
- Create or maintain horizontal election teams: Given the high volume of elections in 2024, platforms should establish a unit or team to oversee them and ensure global, language-based coverage, with a clear escalation pathway to company leadership should the need arise. These teams should be charged with carrying knowledge, lessons learned, and road maps for improvement across elections chronologically.
- Prevent additional restructuring: Platforms should prohibit further downsizing of elections or election-adjacent teams, such as those focusing on misinformation, health, integrity, etc., due to companywide layoffs or other organization restructuring events.
- Prioritize language support: Platforms should ensure dedicated and sufficient language support for content moderation in the languages of high-risk election markets, as defined by platforms.
- Be transparent about AI use in content moderation: Platforms should disclose to content moderators internally and via AI transparency reporting externally how AI is used in content moderation, specifically on how it complements human review of content and advertisements.
External product changes
A perennial but nonetheless crucial component of election risk mitigation work involves platforms shipping user-facing product changes and enhancements directly within the user interface that clearly and simply educate users, promote civic engagement, and reduce exposure to risky or harmful content. While seemingly easy to implement, the following steps have significant potential for positive impact while enhancing stakeholder and user trust:
- Enhance reshare friction: Platforms should introduce or maintain resharing friction to reduce the distribution of content containing election-related, fact-checked misinformation.
- Provide additional context on election content: Platforms should employ context-adding information labels on election-related content with links to their respective election hubs—centralized, up-to-date election information centers—accurate voting info, etc.
- Uprank authoritative information: To dampen the proliferation of misinformation and ensure users are receiving accurate election-related information, platforms should uprank accurate, authoritative, and localized election and voting information from trusted sources in elections-related search results.
- Further develop civic engagement products: Platforms should continue to create and promote stickers and other ways for users to share that they voted as part of an effort to encourage civic engagement on platforms.
- Extend warning screens to embedded surfaces: Platforms should ensure external features such as warning screens, additional context, or fact-check overlays are present on embedded applications—for instance, YouTube videos displayed on another website—in addition to the native site.
Researcher access
Providing researchers with access to platform data is a crucial component to ensuring platforms are held accountable throughout an election cycle. These unbiased third parties can further trust and safety initiatives by scrutinizing platforms, offering recommendations and solutions, and making their findings publicly available. Specifically, platforms should take the following steps:
- Deepen research partnerships: Platforms should partner with prominent research organizations and white-hat hackers35 to harness company data to proactively surface election risks and commit resources to mitigate these risks in a timely manner.
- Provide access in real time: Platforms should provide real-time access of social media data to external researchers and watchdogs.
- Maintain access to analytic products such as CrowdTangle: Platforms should prioritize support and continuity of open-access data products such as CrowdTangle.36
- Make APIs free and open: Platforms should ensure application programming interfaces (APIs) are free and accessible to researchers.37
- Prioritize Digital Services Act compliance: Platforms should comply with researcher access requirements as stipulated in the EU Digital Services Act.
- Conduct elections-specific research: Platforms should start or expand efforts to conduct elections-specific research into platform impacts on election outcomes, polarization, and user behaviors, such as Meta’s 2020 research to understand Facebook’s and Instagram’s role in the U.S. 2020 election.38 Platforms should engage in this type of research more frequently and sooner following the conclusion of an election.
Recommendations for AI companies around elections
The development and deployment of AI, including GAI and LLMs, is moving swiftly and may outpace any other technological advancements to date. Companies building this technology, which vastly differs from legacy social media, must also do their part to mitigate elections-related threats and safeguard democracy online.
Below are specific recommendations for these platforms:
- Create general usage and elections-specific policies: These policies, such as OpenAI’s usage policies,39 will be critical to upholding trust and safety values and ensuring users are not misled or harmed and do not abuse systems to propagate harmful or dangerous behaviors. Election policies should simultaneously promote accurate information and prohibit election misinformation.
- Be transparent about enforcement of usage policies: In addition to publishing these policies, platforms must enumerate how they will enforce them and by what mechanisms. Companies should be transparent about actions taken and mitigations deployed in pursuit of these policies via a quarterly transparency report.
- Clearly outline first- and third-party enforcement: Usage policies should articulate differences between enforcement in first- and third-party use cases of GAI models, and companies should be transparent about data-sharing between the two—for instance, whether OpenAI sees content from a third-party company utilizing ChatGPT.
- Democratize LLMs for content moderation: GAI platforms should make their content moderation models40 available for free or, at minimum, at a reduced cost, especially when used to moderate content generated by their own AI models.
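As one existing example of low-cost moderation access, OpenAI exposes a moderation endpoint that screens text against its safety categories (though not election misinformation specifically). The sketch below uses the openai Python client; the client interface and response fields reflect the library’s v1-style API but should be treated as assumptions to verify against current documentation.

```python
"""Sketch: screening text with OpenAI's moderation endpoint.

Uses the openai Python client (v1-style interface); the method names and
response fields are assumptions to verify against current documentation.
Requires OPENAI_API_KEY to be set in the environment.
"""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text."""
    response = client.moderations.create(input=text)
    return response.results[0].flagged

if __name__ == "__main__":
    print(is_flagged("Example post text to screen before publishing."))
```

Making this class of tooling free or cheap, especially for content generated by a company’s own models, is the substance of the recommendation above.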
Conclusion
To ensure billions of people across the globe can engage in the democratic process without interference or undue risk in 2024, social media companies and GAI developers must do their part and act responsibly to uphold democratic values and protect users and systems from abuse and harm. In combination with effective local laws, regulations, election infrastructure protections, and increased digital literacy, these recommendations underscore the importance of the role of the private sector in maintaining free and fair elections as so much of our democracy moves online.
The rise of political actors who undermine and refuse to accept the outcomes of elections has contributed to the degradation of trust in elections across the world. While the private sector is not solely responsible for protecting democracy, the concentration of users on a few social media platforms gives the companies that run those platforms vast influence and great responsibility for the protection of digital democracy. Similarly, the introduction of new GAI tools to hundreds of millions of users with few guardrails for use or abuse means those new companies and new technologies must quickly put in place ways to protect elections. The recommendations in this report are a starting point for that conversation, and action must begin immediately as the elections of 2024 are almost here.
Acknowledgments
The author would like to thank Adam Conner, Sydney Bryant, Rebecca Mears, Tom Moore, Greta Bedekovics, Ben Olinsky, Katie Harbath, Jesse Lehrich, Belle Torek, Chester Hawkins, Steve Bonitatibus, and Shanée Simhoni for their contributions to this report.
Appendix: Checklist for platforms