This November, millions of Americans will head to the polls. While each election is unique, Meta has developed a comprehensive approach to elections on our platforms: one that allows free expression, helps support participation in the civic process, and provides industry-leading transparency for political and social issue ads.
As we prepare for the 2026 cycle, we are setting out how the policies and safeguards we have established over time will apply on our platforms.
Industry-Leading Transparency Around Political Ads
Advertisers who run ads about elections, politics, or social issues are required to complete an authorization process and include a “paid for by” disclaimer. These ads are then stored in our publicly available Ad Library for seven years. Currently, there are more than 18 million US entries in our Ad Library.
Restriction Period: As in previous election cycles, we will block new political, electoral, and social issue ads during the final week of the US election campaign. Ads that have recorded at least one impression prior to this restriction period will be allowed to run during this time. Our rationale for this remains the same as it has been for years: in the final days of an election, we recognize that there may not be enough time to contest new claims made in ads.
Political Ads Created or Edited Using AI: In certain cases, advertisers are required to disclose when they use AI to create or alter ads about social issues, elections, or politics. When an advertiser makes this disclosure, we add information on the ad and in our Ad Library. Our approach to labeling these ads continues to evolve to make it easier for people to recognize ads that may have been edited or generated with AI.
Identifying AI-Generated Content Beyond Political Ads
We use a combination of industry standards and technology (such as C2PA) to help identify content generated or edited using AI outside of ads on our platforms, and label it to promote transparency. We display an “AI info” label for content we detect was generated by an AI tool, and share whether the content is labeled because of industry-shared signals or because someone self-disclosed. We require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so.
Continuing to Connect People With Information From State and Local Officials
Voting Information and Election Day Reminders: As we have for years, we’re showing top-of-feed notifications on Facebook and Instagram to connect people with local and state voting information, including during the primaries. On Facebook, we’re making these notifications easier to understand by showing them in both a person’s selected app language and an additional language if they engage with content in another language. If someone on Facebook or Instagram has moved, or we incorrectly detect their location, they can select Change State to be directed to the right government information.
Search Engine Results: When people search for election-related terms on Facebook, we surface links to official, off-platform state government websites where they can find more information.
Voting Alerts: On Facebook, we continue to work with state and local election officials to send timely Voting Alerts about registering and voting to people in their communities. State and local election officials have sent more than 1 billion notifications via Voting Alerts on Facebook.
Instagram Stickers: On Instagram, we’re continuing to feature and elevate stickers on Stories that direct people to official voting information ahead of registration deadlines and Election Day.
Empowering the Community to Add More Context
Last year, we introduced Community Notes so that people can add context to posts on Facebook, Instagram, and Threads that are potentially misleading or confusing. Unlike traditional third-party fact-checking, Community Notes are written and rated by contributors, not by Meta or a small group of fact checkers. Notes can be submitted on most public content, including posts by Meta, our executives, public figures, and politicians.
Contributors choose what content they feel may benefit from additional context. A note will only be published if it reaches “consensus,” meaning there is agreement among contributors who usually disagree with each other. This helps reduce bias and improve the overall quality of notes that end up being published. Anyone in the US can sign up to be a contributor as long as they’re over the age of 18, have an account that’s more than six months old and in good standing, and either have a verified phone number or be enrolled in two-factor authentication.
Combating Scams
We are focused on protecting people from scams — including those that misuse images of politicians — and we continually review and update our approach. It’s against our policies to feature ads on our platforms that use the images of public figures to scam or defraud people. We continue to put substantial resources toward tackling these kinds of fraudulent ads and are constantly improving our enforcement against them, including suspending and deleting accounts, Pages, and ads that violate our policies.
Scammers are opportunistic and can use elections to lure people into engaging with content under the pretense of political campaigns. To help educate people on how to spot and avoid these threats, we have developed resources in our Scam Prevention Hub. We are also using facial recognition technology to detect and prevent “celeb-bait” ads that use images of public figures to defraud people. We also remove impostor accounts that people report are impersonating them or someone else.
Operational Readiness
We have spent more than $30 billion in the areas of safety and security over the last decade, including to protect elections. On Facebook, Instagram, and Threads, we enforce our policies against voter interference, electoral violence and inaccurate information about when, where, and how to vote in an election. As always, we remove content that violates our policies when we become aware of it. We will also remove content that can potentially lead to physical harm or interfere with elections.
We’re also continuing to expose and disrupt foreign influence operations, including those targeting elections, and have removed 200 networks engaged in Coordinated Inauthentic Behavior since 2017. Our Election Operations Center brings together subject matter experts from across Meta, including our threat intelligence, data science, engineering, research, operations, and legal teams, for real-time monitoring so that we can address potential abuse across our network. As we have since 2018, we will also do this for elections around the globe, including Brazil’s presidential election.
For more information about how we prepare for elections, see our fact sheet here.