In the increasingly digitized landscape of modern politics, the combination of Artificial Intelligence (AI) and global election processes has ushered in both innovation and concern.

As people navigate the intricate interplay between technology and democracy, one pressing issue comes into focus: how AI amplifies global election disinformation.

This intersection of politics and technology presents a complex tapestry where the algorithms designed to streamline our lives also have the potential to distort our democratic processes.

The World Economic Forum has identified misinformation as a primary societal concern for the upcoming two years, while prominent news organizations warn that disinformation represents an unparalleled danger to democracies on a global scale.

AI’s role in amplifying global election disinformation cannot be overstated, with its algorithms serving as potent mechanisms for spreading misleading narratives and deceptive campaigns.

Recent examples illustrate the spectrum of this phenomenon. During the 2020 U.S. presidential election, researchers found that AI-generated deepfake videos and automated bots flooded social media platforms, disseminating false information and sowing discord among voters.

According to a report by the Center for Strategic and International Studies (CSIS), these AI-driven disinformation campaigns not only targeted individual candidates but also sought to undermine the integrity of the electoral process itself.

This technology gives attackers the tools to publish disinformation at unprecedented levels. Fake news is already widespread, and studies show that people are increasingly inclined to accept news without substantial evidence.

Tricking voters worldwide

Artificial Intelligence (AI) algorithms have the ability to analyze vast amounts of data, identify trends, and target specific demographics, making them powerful tools for those seeking to manipulate public opinion.

From targeted disinformation campaigns to sophisticated deepfake technology, AI is amplifying global election disinformation, tricking voters on an unprecedented scale.

AI's role in disinformation campaigns:

  • AI-powered bots and algorithms are increasingly being deployed to spread false narratives and manipulate public opinion.
  • According to a study by the University of Southern California, AI-generated content accounted for 23% of all political tweets during the 2018 U.S. midterm elections.
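One crude but intuitive signal that researchers use when looking for automated accounts is the regularity of posting times: bots often post at near-constant intervals, while human activity is bursty and irregular. The sketch below is purely illustrative (the threshold and function name are invented for this example), not how any platform actually detects bots.

```python
from statistics import mean, stdev

def looks_automated(post_times, cv_threshold=0.1):
    """Flag an account whose posting intervals are suspiciously regular.

    post_times: sorted list of POSIX timestamps (seconds).
    cv_threshold: illustrative cutoff on the coefficient of variation
    (std/mean) of inter-post gaps; real detectors combine many signals.
    """
    if len(post_times) < 3:
        return False  # not enough data to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    avg_gap = mean(gaps)
    if avg_gap == 0:
        return True  # many posts at the same instant
    cv = stdev(gaps) / avg_gap
    return cv < cv_threshold

# A bot posting every 600 seconds exactly vs. a human with irregular gaps:
bot = [i * 600 for i in range(10)]
human = [0, 700, 1500, 5000, 5200, 9000, 20000]
```

A single heuristic like this is easy to evade, which is why the studies cited above rely on ensembles of behavioral, network, and content features rather than any one statistic.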

Deepfakes and misleading content:

  • Deepfake technology, which uses AI to create realistic but fabricated audio and video content, poses a significant threat to electoral integrity.
  • Research from Deeptrace Labs indicates that the number of deepfake videos online has doubled in the past year, with political figures often being the targets.

Social media manipulation:

  • Social media platforms have become breeding grounds for AI-driven disinformation campaigns.
  • A report by the Atlantic Council's Digital Forensic Research Lab found that 30% of Twitter accounts discussing COVID-19 were likely bots, spreading misinformation about the pandemic and related political issues.

Targeting vulnerable populations:

  • AI algorithms are adept at identifying and targeting vulnerable populations with tailored disinformation campaigns.
  • A study by Data & Society revealed that minority communities are disproportionately targeted by AI-driven voter suppression tactics, including misinformation about polling locations and voter eligibility.

Implications for democracy:

  • The widespread dissemination of AI-generated disinformation undermines trust in democratic institutions and electoral processes.
  • A survey conducted by Pew Research Center found that 67% of Americans believe that the spread of false information online is a major problem for democracy.

AI’s role in escalating deception

As AI technologies continue to develop and proliferate, their role in escalating deception will become increasingly apparent, posing significant challenges to citizens' trust, information integrity, and democratic processes.

Manipulation and disinformation: AI algorithms can be used to spread misinformation and manipulate public opinion through targeted advertising, social media bots, and deepfake technology. This can undermine the integrity of electoral campaigns and distort voter perceptions.

Biased decision-making: AI systems may incorporate biases present in the data used to train them, leading to discriminatory outcomes in areas such as voter registration, districting, and candidate selection. Biased algorithms can perpetuate systemic inequalities and disenfranchise certain groups of voters.

Privacy concerns: AI-driven data analytics can invade individuals' privacy by harvesting and analyzing personal information from social media, internet browsing history, and other sources. This data can be used to create detailed voter profiles and target individuals with tailored political messaging without their consent.

Security risks: AI-powered technologies used in electoral systems, such as electronic voting machines and voter registration databases, are vulnerable to cyberattacks and hacking attempts. Security breaches can compromise the integrity of election results and erode public trust in the electoral process.

Algorithmic opacity: the complex nature of AI algorithms makes them difficult to understand and interpret, leading to a lack of transparency and accountability in decision-making processes. Citizens may feel disenfranchised if they cannot comprehend how AI systems influence electoral outcomes.

Social polarization: AI algorithms designed to maximize user engagement on social media platforms can inadvertently exacerbate political polarization by promoting content that reinforces individuals' existing beliefs and biases. This can contribute to the spread of echo chambers and undermine constructive political discourse.
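The feedback loop described above can be sketched in a few lines: if a feed ranks posts by predicted engagement, and engagement is higher for posts that match a user's existing stance, the feed drifts toward one-sided content. The numbers and function names below are invented for illustration; real recommender systems are far more complex.

```python
def predicted_engagement(user_stance, post_stance):
    """Toy model: engagement is higher the closer a post's stance
    is to the user's own (stances range from -1.0 to +1.0)."""
    return 1.0 - abs(user_stance - post_stance) / 2.0

def rank_feed(user_stance, posts, k=3):
    """Return the k posts an engagement-maximizing ranker would show."""
    return sorted(posts,
                  key=lambda p: predicted_engagement(user_stance, p),
                  reverse=True)[:k]

# Five posts spanning the opinion spectrum:
posts = [-1.0, -0.5, 0.0, 0.5, 1.0]
# A strongly partisan user (+1.0) is shown only agreeable content;
# a user at -1.0 sees the mirror image.
feed = rank_feed(1.0, posts)
```

Even though the ranker has no notion of "polarization," optimizing engagement alone is enough to filter out opposing viewpoints, which is the echo-chamber dynamic the paragraph describes.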

Erosion of democratic norms: the proliferation of AI-driven disinformation campaigns and manipulation tactics can erode public trust in democratic institutions and processes. Skepticism and cynicism towards electoral outcomes may lead to apathy, disillusionment, and decreased voter turnout.

Defending the Vote

With elections taking place around the world in 2024, the task of identifying misinformation (incorrect or misleading information spread inadvertently) and disinformation (information intentionally crafted to deceive) will become more difficult, particularly as deepfake technology grows more sophisticated.

“I think one change we’re going to see in 2024 is the rapid acceleration and quality of artificial intelligence tools. Someone can take an authentic video of any candidate and change what they've said, and mimic their voice, and make it look like their mouths are saying it,” says Peter Adams, senior vice president of research and design with the News Literacy Project.

Adams further highlights the issue by noting that platforms like YouTube, Facebook, and X (formerly Twitter) are increasingly shirking their responsibility to curb the dissemination of misinformation. To navigate the challenge of discerning truth from falsehood, consider implementing these strategies.

1. Use fact-checking platforms

Several websites are dedicated to verifying the authenticity of statements, images, and videos, regardless of political affiliation. Platforms include FactCheck.org, run by The Annenberg Public Policy Center and renowned for its reliability; PolitiFact, managed by the nonprofit Poynter Institute; and Snopes. The Fact Checker by The Washington Post evaluates information from both sides of the political spectrum.

2. Keep an eye out for AI cues

As AI technology continues to evolve, it's becoming increasingly adept at generating images that appear remarkably realistic. However, due to the current stage of development, there are often subtle hints that can reveal these images as artificial.

According to Adams, even some of the more advanced AI image tools leave subtle telltale signs:

“Some of the most compelling AI image tools look a little cinematic, a little too polished, they tend to look a bit softer — with an almost airbrushed quality.”

3. Adopt a skeptical mindset

While AI is a prominent concern, traditional image manipulation techniques can be equally deceptive. For example, in 2020, an online video supposedly depicting ballot box stuffing in the United States was revealed to be from Russia.

"If something seems too perfect or too alarming, it likely warrants skepticism," advises Jevin West, the founding director of the University of Washington Center for an Informed Public. "It's essential to invest more effort into discerning reality from falsehoods."

4. Image verification

In a recent example from July, an image surfaced online purportedly showing Joe Biden donning a pink suit to commemorate the Barbie movie. However, it was revealed to be an AI-generated fabrication, as confirmed by RumorGuard.

To ascertain the authenticity of an image, use Google's reverse image search feature. Simply visit images.google.com and paste the photo or its link into the search bar.

The results can provide valuable insights into the image's origin and age. Other services, such as TinEye, offer reverse image searching as well.

5. Beware of robocalls

While AI-generated images and videos often capture significant attention, Claire Wardle, co-founder and co-director of the Information Futures Lab at Brown University, is focused on a growing concern: personalized robocalls.

These calls, which may appear to come from legitimate sources, can be customized to address recipients by name and include personal details such as residential addresses.

For example, you might receive a call or text message, supposedly from an official entity, reassuring you by name that bringing identification to the polling place is unnecessary, even though it is in fact required.

Wardle emphasizes the necessity of treating these calls with the same skepticism as phishing emails and warns against accepting their claims unquestioningly.

6. Rely on trustworthy sources

Check that a blogger or website adheres to journalistic standards, indicating a commitment to verifying the information they present. While news organizations may err at times, they typically correct mistakes and uphold rigorous fact-checking protocols.

"Consider the credibility and past record of accuracy and verification of a source," suggests Adams. "If you see a photo or video shared by multiple reputable news outlets, you can have faith in its authenticity. However, exercise caution if you observe it circulating predominantly within hyper-partisan online spheres and echo chambers."

7. Avoid relying solely on one source

Seek out multiple sources, especially when the topic appears extraordinary, like a political candidate sporting a Speedo.

"I always engage in lateral reading, which involves asking, 'Who else is covering this?'" explains Wardle. "Visit Google News, conduct a search. If another credible source is reporting on the same issue, they might provide additional context." West adds that if your search fails to yield other articles on the same subject, it should raise concerns.

What’s next?

The fight against AI-amplified election disinformation requires a multifaceted approach that combines technological innovation, education, collaboration, and regulation.

We don't have to depend on government or tech companies to develop mental resilience. We can all educate ourselves to recognize misinformation by understanding the signs that accompany deceptive rhetoric.

Consider how polio, a once highly contagious disease, was eliminated in most of the world through vaccination and herd immunity. Now, our goal is to establish herd immunity against the tactics used by disinformation peddlers and propagandists.

AI is playing with our democracy.