Negative Campaigning: The Danger of Deepfakes for Politics and Business

Written by Daniel Heinzlmeier

30 November 2023


With artificial intelligence (AI), things are possible today that would have been unthinkable just a short time ago. For instance, image, video, and audio recordings can now be created within minutes that appear real at first glance. This opens up many new possibilities – unfortunately, not all are positive. The use of AI in negative campaigning has reached alarming levels, bringing it into the focus of corporate communication and PR.

Negative campaigning generally refers to political or advertising strategies aimed at damaging the reputation of an opponent or competitor. Previously, this involved spreading compromising information or accusations to influence public opinion. Now, with the use of deepfakes – AI-manipulated media – negative campaigning has gained a new dimension. This technology enables the creation of highly realistic fake content depicting people in scenarios that never occurred.


Current situation: AI and Negative Campaigning

A notable instance occurred in March this year when supposed images of Trump’s arrest circulated on social networks. These images, created by journalist Eliot Higgins using the AI imaging program Midjourney, went viral and were initially believed to be real by many users.

In Germany, the AfD (Alternative for Germany) has faced criticism for spreading AI-generated images on their social media channels. For example, a picture of a group of aggressive-looking young men of foreign origin with the caption “No to more refugees!” was published on Norbert Kleinwächter’s Instagram in March. The AfD described these as “symbolic images,” with Kleinwächter stating they were clearly “caricatures,” though the clarity of this remains questionable.


Deepfake Videos: Manipulation Risks for Politics and Elections

AI-generated images are one thing, but deepfake videos of well-known personalities could pose a greater threat, especially since many people are unaware of how advanced this technology has become. Inexpensive audio deepfake services can now produce a digital copy of someone's voice. Combined with an AI video service, an existing interview clip can then be lip-synced to entirely new text.

German Foreign Minister Annalena Baerbock has repeatedly been a target of deepfakes. A fake video on TikTok showed her announcing a deposit bottle tax. Although the video and its description indicated it was fake, many users might have been deceived without this label. Even when originally intended as satire, such videos can quickly become tools for spreading disinformation.

The power of such videos is significant, especially considering the continuous improvement in technology. During the U.S. primary elections for the upcoming presidential race, the DeSantis camp produced a clip with fake images of Trump embracing epidemiologist Anthony Fauci, which did not sit well with Republicans. Trump’s team retaliated with a video of DeSantis in women’s clothing talking to Satan. This preview hints at what we might see in the 2025 federal elections in Germany.


Implications for Corporate Communication

Deepfakes can also significantly impact businesses. For example, on May 22, 2023, a Twitter account resembling local U.S. TV stations tweeted an image of an alleged explosion near the Pentagon, briefly moving stock indices. This illustrates how quickly negative campaigning can spill over into business. Corporate communication departments must prepare for such threats.

Potential scenarios include fake videos of company leaders making manipulated statements, damaging the company’s credibility. Such tactics could be used by former employees or competitors seeking an advantage.

Given the ease of creating deepfakes today, companies need to establish clear guidelines for handling AI-based crises. Measures should include:

  • Awareness and Training: Conducting training and awareness programs to recognize deepfakes.
  • Risk Assessment: Analysing specific risks and identifying vulnerable areas.
  • Establishing Guidelines and Procedures: Implementing procedures for verifying media authenticity.
  • Investing in Detection Technologies: Using AI technologies and software solutions to detect fake content.
  • Monitoring Online Platforms: Continuously monitoring social media for potential deepfake content.
  • Crisis Communication Plan: Developing a detailed crisis communication plan.
  • Collaboration with Experts: Cooperating with cybersecurity, forensic, and AI experts.
  • Regular Updates and Adjustments: Continually updating security protocols in response to technological advancements.
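As a minimal technical illustration of the "verifying media authenticity" measure above: a communications team could publish cryptographic checksums of its official video and audio releases, so that any circulating copy can be checked against the published list. The Python sketch below shows the idea; the function names and workflow are illustrative assumptions, not a complete provenance solution (a file that fails the check is merely unverified, not proven fake).

```python
import hashlib


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a media file, reading it in chunks
    so that even large video files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def is_verified_release(path: str, published_digests: set[str]) -> bool:
    """True only if the file exactly matches a digest the company
    published for one of its official media releases."""
    return sha256_of(path) in published_digests
```

In practice this only proves that a file is an unmodified official release; detecting manipulated content that was never released by the company requires the AI-based detection tools and provenance standards (such as content credentials) mentioned above.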


The emergence and spread of deepfakes pose a serious threat to politics, businesses, and society. However, alongside the rapid advancement of deepfakes, programs and approaches for detecting them are also making promising progress. Hopefully, these will help minimize the impact of manipulated media content and restore trust in digital information. Collaboration among experts in AI, cybersecurity, forensics, and media is crucial for people to fully harness the opportunities of the digital age.