News You Can Misuse

Disinformation groups used AI to spread propaganda.

Synthetic avatar created using the Synthesia demo

Political forces used a commercial AI service to generate deepfaked propaganda.

What’s new: Videos have appeared on social media that show AI-generated characters speaking against the United States or in favor of foreign governments, The New York Times reported. The clips feature synthetic avatars offered by the UK-based startup Synthesia.

Found footage: Researchers at Graphika, which tracks disinformation, discovered deepfaked videos posted on YouTube by accounts tied to a disinformation network.

  • Two videos show fake news anchors who deliver commentary. One accuses the U.S. government of failing to address gun violence in the country, and another promotes cooperation and closer ties between the U.S. and China. Both clips bear the logo of a fictional media outlet, Wolf News. Neither garnered more than several hundred views.
  • In January, a U.S. specialist in African geopolitics found videos in which synthetic characters who claim to be U.S. citizens voice support for Burkina Faso's military leader Ibrahim Traoré, who seized power in a coup last year.

Deepfake platform: Synthesia’s website provides 85 avatars, each based on a human actor, which customers can pose and script in any of 120 languages or accents. The company’s terms of service bar users from deploying its avatars for “political, sexual, personal, criminal and discriminatory content.” It employs a team of four to monitor violations of its terms and suspended Wolf News’ account after being alerted to the videos.

Fakery ascendant: The recent clips may represent an escalation beyond earlier incidents, which appear to have been one-offs that required custom development.

  • Shortly after the Russian invasion of Ukraine in early 2022, hackers posted a deepfaked video of Ukrainian president Volodymyr Zelenskyy encouraging Ukrainian forces to surrender.
  • Both leading candidates in the 2022 South Korean presidential election deployed AI-generated likenesses of themselves answering questions from the public.
  • In 2019, a deepfaked video in which a Malaysian politician appeared to admit to a sex act fueled a scandal.

Why it matters: Experts have long feared that AI would enable a golden age of propaganda. Point-and-click deepfakery gives bad actors an unprecedented opportunity to launch deceptive media campaigns without hiring actors or engineers.

We’re thinking: Researchers at Georgetown University, Stanford, and OpenAI recently described several measures — including government restrictions, developer guidelines, and social media rules — to counter digital propaganda. The simplest may be to educate the public to recognize underhanded efforts to persuade.
