Deepfakes barely affected the 2024 election because they weren’t very good, study finds
It seems that although the Internet is increasingly drowning in fake images, we can at least take some stock in humanity’s ability to smell BS when it matters. A spate of recent research suggests that AI-generated disinformation had no significant impact on this year’s elections around the globe because it’s still not very good.
Over the years, there have been many concerns that increasingly realistic but synthetic content could manipulate audiences in harmful ways. The rise of generative AI has reignited those fears, as the technology makes it much easier for anyone to produce fake visual and audio media that appear real. Earlier this year, a political consultant used AI to spoof President Biden’s voice for a robocall telling New Hampshire voters to stay home during the state’s Democratic primary.
Tools like ElevenLabs make it possible to upload a brief clip of someone speaking and then clone their voice to say whatever the user wants. Though many commercial AI tools include guardrails to prevent this misuse, open-source models are freely available.
Despite these capabilities, the Financial Times, in a new story, looked back on the year and found that, across the world, very little synthetic political content went viral.
It cites a report from the Alan Turing Institute, which found that just 27 pieces of AI-generated content went viral during this summer’s European elections. The report concluded that there was no evidence the elections were influenced by AI disinformation because “most exposure was concentrated among a minority of users with political views already aligned to the ideological narratives embedded within such content.” In other words, among the few who saw the content (before it was presumably flagged) and were primed to believe it, it reinforced existing beliefs about a candidate, even if those exposed knew the content was AI-generated. One example cited was AI-generated imagery showing Kamala Harris addressing a rally while standing in front of Soviet flags.
In the US, the News Literacy Project identified more than 1,000 examples of misinformation about the presidential election, but only 6 percent were made using AI. On X, mentions of “deepfake” or “AI-generated” in Community Notes correlated more with the release of new image-generation models than with elections.
Interestingly, social media users seemed more likely to misidentify real images as AI-generated than the other way around, but overall they showed a healthy dose of skepticism.
If the findings are accurate, they would make a lot of sense. AI imagery is everywhere these days, but images generated with the technology still have an off-putting, uncanny quality, displaying telltale signs of being fake. An arm may be unusually long, or a face may fail to reflect properly in a mirrored surface; there are many small cues that give a synthetic image away.
AI proponents shouldn’t necessarily be happy about this news, as it means generated imagery still has a long way to go. Anyone who has checked out OpenAI’s Sora model knows the video it produces just isn’t very good: it looks almost like something created by a video game graphics engine (there is speculation it was trained on video games), one that clearly doesn’t understand properties like physics.
All that being said, there are still reasons for concern. The Alan Turing Institute report did, after all, conclude that beliefs can be reinforced by a realistic deepfake containing disinformation, even if the audience knows the media is not real; that confusion over whether a piece of media is genuine damages trust in online sources; and that AI imagery has already been used to target female politicians with pornographic deepfakes, which can harm them psychologically and damage their professional reputations, as it reinforces sexist beliefs.
The technology will certainly continue to improve, so it’s something to keep an eye on.