The purported threat of AI-driven falsehoods created an atmosphere of fear and uncertainty as the historic election year approached. With more than half of the world’s population expected to vote in democratic elections across nations as diverse as the U.S., U.K., India, Pakistan, and Bangladesh, the media was plastered with warnings of disruption by AI-generated deepfakes and fake audio, predicting an ‘AI armageddon’. Surprisingly, such dire predictions did not come true. AI-backed misinformation proved far less rampant and infiltrative than anticipated, accounting for less than one percent of misinformation content across platforms and only 1.35 percent of fact-checks.
As the election year concluded, an important question emerged: what impact, if any, did AI have on swaying voters globally? In the U.S., the run-up to the presidential election witnessed an upsurge in AI-generated images, particularly targeting Kamala Harris, the Democratic candidate. Numerous AI-manipulated visuals portrayed Harris alongside controversial figures or alluded to a supposedly scandalous past. These served to inflame existing partisan sentiment rather than to fabricate outright misinformation.
Although the intention was not necessarily to deceive, even the purportedly ‘satirical’ or ‘dark humor’ posts reflected an inherent bias. These AI-created images were generally circulated by accounts notorious for propagating misinformation, suggesting that the primary goal was reinforcing pre-existing biases among committed voters rather than changing anyone’s mind.
AI-produced misinformation was not limited to biased imagery; fake celebrity endorsements also became a prominent strain in the lead-up to the election. For instance, AI-generated images falsely suggesting Taylor Swift had endorsed Donald Trump prompted the singer to issue a genuine endorsement of Kamala Harris, one that 53 percent of surveyed Americans believed could benefit Harris’s campaign.
Calculating the precise vote-impact of any individual piece of AI-manipulated media is arduous, yet polls during the race showed no significant shifts or sudden declines in support for either candidate, suggesting that the influence of AI-based misinformation was limited. Deepfakes, long the chief AI-related concern, were less widespread than predicted.
The most widely circulated deepfake was debunked almost immediately. Rumors of a deepfake-driven ‘October surprise’ stirred within news circles but never materialized. Nor was AI-driven misinformation purely a domestic issue; traces of foreign interference were found.
An AI-modified video featuring a supposed former government aide was debunked, its origins traced to a group connected to a contentious figure. Concurrently, U.S. authorities seized internet domains used in foreign-led influence operations that relied heavily on AI-generated content and ads to undermine candidates’ campaigns.
Several firms acted preemptively, blocking attempts to use their AI generators to create deepfakes of public figures. Other organizations similarly reported disrupting operations that sought to exploit their models around the world.
Europe braced itself for the European Parliament elections in June amid similar jitters about the so-called ‘AI armageddon’. Again, the anticipated danger from AI misinformation failed to materialize. Following the election, the European Union’s AI Act came into effect, possibly motivating platforms to act swiftly against AI misinformation.
In the United Kingdom, conventional techniques such as circulating rumors and fabricating articles remained predominant in misinformation efforts. While there were verified instances of AI-crafted disinformation and deepfakes, their reach was limited.
In two other regions, local fact-checkers reported AI-based disinformation being used for negative campaigning, with the primary narratives revolving around deepfake videos falsely attributed to political representatives. The swift spread of these videos and the shortage of tools to combat them presented a formidable challenge.
‘Identifying manipulated audio required specific tools and expertise, which weren’t always readily available; the same was true for deepfakes’, one fact-checker remarked. Partisans, moreover, often pushed back against these corrections, accusing the fact-checkers of bias.
In Bangladesh, too, the absence of reliable technology posed problems. ‘A major challenge was handling claims about leaked phone recordings of political and administrative figures’, one fact-checker said. ‘Corroborating these claims was impossible due to the lack of audio detection technology.’
This raises a red flag about the lingering threat of AI disinformation in nations without proper measures to shield the average voter from manipulation. While regulation and educational interventions could improve the quality of information on social media, the issue fundamentally rests with the actors rather than the technology: AI amplifies existing misconceptions rather than fabricating misinformation from scratch.
Future advances in the technology could, however, alter this status quo. ‘There is a growing concern that this tech will enable malign actors to manipulate individuals more effectively by serving AI-created content tailored to their hopes and fears’, one expert noted. Even so, focusing solely on AI’s possible adverse effects would be myopic.
Instead, efforts should be directed towards harnessing AI’s potential to enrich the quality and diversity of public discourse. Such a holistic view could foster productive discussion and prevent the unnecessary demonization of AI technology.