A new survey suggests that most Americans distrust generative artificial intelligence models and doubt their ability to provide reliable answers. Jim Duggan of Huntsville, Alabama, uses ChatGPT to help draft marketing emails for his carbon offset business, but he would never trust it with questions about an election. Most of his fellow Americans appear to share that reluctance.
According to a joint survey by The Associated Press-NORC Center for Public Affairs Research and USAFacts, roughly two-thirds of U.S. adults are not confident that AI-powered chatbots or search engines reliably deliver factual information. Despite the growing use of these tools in daily personal and professional life, Americans have not extended them much trust, and skepticism runs highest in the contexts that matter most, such as information about elections.
Earlier this year, a meeting between election officials and AI experts found that AI tools performed poorly even on relatively simple queries, such as asking for the nearest polling place. Just last month, several secretaries of state warned that an AI chatbot tied to a well-known social media platform was spreading false election information. The platform responded by revising the tool to direct users to a federal government website as a primary source of reliable information.
Large AI models that generate text, images, video, or audio on demand remain poorly understood and largely unregulated. They work by predicting the most plausible continuation of a piece of text, which allows them to produce sophisticated responses but also makes them prone to confidently stated errors.
Americans are divided on whether AI will make it easier or harder to find accurate information about the 2024 election. Roughly 40% of respondents think AI will make finding factual information harder, while another 40% say it will make the task neither easier nor harder. Only 16% believe AI will make it easier to find accurate election details.
Griffin Ryan, a 21-year-old student at Tulane University in Louisiana, said AI chatbots don't play a significant role in how information about candidates or voting circulates on campus. He noted that the tools can be coaxed into echoing a user's own biases, distorting the information they present. Ryan, a Democrat, said he relies on conventional news outlets such as CNN, the BBC, NPR, The New York Times, and The Wall Street Journal.
As for misinformation in the upcoming election, Ryan's concerns center on AI-generated deepfakes and AI-driven social media bot accounts that could skew voters' perspectives. The survey found that only a small share of Americans — 8% — believe results produced by AI chatbots such as OpenAI's ChatGPT or Anthropic's Claude are always or often based on factual information.
Similarly, only 12% of respondents say the outputs of AI-enabled search engines such as Bing or Google are always or often factual. There is already evidence of attempts to manipulate voters with AI deepfakes in recent U.S. elections: AI-generated robocalls mimicking President Biden's voice urged New Hampshire voters not to participate in the state's January primary.
More often, AI tools have been used to fabricate images of prominent candidates in service of negative narratives, including portraits of Vice President Kamala Harris in communist garb and of former President Trump being detained.
Ryan said he has elderly relatives who were taken in by false claims about COVID-19 vaccines spread on Facebook during the pandemic, and he worries they are similarly vulnerable to false or misleading information during the election cycle.
Bevellie Harris, a senior citizen and Democrat in Bakersfield, California, said she prefers to get election details from official government channels, such as the voter guide that arrives by mail before each election, which she finds ‘more informative.’
Harris added that she seeks out candidates’ advertisements to hear their positions in their own words. Notably, she did not mention AI tools as an information source — echoing a widespread sentiment that AI, however powerful, is not yet a trusted channel for information as critical as election matters, especially when it can be manipulated by those intent on deceiving.