A U.S. presidential candidate has capitalized on the momentum behind artificial intelligence, offering access to an AI chatbot that can address policy questions on his behalf. However, the risks AI poses to both the candidate and the 2024 election are vast.
Should the U.S. regulate AI? What about regulating big tech? What should the U.S. approach to climate policy entail? These are examples of questions GOP presidential candidate Asa Hutchinson's AI chatbot can address through a natural language interface.
Hutchinson, former governor of Arkansas, unveiled the tool in September before the second Republican presidential candidate debate. He failed to qualify for the debate, saying he fell short of the polling requirement for participation. Since he wouldn't be able to answer questions onstage, he asked his team to build the Ask Asa AI chatbot so the public could pose questions to it instead.
The Ask Asa tool is preprogrammed with Hutchinson's past remarks, interviews and speeches; generates responses to policy questions; and provides information on his policy stances. The website does not disclose what technology powers the interface. It includes a disclaimer that the tool is "meant for informational and educational purposes only" and that responses aren't attributable verbatim to Hutchinson. His campaign team did not respond to a request for comment.
Hutchinson's use of artificial intelligence comes amid a growing global debate about AI policy and how governments should regulate the technology to protect consumers from harm. Large language models in particular, such as those created by generative AI model developer OpenAI, consume vast amounts of consumer data and energy to produce their outputs.
"By learning more about AI and utilizing it in our campaign, we are ensuring that my vision for America can be accessed by as many Americans as possible," Hutchinson wrote in a press release announcing the Ask Asa AI chatbot.
Hutchinson is the only presidential candidate to unveil such a tool. While it does provide a glimpse into his policy stances, it's a rare positive example of AI use during a campaign year that will be threatened by AI-generated disinformation, said Darrell West, a senior fellow at the Brookings Institution Center for Technology Innovation.
Hutchinson's AI use gets mixed reviews
West said he's impressed with the thoroughness of the Ask Asa chatbot's responses. Additionally, West said Hutchinson's experience with the AI chatbot and understanding the positive aspects of AI will be important from a policy standpoint, particularly as AI legislation is introduced in Congress.
"This is a very constructive use of AI," he said. "There are many problematic applications that are taking place in the campaign, but this is one that tries to put accurate information out to the general public. I applaud the governor for coming up with this tool."
David Karpf, associate professor at the George Washington University School of Media and Public Affairs, however, said he is not impressed with Hutchinson's AI chatbot. Karpf called Ask Asa a novelty attempting to garner public interest after Hutchinson failed to qualify for the debate.
Karpf also said the tool potentially poses risks to Hutchinson himself.
"If I was advising a candidate, I would tell them, 'Do not build a chatbot that people can ask a bunch of questions of and then screenshot the answer,'" he said. "While there may be some earnest voters who go there and ask the chatbot questions, the bigger use case [will] be opposition researchers getting it to say something in your voice."
Risks AI poses to 2024 election
AI is going to propel disinformation during the 2024 presidential campaign, and it's a bipartisan issue both Democrats and Republicans should be concerned about, West said.
The main concern is that false images, videos and audio generated by AI, known as deepfakes, will target presidential candidates and affect voter perceptions. West said there's a risk that the election could be decided based on fake information.
"Disinformation is going to be a huge problem in the campaign, and AI brings very powerful tools down to the level of almost anyone," West said. "There's a risk we're going to see a tsunami of disinformation in the campaign, and it's going to be hard for people to distinguish the real from the fake."
West said social media platforms in particular need to "take content moderation seriously," but he noted that some platforms have been moving in the opposite direction. X, formerly known as Twitter, laid off its trust and safety divisions last year, while Meta and Google made cuts to content moderation staff. Indeed, GWU's Karpf said that if the public and Congress are expecting social media platforms to manage the likely increase in disinformation during the campaign, that's not going to happen.
West said he believes AI is going to make politics more "extreme, radical and polarized."
When asked about risks posed by AI, the Ask Asa tool listed data privacy, the potential to displace jobs, bias in decision-making and lack of accountability as some of Hutchinson's top concerns. It did not list disinformation as a risk.
"While AI presents many exciting opportunities, we must be mindful of these risks and work towards mitigating them," the Ask Asa tool stated. "By investing in education and retraining programs, promoting ethical AI development and establishing clear guidelines for data privacy and security, we can create a future where AI benefits everyone."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.