
Former GOP candidate's AI chatbot fielded policy questions

Former presidential candidate Asa Hutchinson used an AI chatbot to provide insight into his policy stances, a development that both impresses and worries tech and media experts.

A former U.S. presidential candidate tried to capitalize on the momentum behind artificial intelligence by offering access to an AI chatbot that could address policy questions on his behalf. However, the risks AI poses to the 2024 election are vast.

Should the U.S. regulate AI? What about regulating big tech? What should the U.S. approach to climate policy entail? These are examples of questions former GOP presidential candidate Asa Hutchinson's now-defunct AI chatbot could address through a natural language interface. On Jan. 16, Hutchinson, former governor of Arkansas, suspended his campaign.

Hutchinson unveiled the tool in September ahead of the second Republican presidential candidate debate. He failed to qualify for the debate after falling short of the polling requirement for participation, he said. Since he wouldn't be able to answer questions onstage, he asked his team to build the Ask Asa AI chatbot so the public could still put questions to him.

The Ask Asa tool was preprogrammed with Hutchinson's past remarks, interviews and speeches; it generated responses to policy questions and provided information on his policy positions. The website did not say what technology powered the interface. It included a disclaimer that the tool was "meant for informational and educational purposes only" and that responses weren't attributable verbatim to Hutchinson. His campaign team did not respond to a request for comment for this story.
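
How the tool was built isn't disclosed on the campaign's site, but chatbots of this kind are typically assembled by grounding a large language model in a curated set of documents. The following is a minimal, hypothetical sketch rather than the campaign's actual implementation: it assumes an OpenAI-style chat API, a local folder of transcript files, naive keyword matching in place of production-grade retrieval, and a placeholder model name.

# Hypothetical sketch only: the Ask Asa campaign never disclosed its stack.
# Assumes an OpenAI-style chat API and a folder of speech/interview transcripts.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Load the candidate's past remarks, interviews and speeches (illustrative corpus).
corpus = [path.read_text() for path in Path("speeches").glob("*.txt")]

def answer_policy_question(question: str) -> str:
    """Ground the model's answer in the candidate's own past statements."""
    # Naive retrieval: keep up to three documents that share words with the question.
    keywords = set(question.lower().split())
    excerpts = [doc for doc in corpus if keywords & set(doc.lower().split())][:3]

    system_prompt = (
        "Answer policy questions using only the candidate's past statements below. "
        "Responses are informational and are not verbatim quotes.\n\n"
        + "\n---\n".join(excerpts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer_policy_question("Should the U.S. regulate AI?"))

A production version would typically swap the keyword filter for vector-based retrieval and surface the same kind of disclaimer the campaign's site carried.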

Hutchinson's use of artificial intelligence came amid a growing global debate about AI policy and how governments should regulate the technology to protect consumers from harm. Large language models in particular, such as those created by generative AI model developer OpenAI, consume vast amounts of consumer data and energy to produce their outputs.

"By learning more about AI and utilizing it in our campaign, we are ensuring that my vision for America can be accessed by as many Americans as possible," Hutchinson wrote in a press release announcing the Ask Asa AI chatbot.

Hutchinson was the only presidential candidate to unveil such a tool. It offered a glimpse into his policy stances and stood as a rare positive example of AI use in a campaign year that will be threatened by AI-generated disinformation, said Darrell West, a senior fellow at the Brookings Institution Center for Technology Innovation.

Former GOP presidential candidate Asa Hutchinson released an AI chatbot to answer the public's policy questions about his stances on AI regulation, climate, China and other topics.

Hutchinson's AI use gets mixed reviews

West said he was impressed with the thoroughness of the Ask Asa chatbot's responses. He added that Hutchinson's firsthand experience with the chatbot, and his grasp of AI's positive aspects, will be important from a policy standpoint, particularly as AI legislation is introduced in Congress.

"This is a very constructive use of AI," he said. "There are many problematic applications that are taking place in the campaign, but this is one that tries to put accurate information out to the general public. I applaud the governor for coming up with this tool."

David Karpf, an associate professor at the George Washington University School of Media and Public Affairs, was not impressed with Hutchinson's AI chatbot, however. Karpf called Ask Asa a novelty meant to garner public interest after Hutchinson failed to qualify for the debate.

Karpf also said the tool potentially posed risks to Hutchinson himself.

"If I was advising a candidate, I would tell them, 'Do not build a chatbot that people can ask a bunch of questions of and then screenshot the answer,'" he said. "While there may be some earnest voters who go there and ask the chatbot questions, the bigger use case [will] be opposition researchers getting it to say something in your voice."

Risks AI poses to 2024 election

AI is going to propel disinformation during the 2024 presidential campaign, and that's a bipartisan concern for Democrats and Republicans alike, West said.

The main concern is that AI-generated false images, videos and audio, known as deepfakes, will target presidential candidates and sway voter perceptions. West said there's a risk that the election could be decided based on fake information.


"Disinformation is going to be a huge problem in the campaign, and AI brings very powerful tools down to the level of almost anyone," West said. "There's a risk we're going to see a tsunami of disinformation in the campaign, and it's going to be hard for people to distinguish the real from the fake."

West said social media platforms in particular need to "take content moderation seriously," but he noted that some have been moving in the opposite direction. X, formerly known as Twitter, laid off its trust and safety teams last year, while Meta and Google cut content moderation staff. Indeed, GWU's Karpf said that if the public and Congress expect social media platforms to manage the likely increase in disinformation during the campaign, that's not going to happen.

West said he believes AI is going to make politics more "extreme, radical and polarized."

When asked about risks posed by AI, the Ask Asa tool listed data privacy, the potential to displace jobs, bias in decision-making and lack of accountability as some of Hutchinson's top concerns. It did not list disinformation as a risk.

"While AI presents many exciting opportunities, we must be mindful of these risks and work towards mitigating them," the Ask Asa tool stated. "By investing in education and retraining programs, promoting ethical AI development and establishing clear guidelines for data privacy and security, we can create a future where AI benefits everyone."

Editor's note: This story was originally published 11/20/23.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget Editorial, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.
