Businesses aren't the only ones eager to take advantage of powerful new artificial intelligence tools: the federal government is focused on AI's potential for U.S. Department of Defense operations and U.S. technological competitiveness.
With the development of large language models like OpenAI's ChatGPT, U.S. senators pointed to the advantages of exploring AI tools in weapons systems and across DOD processes to ensure national security and economic competitiveness with foreign adversaries.
Breakthroughs in AI are already transforming the military's capabilities and reshaping the character of warfare, particularly in the cyber domain, said Sen. Mike Rounds, R-S.D., ranking member of the U.S. Senate Subcommittee on Cybersecurity, during a committee hearing on Wednesday. Rounds also cautioned that, while AI will benefit the U.S., it benefits foreign adversaries as well; China, too, has made AI central to its future military strategy.
The Biden administration has already taken action to stop advanced technologies from reaching countries like China by implementing export controls on AI chips in October 2022, followed by export controls on semiconductor manufacturing equipment.
"Mitigating adversarial AI will be key to winning the race for global AI leadership and securing the United States' technological dominance in this important field," Rounds said during the hearing.
While senators during Wednesday's hearing noted that taking advantage of AI includes addressing AI concerns through potential policies and regulations, Rounds took issue with calls to pause AI development over societal risks outlined in an open letter signed by Elon Musk and others earlier this month.
"The greater risk is taking a pause while our competitors leap ahead of us in this field," Rounds said during the hearing. "AI will be the determining factor in all future great power competition, and I don't believe now is the time for the United States to take a break on developing our AI capabilities."
Pausing AI development would be a setback for the U.S.
Pausing AI development in the U.S. would also mean pausing work on developing ethical implementation standards and federal policies for constantly evolving iterations of the technology, said Jason Matheny, president and CEO of the nonprofit research firm RAND Corporation and a commissioner of the National Security Commission on Artificial Intelligence. Matheny spoke as a witness during Wednesday's hearing.
Matheny said as the U.S. seeks to research and understand AI, it could be a leader in setting much-needed guardrails around AI use. The U.S. Department of Commerce recently issued a request for comment on AI accountability measures.
Pausing AI development, as Musk has called for, would hinder businesses and government from understanding what sort of guardrails the technology needs.
"It's unclear how we would use that pause and whether we could use it effectively in a way that allows democracies to lead the norms and standards around AI and its implications for society," Matheny said during the hearing.
Shyam Sankar, chief technology officer at data analytics firm Palantir Technologies, and Josh Lospinoso, co-founder and CEO of cybersecurity firm Shift5, both witnesses during Wednesday's hearing, echoed Matheny's concerns about pausing AI development.
"If we did that, our adversaries would continue development, and we end up ceding leadership on ethics and norms on these matters if we're not continuing to develop," Lospinoso said.
Steps DOD must take before using AI tools
Though AI does hold the potential to transform industries and ensure future economic competitiveness, it still needs federal regulation, Matheny said. Integrating AI into U.S. national security plans and agencies like the DOD poses special challenges, including the risk that the rapidly advancing technology will outpace organizational and policy reforms within the federal government.
For the DOD to begin taking advantage of AI tools, Matheny said, its cybersecurity strategies first need to include tracking developments in AI that could affect cyber defense and offense. Second, the department must ensure strong export controls on leading-edge AI chips and require companies to report the development and distribution of large computing clusters or trained models above a specific size.
The Pentagon also needs to include language around customer screening before training AI models in contracts with both cloud computing providers and AI developers, Matheny added. The DOD should also focus on creating a generalizable approach to evaluate the security of AI systems before they're deployed.
Lospinoso said the DOD also needs to focus on data: without data that has been properly prepared and organized, AI systems will be useless.
"Most major [U.S. military] weapons systems are not AI-ready," he said. "Without high quality data, we cannot build effective AI systems. Unfortunately, today, the DOD struggles to liberate even the simplest data streams from our weapons systems. These machines are talking, but the DOD is unable to hear them. We cannot employ AI-enabled technologies without great data."
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.