The National Institute of Standards and Technology is working on guidance for businesses detailing safe, responsible measures for building, deploying and testing generative artificial intelligence tools.
U.S. Secretary of Commerce Gina Raimondo announced in June that NIST is launching a new public working group on AI focused specifically on generative AI tools that create photos, videos, text and other content, such as OpenAI's ChatGPT. The group will build on the NIST AI Risk Management Framework (RMF), released in January, to address the risks and opportunities associated with generative AI.
The NIST AI RMF serves as broad guidance to actors across the artificial intelligence lifecycle -- from development to procurement to deployment -- on identifying and managing AI risks and promoting responsible AI use.
In a Q&A, NIST's Elham Tabassi, chief of staff in the Information Technology Laboratory and an author of the NIST AI RMF, described the 18-month process behind creating the AI RMF and the challenges of creating guidance specifically geared toward generative AI.
Editor's note: The following was edited for length and clarity.
Who was involved with creating the NIST AI RMF?
Elham Tabassi: We are a technical agency. A lot of us, including myself, are engineers and computer scientists. We made a purposeful and intentional effort to reach out to psychologists, sociologists and others thinking about the impact of these systems. One of the best listening sessions we did was with the students at Howard University. Some of them were building the technology and, at the same time, looking at how it works for them or not.
How crucial was it to create the NIST AI RMF for businesses?
Tabassi: Around the globe, there is a consensus that AI technologies have tremendous beneficial use. At the same time, they come with risk -- sometimes high risk. There are also many policy discussions about what can be done to put those safeguards in. Everybody is working on this.
[Companies claim], 'We know how to do this. We know how to do the testing to ensure our products are safe and our systems are not biased.' But we don't know what they are doing. There isn't any standardized, interoperable way to do the testing.
It's good for the laws and policies to say AI needs to be safe and trustworthy; it's good that the AI actors and stakeholders are doing something. What we need is more transparency on the things that are being done and advancing research for how to do these things in a scientifically valid way. At the end, it all becomes about building confidence and trust in the use of AI systems.
How are you building on the NIST AI RMF?
Tabassi: What we're going to do with this is provide more specific guidance for generative AI. One of the biggest things is [creating] a standardized way of verifying and validating the models before they come to market, for the people who are developing these systems and putting them out. There is a need for a more interoperable and standardized way of reporting on the testing they've done, to improve transparency and accountability. That's the first thing this public working group will do and will continue doing. To run evaluations and do the testing, you first need to learn what it is you want to test. Then you have to learn how to do the testing and then do the actual testing. That's exactly what we're trying to do.
What is the biggest challenge that remains ahead?
Tabassi: The biggest challenge is doing it fast and doing it right. We want the solutions to AI, the safeguards to AI, the measurement science and the methodologies to ensure AI systems are safe, fair and trustworthy. But we don't have that. We don't have the science and technology there.
NIST is known for its quantitative measurement and evaluations. In the evaluations and standards work we do, we get the system or algorithm out of its context of use, bring it into the lab, run some data, get the results out, do analysis and say something about accuracy. These things are not going to work for AI systems because AI systems are data, compute and algorithm in a complex relationship with the environment and humans. How to do that testing in the right context of use and environment and try to understand and measure the impact … we don't have much expertise on how to do this. The whole community doesn't have much expertise to do this. Those measurements and those evaluations were needed yesterday.
What is NIST's role in assisting policymakers who are working on AI regulations?
Tabassi: NIST is a non-regulatory agency, so we are agnostic on regulations and draft regulations. But our job is to provide technical contributions that can help with evidence-based policymaking. Regulations don't get into how safety is defined and what testing is to be done. We are trying to build the scientific underpinning for technically sound specifications, guidance or standards that become the subject of the regulations.
How much will guidance like this help businesses using generative AI?
Tabassi: If companies have easier and more science-backed methods to test their efforts, it helps them innovate and push AI advances forward in a responsible and trustworthy way.
Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.