How to use the NIST CSF and AI RMF to address AI risks
Companies are increasingly focused on how they can use AI but are also worried about their exposure to AI-fueled cybersecurity risks. Two NIST frameworks can help.
NIST provides a stockpile of resources aimed at helping CISOs and security managers safeguard their technologies. Among them, the NIST Cybersecurity Framework and NIST Artificial Intelligence Risk Management Framework both focus on cybersecurity risks targeting AI systems. While they share some commonalities, they also have key differences.
Let's take a look at each document and examine how to use NIST frameworks for AI.
What is the NIST CSF?
The NIST Cybersecurity Framework (CSF), previously known as the Framework for Improving Critical Infrastructure Cybersecurity, is the de facto standard for cybersecurity risk management. Originating from Executive Order 13636 in 2013, NIST collaboratively created the CSF as a clear and concise approach to organize and communicate cybersecurity risk to executive leadership.
Released in 2014, the initial iteration of the CSF was a flexible and repeatable tool to help organizations of all types and sizes manage cybersecurity using the following functions:
- Identify.
- Protect.
- Detect.
- Respond.
- Recover.
The CSF 2.0, updated in 2024, added a sixth function -- govern -- to the guide. The aim is to give organizations a way to set up governance, risk and compliance (GRC) capabilities that make risk management a repeatable and measurable process from the top down.
What is the AI RMF?
NIST released the AI Risk Management Framework (AI RMF) in 2023 to, in part, "cultivate the public's trust in the design, development, use and evaluation of AI technologies and systems."
The AI RMF uses the following four functions to help CISOs and security managers organize and communicate about AI risk:
- Govern.
- Map.
- Measure.
- Manage.
These functions aim to establish GRC capabilities within an organization as it relates to AI systems.
Although the CSF and AI RMF have similar goals, their scopes differ slightly. The AI RMF focuses on companies that develop AI software and, as such, is geared to the design, development, deployment, testing, evaluation, verification and validation of AI systems.
Most organizations, however, are not software developers; rather, they use AI as a tool to become more effective or efficient. Those organizations must therefore take a different approach to implementing the AI RMF than they do with the CSF. That's not necessarily bad news: both frameworks were designed to be flexible in their implementation while still providing a solid foundation for managing risk.
How to use the two frameworks together
The clear intersection point of the CSF and the AI RMF is their respective govern functions. Many organizations try to implement every category and subcategory across both frameworks to manage risks from a principled perspective. For well-resourced organizations with dedicated staff, such a goal is achievable. But many organizations have tight budgets and need a leaner way to implement the two frameworks together.
A simple solution for CISOs and security managers is to start with a small committee of current employees that discusses technology risk on a recurring basis. This committee can use simple templates to identify, assess and manage risks, and a small, diverse team brings valuable perspective to these critical risk decisions. The committee should pay particular attention to AI's distinct cybersecurity risks, among them deepfakes, data leaks through AI prompts and AI hallucinations.
Once the risks are identified and analyzed for responses, take stock of the AI systems the organization has or uses. These include AI assistants and generative AI systems such as ChatGPT and DALL-E. Use an employee survey, or analyze data from the network monitoring system, to determine which systems are in use. Compile a list of these systems, and use it to inform the next step.
Next, align the AI systems to the AI risks identified. This can be a simple spreadsheet that enables the organization to manage risks and assets. From there, decide what actions to take in order to mitigate the risks to the assets. This step depends on the context and risk disposition of the organization. A good place to start is to outline policies governing how employees use and interact with AI systems. Training and awareness can help reduce risk.
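The spreadsheet described above can be prototyped in a few lines of code. The following sketch builds an asset-by-risk matrix and emits it as CSV; the risk names, system names and their mappings are purely illustrative placeholders, since the real entries come from the committee's own risk discussions and the organization's asset inventory.

```python
import csv
from io import StringIO

# Hypothetical example risks and AI systems -- substitute the committee's
# actual findings and the organization's actual asset list.
ai_risks = ["Deepfakes", "Data leakage via prompts", "AI hallucinations"]
ai_assets = ["ChatGPT", "DALL-E", "Meeting assistant"]

# Map each asset to the risks it is exposed to (a committee judgment call).
risk_register = {
    "ChatGPT": ["Data leakage via prompts", "AI hallucinations"],
    "DALL-E": ["Deepfakes"],
    "Meeting assistant": ["Data leakage via prompts"],
}

def build_matrix(assets, risks, register):
    """Return rows of an asset-by-risk matrix; 'X' marks an exposure."""
    rows = [["Asset"] + risks]
    for asset in assets:
        exposures = register.get(asset, [])
        rows.append([asset] + ["X" if r in exposures else "" for r in risks])
    return rows

rows = build_matrix(ai_assets, ai_risks, risk_register)
buf = StringIO()
csv.writer(buf).writerows(rows)  # in practice, write to a shared spreadsheet file
print(buf.getvalue())
```

Each "X" then becomes a prompt for the committee to decide on a mitigation, such as a usage policy or targeted training, for that asset-risk pair.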
The NIST CSF and AI RMF are great resources for organizing and communicating a technology risk portfolio. Using these NIST frameworks for AI together can appear daunting, given their size and scope. Yet, given the flexible nature of the two, it's doable with a small team of dedicated professionals. Use this team to identify risks, catalog assets and decide how to move forward with a strategy that best fits the organization's unique risk context.
Matthew Smith is a virtual CISO and management consultant specializing in cybersecurity risk management and AI.