
New NIST Security Guidelines for AI Systems

On January 26, 2023, the National Institute of Standards and Technology ("NIST") published the first version of its AI Risk Management Framework ("AI RMF"). AI RMF was developed to help companies that develop and use AI build security into the development life cycle of AI systems, that is, engineered or machine-based systems that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.

Although AI RMF is not binding law, the framework will likely influence the approach that US regulators take in AI security-related investigations. As with the NIST Cybersecurity Framework in the security context generally, AI RMF is likely to become the baseline for assessing security risks in AI systems.

AI RMF has four core functions (govern, map, measure, and manage) designed to address and mitigate the harms uniquely posed by AI systems, including:  (1) harm to people, whether to individuals (such as civil liberties, safety, and economic opportunity), to groups/communities (such as discrimination against a population sub-group), or to society (such as harm to democratic participation or educational access); (2) harm to organizations (such as disruption of business operations, security breaches/monetary loss, and harm to reputation); and (3) harm to ecosystems (such as interconnected elements and resources, global financial systems, supply chains, or natural resources).

Govern

The core purpose of the govern function is to cultivate a culture of risk management within companies developing, acquiring, or using AI systems.  Companies must implement processes and policies that anticipate, identify, and manage the risks posed by AI systems.  The technical aspects of AI system design and development must be mapped to organizational values and principles, and this mapping should be continually updated throughout the life of the AI system.

Companies must also implement accountability structures that clearly delineate roles, responsibilities, and lines of communication regarding AI risks, and must provide frequent training for employees, consultants, and partners/vendors.  Executive leadership should be responsible for decisions relating to the risks associated with AI systems.

Companies must also implement policies and procedures to address AI risks relating to third-party software, data and related supply chain issues.

Map

The map function gives individuals in different divisions of a company full visibility into all parts of the AI system, and the information gathered by implementing it enables the company to better identify risks and prevent harm.

The first step in the mapping process is to understand the context in which the AI system is proposed to be used: the benefits and the risks to individuals, communities, organizations, and society are weighed against the company's goals and business value.  Organizational risk tolerance is determined and documented.  High-level design decisions about the AI system are then made to minimize adverse risks.

The company then categorizes the AI system and maps the risks and benefits of all its components.  The company must carefully investigate and document the likelihood and magnitude of both the beneficial and the harmful impacts of the AI system, as sketched below.
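By way of illustration only, the sketch below shows one minimal, machine-readable form this documentation step might take: a risk register that records each mapped impact with an estimated likelihood and magnitude.  The field names, the 1-to-5 magnitude scale, and the sample entries are our own assumptions and are not prescribed by AI RMF.

    from dataclasses import dataclass

    # Hypothetical risk register entry for the map function.  The fields and
    # the 1-5 magnitude scale are illustrative assumptions, not AI RMF terms.
    @dataclass
    class ImpactEntry:
        component: str      # part of the AI system (e.g., training data, model)
        description: str    # the beneficial or harmful impact being mapped
        kind: str           # "benefit" or "harm"
        likelihood: float   # estimated probability, 0.0-1.0
        magnitude: int      # estimated severity or value on a 1-5 scale

        @property
        def score(self) -> float:
            # A common risk-matrix convention: expected impact =
            # likelihood multiplied by magnitude.
            return self.likelihood * self.magnitude

    register = [
        ImpactEntry("training data", "under-representation of a sub-group", "harm", 0.4, 4),
        ImpactEntry("model output", "faster decisions for applicants", "benefit", 0.9, 3),
    ]

    # Surface the highest-scoring harms for review and documentation.
    harms = sorted((e for e in register if e.kind == "harm"),
                   key=lambda e: e.score, reverse=True)
    for entry in harms:
        print(f"{entry.component}: {entry.description} (score {entry.score:.1f})")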

AI RMF further requires companies to prioritize diversity, equity, inclusion, and accessibility processes when mapping and measuring AI system risks.  Decisions about AI risks must be made by a team that is diverse across demographics, disciplines, experience, expertise, and backgrounds.

Measure

The measure function is the bridge between the map function and the manage function: it uses quantitative and qualitative tools to analyze, assess, benchmark, and monitor the AI risks identified in the map function, which in turn informs the manage function.  The measure function should include, without limitation, software testing, performance assessment methodologies, comparisons to performance benchmarks, and formalized reporting and documentation of results.  An independent review should be implemented to minimize the influence of internal biases or potential conflicts of interest.

First, the company determines which methods and metrics are appropriate for evaluating the AI system, and then uses them to evaluate the AI system's trustworthiness.  The company must put mechanisms in place for tracking AI risks over time, as in the sketch below.  Finally, the company assesses the efficacy of the measurement process and the data gathered through it.
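Again purely as an illustration, the sketch below tracks a single quantitative metric over time against a documented benchmark and flags results that should trigger review.  The metric, threshold, dates, and values are hypothetical and are not drawn from AI RMF.

    from datetime import date

    # Hypothetical tracking of one quantitative metric against a benchmark
    # documented during mapping.  All values below are illustrative.
    BENCHMARK = 0.80  # assumed minimum acceptable accuracy

    history = [
        (date(2023, 1, 1), 0.86),
        (date(2023, 2, 1), 0.83),
        (date(2023, 3, 1), 0.78),  # degradation that should trigger review
    ]

    for measured_on, accuracy in history:
        status = "OK" if accuracy >= BENCHMARK else "BELOW BENCHMARK - escalate"
        # Formalized reporting: each evaluation is logged with its outcome.
        print(f"{measured_on}: accuracy={accuracy:.2f} [{status}]")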

Manage

Companies use the information obtained in the measure function to decrease the likelihood of AI system failures and negative impacts.  First, companies must prioritize risks based on potential adverse impacts and then develop and deploy strategies to maximize AI benefits and minimize those adverse risks (a simple prioritization sketch follows below).  Companies must regularly monitor AI risks arising from third-party resources and pre-trained models used in development.  After the risk management strategies are deployed, measurable activities for continual improvement must be integrated into AI system updates, and incidents and errors must be communicated to relevant parties, including impacted communities.  Processes for tracking, responding to, and recovering from incidents must be followed and documented.
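As a final illustration, the sketch below ranks hypothetical risks by score and assigns each an illustrative response.  The scores, the 2.0 cutoff, and the treatment rule are our own assumptions, not AI RMF requirements.

    # Hypothetical prioritization step of the manage function: risks surfaced
    # by the measure function are ranked by score and assigned a response.
    risks = [
        ("biased outcomes for a sub-group", 3.2),
        ("model drift after retraining", 1.5),
        ("third-party model license gap", 0.6),
    ]

    def treatment(score: float) -> str:
        # Simple illustrative rule: mitigate high risks, monitor the rest.
        return "mitigate now" if score >= 2.0 else "monitor and document"

    for description, score in sorted(risks, key=lambda r: r[1], reverse=True):
        print(f"{description}: score {score:.1f} -> {treatment(score)}")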

Implementation of AI RMF

AI RMF calls for the four core functions to be carried out throughout the AI system life cycle in a manner that reflects diverse and multidisciplinary perspectives, including from third parties outside the company.  A diverse team promotes open sharing of ideas, which in turn helps to more rapidly reveal problems and identify risks.

For more information about AI RMF and how to comply, please contact: Helen Christakos.