Insights


U.S. National Institute of Standards and Technology Releases AI Risk Management Framework

The National Institute of Standards and Technology ("NIST") has released its AI Risk Management Framework ("AI RMF") as a resource intended to help individuals, organizations, and society identify and manage risks associated with artificial intelligence ("AI").

On January 26, 2023, NIST released the first version of the AI RMF, along with the NIST AI RMF Playbook, the AI RMF Explainer Video, the AI RMF Roadmap, the AI RMF Crosswalk, and various Perspectives. NIST reports that the framework is intended to help individuals, organizations, and society better manage risks associated with AI.

The AI RMF seeks to promote trustworthy and responsible AI by assisting organizations in their design, development, deployment, and use of AI systems, taking into account both AI's potential to drive positive scientific advancement and the associated risks that could negatively affect individuals, groups, society, and the planet.

The AI RMF is organized into sections that provide practical guidance and highlight often-overlooked considerations for companies developing AI. For example, the "Framing Risk" section includes a discussion of risk prioritization that explains how unrealistic expectations can cause an organization to allocate resources inefficiently when it does not know how to assess and properly prioritize risks. Another example is the framework's list of characteristics of a trustworthy AI system: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair. NIST takes the position that detailing these characteristics will give companies a basis for developing or enhancing AI trustworthiness to reduce risk.

The AI RMF then provides detailed descriptions of its four functions: Govern, Map, Measure, and Manage. These functions are identified as ways to help companies address AI risks in practice. "Govern" refers to the structures, systems, processes, and teams that help an organization develop a purpose-driven culture focused on risk understanding and management. "Map" is intended to enhance a company's ability to identify risks and their broader contributing factors. "Measure" is meant to analyze, assess, benchmark, and monitor risk and related impacts. "Manage" refers to the process of regularly allocating risk-management resources to mapped and measured risks.

In all, the AI RMF provides AI users with a voluntary methodology for evaluating the adoption of AI systems and for overseeing AI systems already deployed.

Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.