
NIST Releases Its Artificial Intelligence Risk Management Framework (AI RMF)

On January 26, 2023, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework (AI RMF). The AI RMF is intended to provide a resource to organizations designing, developing, deploying, or using AI systems to manage risks and promote trustworthy and responsible development and use of AI systems. Although compliance with the AI RMF is voluntary, regulators are increasingly scrutinizing AI, and the framework offers practical guidance to companies seeking to manage AI risks.

The AI RMF begins with a discussion of the harms that AI risk management systems should seek to address, which include:

  • harms to people, such as harm to individual civil liberties, safety, or economic opportunity; harm to distinct populations or subgroups in the form of discrimination against protected classes; harms to society, including harms to democratic participation or educational access;
  • harms to an organization’s business operations, security, or reputation; and
  • harms to ecosystems, such as the global financial system, supply chain, natural resources, environment, and planet.

The AI RMF identifies the following characteristics of trustworthy AI systems:

  • Validity and Reliability. Accuracy and robustness of an AI system contribute to its validity. AI risk management efforts should prioritize the minimization of potential negative impacts and may need to include human intervention in cases where the AI system cannot detect or correct errors.
  • Safety. AI systems should not cause physical or psychological harm or lead to a state in which human life, health, property, or the environment is endangered.
  • Security and Resilience. Resilience is about the ability to return to normal function after an adverse or unexpected event, while security looks to the ability to avoid, protect against, or recover from an attack.
  • Accountability and Transparency. Developers of AI systems should test different types of transparency tools.
  • Explainability and Interpretability. Explainable and interpretable AI systems provide information that helps end users understand the purposes and potential impact of an AI system.
  • Privacy. Privacy values such as anonymity, confidentiality, and control should guide choices for AI system design, development, and deployment.
  • Fairness with Harmful Bias Managed. Bias is broader than demographic balance and data representativeness. For example, systems whose predictions are roughly balanced across demographic groups may still be inaccessible to individuals with disabilities or to those affected by the digital divide. The AI RMF divides bias into three categories: systemic bias; computational and statistical bias, which can result from systematic errors due to, for example, nonrepresentative samples; and human-cognitive bias, such as bias in how an individual or group perceives AI system information when making a decision or filling in missing information. A simple quantitative probe for the second category appears in the sketch after this list.
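
By way of illustration only, a quantitative check for computational and statistical bias might resemble the following Python sketch, which computes per-group selection rates and compares them under the “four-fifths” rule sometimes used in employment contexts. The AI RMF does not prescribe this metric; the group labels, sample data, and 0.8 threshold are assumptions for the example.

    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, selected) pairs; selected is a bool."""
        counts = defaultdict(lambda: [0, 0])  # group -> [selected_count, total]
        for group, selected in outcomes:
            counts[group][0] += int(selected)
            counts[group][1] += 1
        return {g: sel / total for g, (sel, total) in counts.items()}

    def demographic_parity_ratio(outcomes):
        """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
        rates = selection_rates(outcomes)
        return min(rates.values()) / max(rates.values())

    # Hypothetical data: flag for human review when the ratio falls below 0.8.
    data = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
    ratio = demographic_parity_ratio(data)
    print(f"parity ratio = {ratio:.2f}; review needed: {ratio < 0.8}")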

These characteristics are also set forth in other AI legal frameworks that are being developed across the globe, such as the EU’s draft AI Act.

Risk Management Through the AI RMF Core

As an integral part of the framework, NIST outlines four core functions to help companies identify practical steps to manage AI risk and to ensure their AI systems exhibit the characteristics of trustworthy AI described above:

  • Govern. Organizations must cultivate a risk management culture across the lifecycle of AI systems, including by implementing appropriate structures, policies, and processes. Risk management must be a priority for senior leadership.
  • Map. The “map” function establishes the context for framing risks related to an AI system. Under this function, organizations are encouraged to categorize their AI systems; establish goals, costs, and benefits compared to benchmarks; map risks and benefits for all components of the AI system; and examine impacts to individuals, groups, communities, organizations, and society.
  • Measure. Using quantitative and qualitative risk assessment methods, businesses should analyze AI systems for trustworthiness.
  • Manage. Identified risks must be managed, with higher-risk AI systems prioritized. Risk monitoring should be an iterative process, and post-deployment monitoring is crucial because new and unforeseen risks can emerge. A minimal illustration of the map-measure-manage flow follows this list.
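
To make the core functions concrete, the following Python sketch models a minimal AI risk register in the spirit of the map, measure, and manage functions. The field names, the 1-to-5 scales, and the likelihood-times-impact prioritization rule are illustrative assumptions, not NIST requirements.

    from dataclasses import dataclass

    @dataclass
    class AIRisk:
        system: str           # "map": the AI system or component at issue
        description: str      # "map": the identified risk and affected parties
        likelihood: int       # "measure": 1 (rare) to 5 (near certain)
        impact: int           # "measure": 1 (negligible) to 5 (severe)
        mitigation: str = ""  # "manage": planned or implemented response

        @property
        def score(self) -> int:
            # Simple severity score; real programs may weight factors differently.
            return self.likelihood * self.impact

    def prioritize(risks):
        """Manage: address higher-scoring risks first."""
        return sorted(risks, key=lambda r: r.score, reverse=True)

    register = [
        AIRisk("resume screener", "disparate impact on a protected class", 3, 5),
        AIRisk("chat assistant", "inadvertent disclosure of personal data", 2, 4),
    ]
    for r in prioritize(register):
        print(f"[score {r.score:>2}] {r.system}: {r.description}")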

NIST highlights that governance structures to map, measure, and manage risk should be “continuous, timely, and performed throughout” the lifecycle of creating, implementing, deploying, and monitoring an algorithm, but the specific implementation of these functions is intended to be adapted to each business model.

For more information on implementing NIST’s AI RMF or otherwise implementing an AI risk management program, please contact Laura De Boel, Maneesha Mithal, or any other attorney from Wilson Sonsini’s privacy and cybersecurity practice or artificial intelligence and machine learning practice.

Stacy Okoro contributed to the preparation of this post.
