The Biden Administration’s Blueprint for an AI Bill of Rights Envisions New Obligations in an Increasingly Automated World


The White House’s Office of Science and Technology Policy has released a framework of five principles known as the Blueprint for an AI Bill of Rights.  The principles are intended to guide the design, use, and deployment of automated systems and artificial intelligence for greater public protection.  The Blueprint defines automated systems broadly as any system, software, or process that uses computation to determine outcomes, make or aid decisions, inform implementation, collect data, or otherwise interact with individuals or communities.

The framework is intended to apply to (1) automated systems (2) that have the potential to impact the public’s rights, opportunities, or access to critical resources or services.  The White House specifically indicated that the framework should apply to equal opportunities in housing, credit, employment, and financial services.  While the Blueprint does not yet create new requirements for developers, designers, and deployers of automated systems, it may overlap with existing laws, such as civil rights laws and protections against discrimination.  “Deployer,” as used in the framework, appears to mean any entity that deploys or uses an automated system, such as an AI interface or an automated calling system.  The Blueprint may also signal executive orders and regulations to come.

The framework’s principles consist of:

1. Safe and Effective Systems

The first principle provides that the public should be protected from unsafe or ineffective systems.  To that end:

  • All automated systems should be developed with consultation from diverse communities to identify risks, concerns, and impacts.
  • The systems should undergo pre-deployment testing.
  • Design should focus on preventing issues such as irrelevant data usage.
  • Developers should use independent evaluation and reporting to confirm the safety and effectiveness of systems and to mitigate potential harms.

2. Algorithmic Discrimination Protections

The second principle is that automated systems should be used and designed in an equitable way to prevent algorithmic discrimination.  For instance, measures should be taken to prevent unfavorable outcomes based on an individual’s:

  • race,
  • color,
  • ethnicity,
  • sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation),
  • religion,
  • age,
  • national origin,
  • disability,
  • veteran status,
  • genetic information, or
  • any other classification protected by law.

Algorithmic discrimination is defined as unjustified different treatment that disfavors people based on any of these characteristics.  The White House also called for ongoing assessments, such as algorithmic impact assessments, that use representative data and guard against proxies for demographic features.  Results of these assessments should be made available to the public when possible.

3. Data Privacy

The third principle calls for protection against abusive data practices.  For instance, the White House stated that the public has a reasonable expectation of privacy and that data should be used strictly in the context in which it was collected.  System deployers should seek permission for the collection, use, access, transfer, and deletion of data to the greatest extent possible.  All consent requests should be brief, in plain language, and offer choices for specific contexts of use.

4. Notice and Explanation

The fourth principle calls for regular, up-to-date notice of how automated systems are being used and how they impact the public.  It is unclear what methods of notice will be required.

5. Human Alternatives, Consideration, and Fallback

The fifth principle provides that consumers should be able to opt out of automated systems and have access to a person who can quickly remedy issues.  Reasonable expectations should be considered to determine when a human alternative must be provided, with a focus on protecting the public from harmful impacts.

Reporting that includes a description of human governance processes, accessibility, outcomes, and effectiveness should be made publicly available when possible.

