AI Bill of Rights: Everything You Need to Know


No longer futuristic or theoretical, artificial intelligence (AI) has quickly become a part of our personal and professional lives. From making everyday tasks faster to improving the speed of medical treatment to transforming the efficiency of legal practices, the benefits of AI are wide-ranging and growing all the time.

However, while AI is rapidly unlocking many advantages for people and businesses, powerful technology without boundaries can also bring harm—and AI is no exception. As the possibilities of AI continue to emerge, so too do potential ethical and legal risks like data privacy issues, reproduction or amplification of bias and discrimination, and pervasive activity tracking.

With all this in mind, the US has created a blueprint for an artificial intelligence bill of rights: a framework to help address the potential risks and opportunities of AI.

In October 2022, the White House Office of Science and Technology Policy (OSTP) published The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People. Also known as the “Blueprint for an AI Bill of Rights” or the “AI Bill of Rights,” this document outlines principles for the responsible development and implementation of AI systems. It aims to serve as a guide to help protect people from the potential threats of AI.

But what is in the AI Bill of Rights, and what does it mean for our society and the future of AI? In this post, we’ll cover the core principles of the AI Bill of Rights, as well as thoughts on the evolving landscape of AI regulation.

What is an AI Bill of Rights?

As outlined by the OSTP, the Blueprint for an AI Bill of Rights identifies five core principles and associated practices “that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.”

These core principles offer guidelines (as well as accompanying steps and examples of how to move the protections from principle into policy and practice) for making AI systems safer for users, more equitable, and more transparent.

The Blueprint for an AI Bill of Rights’ framework applies to systems that:

  1. Are automated.
  2. Have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.

Focused on people and their civil and human rights in relation to AI, the AI Bill of Rights was created as a response to the experiences of Americans. The insights informing the guidelines were garnered from a variety of parties, including researchers, technologists, advocates, journalists, and policymakers.

While the principles outlined in the Blueprint for an AI Bill of Rights are currently guidelines—not law—they offer a framework guiding the responsible use of AI and may provide insight into the potential direction of future AI regulation in the US.

Key principles of the AI Bill of Rights

The Blueprint for an AI Bill of Rights outlines a set of five principles (and associated practices) that can be applied to AI systems to help mitigate potential risk and harm to the public.

Considered together, the principles aim to enhance AI systems in areas including:

  • Transparency and explainability.
  • Accountability and responsibility.
  • Non-discrimination and fairness.

The five key principles of the AI Bill of Rights are:

1. Safe and effective systems

“You should be protected from unsafe or ineffective systems.”

The Safe and Effective Systems principle asserts that people deserve protection from unsafe or ineffective automated systems.

The Blueprint suggests that this principle can be followed by taking steps including:

  • Consulting with diverse communities, stakeholders, and domain experts to identify potential concerns, risks, and impacts.
  • Pre-deployment testing.
  • Ongoing monitoring (a simple sketch of what this can look like follows this list).
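
To make the ongoing monitoring practice above concrete, here’s a minimal sketch, in Python, of what one monitoring check might look like: it compares a deployed system’s recent accuracy against its pre-deployment baseline and raises an alert when performance drifts too far. The function name, data, and threshold are illustrative assumptions, not anything prescribed by the Blueprint.

    def monitor_accuracy(recent_predictions, recent_labels, baseline_accuracy, tolerance=0.05):
        """Alert if recent accuracy falls more than `tolerance` below the baseline."""
        correct = sum(p == y for p, y in zip(recent_predictions, recent_labels))
        recent_accuracy = correct / len(recent_labels)
        if recent_accuracy < baseline_accuracy - tolerance:
            return f"ALERT: accuracy fell to {recent_accuracy:.1%} (baseline {baseline_accuracy:.1%})"
        return f"OK: accuracy {recent_accuracy:.1%} is within tolerance of the baseline"

    # Example: a hypothetical system that was validated at 91% accuracy before deployment.
    print(monitor_accuracy([1, 0, 1, 1, 0, 0, 1, 0],
                           [1, 1, 1, 0, 0, 1, 1, 0],
                           baseline_accuracy=0.91))

In practice, teams would track many more signals (error rates by subgroup, input drift, user complaints), but the underlying idea is the same: deployment is not the finish line.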

2. Algorithmic discrimination protections

“You should not face discrimination by algorithms and systems should be used and designed in an equitable way.”

The Algorithmic Discrimination Protections principle says that AI systems should be designed to prevent algorithmic discrimination, which occurs when automated systems contribute to unjustified different treatment of, or unfavorable impacts on, certain people (for example, because the system was trained on biased data).

For example, one study found that an algorithm used to help predict which healthcare patients would need extra medical care relied on a variable correlated with race, a textbook case of algorithmic discrimination.

The Blueprint suggests that this principle can be followed by implementing proactive provisions that prioritize people’s civil rights and equality, such as:

  • Implementing equity assessments as part of the system design.
  • Using representative data.
  • Protecting against proxies for demographic features.
  • Ensuring accessibility for people with disabilities.
  • Conducting disparity testing and mitigation (a simple sketch follows this list).
  • Maintaining organizational oversight.
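
As a concrete illustration of the disparity testing mentioned above, here’s a minimal Python sketch that compares favorable-outcome rates across groups using the common “four-fifths” rule of thumb (no group’s rate should fall below 80% of the highest group’s rate). The groups, data, and threshold are illustrative assumptions; real disparity testing involves considerably more statistical care.

    def selection_rates(outcomes):
        """`outcomes` maps each group to a list of binary decisions (1 = favorable)."""
        return {group: sum(d) / len(d) for group, d in outcomes.items()}

    def passes_four_fifths(outcomes, threshold=0.8):
        """Flag groups whose favorable rate falls below 80% of the best group's rate."""
        rates = selection_rates(outcomes)
        best = max(rates.values())
        return {group: rate / best >= threshold for group, rate in rates.items()}

    outcomes = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable
    }
    print(passes_four_fifths(outcomes))  # {'group_a': True, 'group_b': False}

A failing check like the one for “group_b” wouldn’t prove discrimination on its own, but it would be a signal to investigate and mitigate before (or while) the system is in use.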

3. Data privacy

“You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.”

The Data Privacy principle asserts that people’s data privacy should be protected and respected. In addition to ensuring user consent for data collection and use, AI systems should be designed so that data privacy protections are included by default and there are safeguards in place against abusive data practices.

The Blueprint suggests that this principle can be followed by steps like:

  • Taking measures to ensure that data collection follows reasonable expectations.
  • Only collecting data that’s strictly necessary for the system’s specific context (a brief sketch follows this list).
  • Seeking user permission for the collection, use, access, transfer, and deletion of data, and respecting those decisions.
  • Using alternative privacy-by-design safeguards where obtaining consent isn’t feasible.
  • Ensuring consent requests for data collection are brief, easy to understand, and give users agency.
  • Using enhanced protections and restrictions for data and inferences pertaining to sensitive domains and for data related to youth.
  • Ensuring people and their communities are free from unchecked surveillance and that surveillance technologies have heightened oversight.
  • Not using continuous surveillance and monitoring in contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.
  • Providing, where possible, access to reporting confirming that data decisions have been respected.
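
To illustrate the data-minimization and consent ideas in the list above, here’s a minimal Python sketch of an intake routine that refuses to run without consent and refuses to collect fields beyond what its specific context strictly requires. The field names and consent flag are hypothetical, not drawn from the Blueprint.

    REQUIRED_FIELDS = {"name", "email"}  # the minimum needed for this hypothetical context

    def minimal_intake(submission, consent_given):
        """Collect data only with consent, and only the strictly necessary fields."""
        if not consent_given:
            raise PermissionError("No data collected: the user did not consent.")
        extra = set(submission) - REQUIRED_FIELDS
        if extra:
            raise ValueError(f"Refusing to collect unnecessary fields: {sorted(extra)}")
        return {field: submission[field] for field in REQUIRED_FIELDS}

    record = minimal_intake({"name": "A. Person", "email": "a@example.com"},
                            consent_given=True)

The design choice worth noting is that minimization is enforced in code rather than left to policy documents: a submission containing, say, a birthdate would be rejected outright instead of quietly stored.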

4. Notice and explanation

“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”

The Notice and Explanation principle says that automated and AI systems should provide clear, accessible, easy-to-understand, and timely notice that they are in use, along with explanations of the outcomes they produce.

The Blueprint suggests that this principle can be followed by using strategies like:

  • Providing generally accessible documentation, in plain language, describing the overall system (including how it works and how any automated component is used for actions or decision-making).
  • Ensuring notices are kept up-to-date and that people impacted by the system are notified of significant use case or key functionality changes.
  • Clearly explaining why and how a decision was made (a brief example follows this list).
  • Taking steps to ensure transparency and accountability.
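
As a brief example of what clear explanation can look like in practice, here’s a minimal Python sketch in which every automated decision is returned together with a notice that automation was used and a plain-language reason. The decision logic and wording are illustrative assumptions.

    AUTOMATION_NOTICE = "This outcome was produced with the help of an automated system."

    def decide_with_explanation(application):
        """Return a decision alongside a notice of automation and a plain-language reason."""
        if application["documents_complete"]:
            return {"decision": "approved",
                    "notice": AUTOMATION_NOTICE,
                    "reason": "All required documents were received and verified."}
        return {"decision": "needs human review",
                "notice": AUTOMATION_NOTICE,
                "reason": "One or more required documents were missing."}

    result = decide_with_explanation({"documents_complete": False})
    print(result["notice"], result["reason"])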

5. Human alternatives, consideration, and fallback

“You should be able to opt-out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

Finally, the principle of Human Alternatives, Consideration, and Fallback says that people should have the option of opting out of an automated system in favor of a human alternative, where appropriate (based on reasonable expectations for the given context). According to the AI Bill of Rights, there may be certain instances where a human or other alternative is required by law.

The Blueprint suggests that this general principle can be followed by measures such as:

  • Prioritizing accessibility and protecting people from harmful impacts.
  • Providing timely access to human consideration and remedy if an automated system fails, produces an error, or the user wants to appeal or contest the system’s impact on them (a brief sketch follows this list).
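
To make the fallback idea concrete, here’s a minimal Python sketch of a routing rule: the automated path is used only when the person hasn’t opted out and the system is sufficiently confident, and everything else goes to a human reviewer. The confidence threshold and names are illustrative assumptions.

    def route_case(opted_out, model_confidence, threshold=0.9):
        """Use automation only when the user hasn't opted out and the model is confident."""
        if opted_out or model_confidence < threshold:
            return "human_review"  # timely access to human consideration and remedy
        return "automated_decision"

    print(route_case(opted_out=False, model_confidence=0.62))  # human_review
    print(route_case(opted_out=True, model_confidence=0.99))   # human_review (user opted out)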

Considerations for enforcement and legislation of an AI Bill of Rights

While the Blueprint for an AI Bill of Rights provides a roadmap for the responsible use of automated systems and AI in the US, enforcing it can be challenging.

Specifically, because the AI Bill of Rights is a framework—not actual legislation—it is not legally binding or enforceable by law. So, while it offers ethical guidance for those developing and deploying AI tools and systems, there are no legal repercussions for failure to follow this guidance.

Still, while there is currently no comprehensive federal law restricting the use of AI or protecting citizens from its use in the US, there are other federal guidelines, as well as many state-level laws and initiatives.

Federally, for example, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (or AI Executive Order) in October 2023. Designed to protect Americans from many of the potential risks of AI systems, as we outline in more detail in our blog post, the AI Executive Order directs multiple actions, including that:

  • Developers of AI systems must share safety test results with the US government if the results show that the system could pose a risk to national security.
  • The National Institute of Standards and Technology will develop standards, tools, and tests to help ensure the safety, security, and trustworthiness of AI systems.
  • Standards and best practices will be set to detect AI-generated content, authenticate official (“real”) content, and thus help protect Americans from AI-enabled fraud.
  • Advanced cybersecurity programs will be established to develop AI tools that can find and fix vulnerabilities in critical software.

Additionally, delivering on the AI Executive Order, Vice President Harris announced on March 28, 2024, that the White House Office of Management and Budget (OMB) was issuing the first government-wide policy to mitigate AI risks and harness AI benefits.

At the state level, many states are taking proactive action: a growing number have created laws and initiatives addressing specific issues related to AI.

A few examples of state-level AI laws and regulations include:

  • Colorado’s law regulating how insurers can use big data and AI-powered predictive models, in an effort to protect consumers from unfair discrimination.
  • California’s bill banning the use of bots that, without disclosure, pretend to be human when communicating with California consumers in order to sell goods or services or to influence votes.
  • Illinois’ act establishing specific parameters for the use of AI in the hiring process.

Impact on society and individuals

Why do AI regulations like the AI Bill of Rights matter?

The reality is that artificial intelligence is incredibly powerful today, and it will certainly be even more so in the future. While AI systems can benefit humans in positive ways, unfortunately, they can also cause harm to individuals and society at large, especially when developed without guidelines or regulation.

With this in mind, it’s important for people and regulators alike to take ethical considerations in AI development into account. These include:

  • Bias, discrimination, and fairness: When AI algorithms are trained on data that includes biased information, they can reproduce and even amplify unfair bias and discrimination.
  • Protection of privacy and personal data: AI models are built using large amounts of data, which can include personal data. As AI technology grows, this raises concerns about how data is collected, used, and stored.
  • Accuracy and misinformation: When relying on algorithms to make decisions or source information, it can be hard to be sure if AI outputs are accurate or trustworthy.
  • Accountability: When an error in AI occurs, especially when it causes harm or negative impact to people, who is at fault?

Future of AI regulation and AI legal issues

Just as AI technology is rapidly advancing, AI regulation is quickly unfolding and evolving in turn.

As we’ve outlined, in the US, regulations and policies like the AI Bill of Rights, the AI Executive Order, and numerous state- and local-level policies work to mitigate AI risks and guide the responsible use of AI. And the emergence of new policies, such as the OMB’s recent government-wide policy, suggests that further AI regulation and initiatives are likely on the horizon.

Similarly, other governments around the world are developing strategies for handling, researching, and regulating the use of AI. Some examples include:

European Union

On March 13, 2024, policymakers in the European Union passed the Artificial Intelligence Act (AI Act), which takes a risk-based approach, setting requirements for AI systems based on their potential risk to people’s health, safety, and rights. The AI Act places AI applications into one of four tiers of restrictions and requirements: “minimal risk,” “limited risk,” “high risk,” and “unacceptable risk” (applications in the last category are banned).

China

China has also released regulations providing guidance on developing generative AI systems. Issued by the Cyberspace Administration of China (CAC) and other government regulators, the Interim Measures for the Management of Generative Artificial Intelligence Services were finalized in July 2023.

Conclusions on an AI Bill of Rights

Artificial intelligence is becoming increasingly prevalent in more and more aspects of daily life, offering a mix of potential benefits and, unfortunately, potential risks.

By providing comprehensive guidelines for the responsible development and deployment of AI systems in the US, the Blueprint for an AI Bill of Rights strives to protect people and their rights. While this framework is nonbinding, it does establish a foundation for responsible and ethical AI systems development today and in the future, which is crucial as AI tools and systems quickly become essential across many industries.

In the legal industry, for example, the ethical use of AI for lawyers is rapidly transforming how legal professionals work.

Case in point? Our forthcoming AI functionality, Clio Duo, will be built on the foundation of our proprietary AI technology—and our platform-wide principle of protecting sensitive legal data and privileged legal communication—while adhering to the highest security, compliance, and privacy standards held throughout Clio’s entire operating system.

Learn more about Clio Duo, and how the AI solution prioritizes privacy and security, here.

Explore AI insights in our latest report

Our latest Legal Trends Report explores the shifting attitudes toward AI in the legal profession and the opportunities it brings for law firm billing, marketing, and more.

Read the report