When it comes to artificial intelligence (AI) and the law, a common refrain is that the law cannot keep pace with the aggressive growth of technology. That may seem to be the case with AI. AI dominates the news, with multiple announcements each day about its impact on our daily lives, and no aspect of life seems immune from its creep.
Can the law hope to keep up with the rapid adoption and impact of AI?
And if lawmakers are trying to match the aggressive pace of AI, how can lawyers keep track of a rapidly changing regulatory landscape?
Fortunately, there are resources that can help keep track of AI regulations. Keep reading to find out which regulators are addressing AI, who is tracking those regulations, and which trackers you should follow.
What are the biggest AI trends in the legal profession?
The legal field is changing fast thanks to legal-specific AI tools such as Harvey AI, with more launching seemingly every day. AI tools are making a significant impact by automating tasks like document creation and legal research. Lawyers can now draft contracts and other documents much faster, and predictive analytics help them forecast case outcomes based on historical data (bear in mind that AI cannot give legal advice or determine outcomes). This means less time spent on tedious work and more accurate output.
AI is also improving e-discovery by automatically sorting through large datasets to find relevant documents for litigation. Chatbots and virtual assistants are enhancing client interactions, answering common questions instantly, and helping with scheduling. When it comes to contracts, AI quickly spots key clauses and potential issues, making management more efficient.
As AI becomes more common, there’s a growing focus on ethical use, particularly around issues like bias and transparency. Overall, these advancements are making legal services faster, more accurate, and more efficient.
Clio Duo, a dynamic AI-powered partner for legal professionals, is coming soon! Get notified when it launches in the Fall of 2024.
What are the legal issues with artificial intelligence and the law?
With AI being used by government agencies, corporations large and small, educators and students, and even the people in your group chat, no area of the law is left untouched.
At a high level, all legal issues with AI revolve around three key questions.
1. What action is being performed by AI?
If AI is making automated decisions that impact a person’s rights, health, or financial well-being, legal issues will arise. Unsupervised AI tools have already been found legally deficient when making decisions to fire employees.
2. What data underpins AI?
Generative artificial intelligence tools rely on vast amounts of scanned data to train their algorithms. Legal issues, like whether the scanned data has been accessed properly (copyright infringement) and whether the data is fit for the intended purpose (free from illegal discriminatory bias), must be considered with generative AI. Users of AI need to know what data their tool was trained on and where that data came from.
3. Who is using the AI technology?
Government agencies and regulated businesses will face stricter limitations on their use of AI than private actors. Police, for example, may run into constitutional law issues when using facial recognition AI that the owner of a private venue would not.
These three questions will need to be addressed whenever AI is adopted. Whether AI is used for credit decisions impacting home mortgage rates, advertising copy and product pricing, written submissions to agencies and tribunals, or anything else, organizations will have to weigh how much legal risk comes with using it.
Who is regulating artificial intelligence and the law?
Regulation of artificial intelligence is a rapidly evolving area both in the U.S. and globally.
Here’s an overview:
In the United States
AI regulation in the U.S. does not stem from a single entity but rather from a combination of federal, state, and sector-specific regulators. Some of the key players include:
- The White House Office of Science and Technology Policy (OSTP): Advises on AI policy and coordinates with other agencies. They are the agency behind the White House’s Blueprint for an AI Bill of Rights. Read more on the Bill here.
- National Institute of Standards and Technology (NIST): Works on standards for AI technologies. NIST has already released the AI Risk Management Framework (AI RMF 1.0) to help improve the ability to incorporate trustworthiness considerations into AI products, services, and systems.
- Federal Trade Commission (FTC): Focuses on consumer protection and anti-competitive behaviors in the context of AI, including new nonpublic investigatory powers involving products and services that use or claim to be produced using artificial intelligence.
- Food and Drug Administration (FDA): Regulates AI applications in medical devices and health technologies.
- Department of Transportation (DOT), including the Federal Aviation Administration (FAA) and the National Highway Traffic Safety Administration (NHTSA): Oversees AI in transportation, including autonomous vehicles and drones.
- Federal Communications Commission (FCC): Deals with AI in telecommunications, like the recent action to make AI-generated robocalls illegal.
- State Governments: Individual states may also enact laws and regulations affecting AI technologies.
Globally
AI regulation globally involves multiple international organizations and national governments, each with its own approach. Some of the key global players include:
- European Union (EU): The EU is at the forefront of AI regulation, having passed the comprehensive AI legislation known as the Artificial Intelligence Act, which aims to manage risks associated with AI systems. The law applies extraterritorially, meaning it will have a global impact on AI outside of the EU.
- United Nations (UN): Through various agencies, the UN addresses AI’s impact on areas like human rights, privacy, and international security.
- Organization for Economic Co-operation and Development (OECD): The OECD has established AI Principles that many countries have adopted, promoting innovation while ensuring AI systems are designed in a way that respects human rights and democratic values. The OECD AI Policy Observatory (OECD.AI) reviews the over 1,000 AI policy initiatives that member states have published to implement the OECD AI Principles.
- International Standards Organizations: Bodies like the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) work on technical standards for AI systems.
- National Governments: Many countries have their own regulatory frameworks and agencies responsible for overseeing AI development and its use within their territories. For instance, China has a governance framework for AI, focusing on promoting AI while ensuring security and ethical use.
Industry Groups
Industry groups play a significant and growing role in the regulation and governance of artificial intelligence, complementing the efforts of governmental and international regulatory bodies. Their involvement is critical because the fast pace of AI innovation means traditional legislative processes may lag behind technological advancements. The U.S. AI Safety Institute Consortium (AISIC), for example, is a group of organizations that have voluntarily taken on an advisory role with NIST.
To regulate AI usage at your firm, it will be important to draft an AI policy. Discover our AI template here.
Are there laws for artificial intelligence?
Yes, there are already laws that apply to artificial intelligence. The most recent and comprehensive is the EU AI Act.
What does the European Union AI Act do?
The EU AI Act establishes obligations based on the potential risks and level of impact of AI. Producers and users of AI technology will need to document risk-management evaluations for their tools. These reviews look in depth at the three questions listed above. Depending on the outcome of a review, users may need to take additional actions to reduce the risk a tool poses.
The EU AI Act deems certain types of data and actions to always be high risk. Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g., healthcare and banking), certain systems in law enforcement, migration and border management, and justice and democratic processes. If AI is used in these areas, users must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.
General-purpose generative AI models (e.g., OpenAI’s ChatGPT) must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. More powerful AI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting incidents.
Additionally, artificially generated or manipulated images, audio, or video content (“deepfakes”) must be clearly labeled as such.
EU citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights. These rights can also lead to a right of action in court if complaints are not addressed.
The EU AI Act comes into force twenty days after its publication in the Official Journal of the European Union and is fully applicable twenty-four months after its entry into force, which is expected to be in 2026, though certain provisions will become applicable earlier.
How can I keep track of emerging AI regulation?
AI regulation can come from many sources, so keeping track of all the potential regulations requires knowing which regulators apply to your concerns. For example, if you’re a lawyer representing businesses in a particular industry, following the actions of that industry’s regulators at the state and federal levels is appropriate.
Fortunately, there are many groups and organizations tracking and publishing AI-related regulations. These trackers are updated periodically, with each having a particular focus.
To help you find the right AI regulation tracker, the trackers below are listed by their focus area and coverage.
AI regulation trackers
International Association of Privacy Professionals’ (IAPP) Global AI Law and Policy Tracker
Tracker focus: Global regulation by national and international governments.
Last update at publication: February 2024.
AI’s automated decision-making capacity is already regulated by many existing privacy laws. The IAPP’s AI Governance Center has been tracking the different frameworks and approaches taken by 24 jurisdictions to date, with more expected.
Brennan Center for Justice’s Artificial Intelligence Legislation Tracker
Tracker focus: U.S. Congress legislation.
Last update at publication: March 8, 2024.
The Brennan Center’s Artificial Intelligence Legislation Tracker looks at U.S. congressional bills introduced during the current 118th Congress that would do at least one of the following:
- Impose restrictions on AI that is deemed high risk.
- Require purveyors of AI systems to conduct evaluations of the technology and its uses.
- Impose transparency, notice, and labeling requirements.
- Create or designate a regulatory authority to oversee AI.
- Protect consumers through liability measures.
- Direct the government to study AI to inform potential regulation.
Data protection bills that significantly impact AI are also included in the tracker.
Currently, there are seventy-six bills listed on this tracker.
National Conference of State Legislatures’ (NCSL) Artificial Intelligence 2024 Legislation
Tracker focus: U.S. state legislature bills with any impact on AI.
Last update at publication: March 19, 2024.
The NCSL maintains many trackers for topics regulated at the state level, and since 2021 it has issued summaries of state legislation related to AI.
The NCSL tracker lists both pending and passed legislation. The list can be filtered to your particular state, with entries also being categorized by the focus of the bills. Categories cover a variety of topics, like election interference, health data, AI provenance of data sources, private rights of action, and more.
In the 2023 legislative session, at least twenty-six states and territories introduced artificial intelligence bills, and eighteen states and Puerto Rico adopted resolutions or enacted legislation. In 2024, forty states and territories have introduced AI bills to date, with eight states and territories having already enacted legislation or resolutions.
Bryan Cave Leighton Paisner (BCLP) LLP’s U.S. State-By-State AI Legislation Snapshot
Tracker focus: U.S. state legislature bills narrowly focused on AI and automated decision-making. Laws addressing biometric data, facial recognition, and sector-specific administrative matters are omitted.
Last update at publication: February 12, 2024, with a quarterly update scheduled.
Law firm practice groups are great sources of information in their focus areas. BCLP’s Goli Mahdavi, Amy de La Lama, and Christian M. Auty publish AI legislation tracking for the United States on behalf of their firm.
Using a color-coded data visualization, you can see at a glance which states have proposed and enacted AI regulations. Further details can be found by expanding each state’s information.
Final thoughts on artificial intelligence and the law
AI may be the fastest-growing technology in history. While it may seem that such growth would outpace the legal system, the opposite is true. AI is facing a host of regulations, both proposed and enacted. Law firms researching AI will need to use legislation and regulation trackers to stay on top of this fast-paced area of law.
For more information on artificial intelligence and the law, check out our AI for Lawyers resource hub.