Contract management's reliance on technology is rapidly evolving, with AI-led digital interventions increasing efficiency, curtailing costs, and driving accuracy compared with the traditional era. That said, my view is contingent on the risk factors that will continue to influence contract authoring, negotiation between parties, and the ability of the technology to scale faster in today's world. We are certainly experiencing the growing potential of digital interventions, but it is also imperative not to circumvent the regulatory safeguards controlling their use.
AI automation has had a definite impact in enabling transparency, though lean, specialized contracting professionals are still needed to mitigate the associated risks. Thus, on one hand AI invites continued skepticism, while on the other it promises to be a game-changer.
Let’s do a reality check.
It is critical to capitalize on the AI revolution and focus on sincere adoption, aligned with the "Responsible Use of AI" motto and ethical policy guidelines. Training AI to perform a defined set of legal activities within contracting puts us on the path to unparalleled efficiency and significant cost savings.
The contracting lifecycle is complex across industries, and many contracts are highly bespoke, with painfully long lead times. The game-changer here is AI, which accelerates critical phases within the lifecycle, such as:
- Auto-creation of contract summaries (with the most updated information from amendments)
- Auto-gap analysis and risk assessment
- Auto-detecting material deviations/redlines and objective risk scoring (see the sketch after this list)
- Auto-routing to playbook fallback clauses while negotiating
- Auto-triaging of requests with a fast turnaround
- Auto-reading tables and information from scanned copies of contracts (I call it the reverse OCR)
- Auto-linking of contracts as per the parent-child hierarchy
- Auto-directing for approvals at pre-execution
- Auto-capture of contractual obligations and more
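To make the deviation-detection item above concrete, here is a minimal sketch of similarity-based redline detection and risk scoring using only the Python standard library. The playbook contents, the clause weights, the 0.3 escalation threshold, and the `score_deviation` function are hypothetical illustrations, not features of any particular CLM product; production tools typically rely on LLMs or trained classifiers rather than raw text similarity.

```python
# Illustrative sketch only: playbook text, weights, and threshold are assumed.
from difflib import SequenceMatcher, unified_diff

# Hypothetical playbook: standard clause text, a fallback position, and a
# business-criticality weight per clause.
PLAYBOOK = {
    "limitation_of_liability": {
        "standard": (
            "Liability is capped at the fees paid in the twelve months "
            "preceding the claim."
        ),
        "fallback": (
            "Liability is capped at two times the fees paid in the twelve "
            "months preceding the claim."
        ),
        "weight": 0.9,  # assumed criticality weight
    },
}

def score_deviation(clause_id: str, proposed_text: str) -> dict:
    """Compare a counterparty's proposed clause against the playbook and
    return a similarity-based risk score plus a word-level redline diff."""
    entry = PLAYBOOK[clause_id]
    similarity = SequenceMatcher(None, entry["standard"], proposed_text).ratio()
    # Risk grows as the text drifts from the standard, scaled by the weight.
    risk = round((1.0 - similarity) * entry["weight"], 2)
    redline = "\n".join(
        unified_diff(entry["standard"].split(), proposed_text.split(), lineterm="")
    )
    return {
        "similarity": round(similarity, 2),
        "risk_score": risk,
        # Suggest the fallback only when risk crosses an assumed threshold.
        "suggested_fallback": entry["fallback"] if risk > 0.3 else None,
        "redline": redline,
    }

if __name__ == "__main__":
    proposed = "Liability is unlimited for all claims arising under this agreement."
    print(score_deviation("limitation_of_liability", proposed))
```

In a real pipeline, a score like this would feed the objective risk scoring mentioned above, and crossing the threshold would trigger the auto-routing to playbook fallback clauses.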
These features eliminate manual errors, improve accuracy, reduce cycle times, and save a considerable amount of money. There are also "art of the possible" initiatives still in exploratory phases within the scope of AI. It is therefore important to understand and consider the global regulations affecting AI, so compliance is not overlooked.
Starting with the September 23, 2024, update, the Criminal Division of the U.S. Department of Justice (DOJ) revised the Evaluation of Corporate Compliance Programs (ECCP) guidelines to include the potential risks of using AI (according to Justice.gov), directing companies to take proactive steps and be prepared to continuously assess the probable risks, and resultant impact, of using newer technologies, including AI.
Globally, countries are enacting regulations bound by the ethical principles of transparency, accountability, and fairness to govern the use of AI in contracts and address the various risks it poses.
Across most AI-related regulations globally, there appears to be a unified approach to advancing AI but a varied approach to regulating its use.
There are some interesting guidelines, such as:
- Rule 5.3 of the American Bar Association (ABA) Model Rules of Professional Conduct, "Responsibilities Regarding Nonlawyer Assistance," places a specific onus on lawyers' ethical obligations when they rely on "non-human assistance," including legal technologies such as AI. Lawyers are responsible not only for diligently supervising the use of AI in legal practice (including validating its accuracy and dependability) but also for being adequately trained to use AI and remaining compliant with their obligations under this rule.
- UNESCO's Recommendation on the Ethics of Artificial Intelligence, an international guideline, highlights the significance of human oversight of AI systems and the need for ethical guardrails to preserve human rights, inclusivity, and the environment.
Several global regulations have been designed around key attributes of the responsible-AI framework, such as data privacy, systemic bias, fairness, transparency, and accountability:
The European Union (EU)
The EU enacted and adopted a comprehensive AI law in June 2024.
The EU's Artificial Intelligence Act (AI Act) is the first comprehensive AI legislation to categorize AI systems by risk level, from minimal to high risk. It stresses the key ethical principles of transparency, accountability, and fairness. Contract management tools usually fall within the "high-risk" category because of their role in regulating the contract lifecycle, maintaining the integrity of contracts, and managing risk. Additionally, the EU has introduced model contractual clauses to ensure AI systems are trustworthy, fair, and secure. These clauses enable organizations to procure and deploy AI responsibly, emphasizing data protection, non-discrimination, and compliance with fundamental rights.
United States
The U.S. lacks a unified AI law and instead regulates AI through sector-specific guidelines; agencies such as the Federal Trade Commission (FTC) focus on transparency and bias prevention. In the United States, ethical principles for AI center on responsibility, equity, and transparency. The Department of Defense (DoD) has outlined five key principles for AI: responsible, equitable, traceable, reliable, and governable. These principles ensure that AI systems in contract management are developed and deployed with care, fairness, and accountability. The U.S. also emphasizes mitigating risks such as data breaches and unintended bias, encouraging organizations to conduct regular audits and maintain clear documentation of AI processes to uphold ethical standards.
China
China is rapidly developing its AI legal framework, addressing issues like copyright ownership of AI-generated content and data privacy. The country aims to establish a comprehensive AI governance system by 2030, focusing on responsible AI development, data security, and ethical considerations.
Global trends: Many countries adopt a risk-based approach, categorizing AI applications by societal impact. Common themes include preventing bias, protecting privacy, and ensuring accountability.
Bridging the Gap
For multinational organizations, aligning with AI regulations is crucial. Yet the use of AI in law remains alarmingly opaque: the tools provide no systematic access, publish few details about their models, and report no evaluation results at all. This opacity makes it exceedingly challenging for lawyers to procure and use AI products, and the lack of transparency threatens lawyers' ability to comply with ethical and professional-responsibility requirements. Without access to evaluations of specific tools and transparency around their design, lawyers may find it impossible to meet these obligations. Alternatively, given the high rate of hallucinations, lawyers may find themselves verifying every proposition and citation these tools provide, defeating the very efficiency gains legal AI tools are supposed to deliver.
We will have to wait and see how effectively AI regulations monitor and govern its use, as AI undoubtedly holds the key to transforming the future of legal services.