Proactive Governance: Managing and Monitoring the Risks of Artificial Intelligence
September 23, 2024: Extract from Justice Manual of the US Department of Justice (DOJ) revising its guidelines for “Evaluation of Corporate Compliance Programs” (ECCP)
In today’s technology landscape, Artificial Intelligence (AI) is advancing rapidly. Generative AI in particular marks a momentous shift in the way we operate, and it has been interesting to watch the legal industry step forward to adopt the intelligence that AI can generate.
However, there is an imperative need to control AI and curtail the risks it poses. The need to regulate AI responsibly has deepened as adoption spreads across the globe, bringing potential risks and challenges with it. Both the information fed to an AI model for training and the output it produces are difficult to rely on without human oversight, because an AI system will generate an approximation when it does not know an answer. Ongoing risks in the areas of security, privacy, ethics, and governance also continue to shadow the use of AI.
It is, hence, critical for AI to operate within the four corners of a regulatory framework and defined practices for driving transparent and accurate decisions.
The DOJ’s earlier ECCP guidance (2023) specified what “…federal prosecutors should keep in mind while investigating any corporation prior to bringing any charges…” and was designed to ensure that corporations make significant improvements and continuing enhancements to their compliance programs.
On September 23, 2024, the Criminal Division of the DOJ revised the ECCP guidelines to address the potential risks of AI. The ECCP now details how companies should take proactive steps and be prepared to continuously assess the probable risks, and resulting impact, of newer technologies including AI. It further calls for a robust governance approach that regulates the use of these technologies and incorporates them into the company’s compliance program. The revisions focus on critical areas including managing the potential risks of AI and its integration with other emerging technologies; protecting whistleblowers and reviewing the company’s processes for handling complaints; and ensuring governance and compliance.
AI has been a focus for DOJ; Deputy Attorney General Lisa Monaco described AI as “a double-edged sword” and announced that federal prosecutors will seek harsher sentences for crimes “made significantly more dangerous” due to the use of AI.
The updated ECCP revisions are likely to impact the way companies structure their compliance programs and ensure they take reasonable and appropriate steps to enhance these programs. The focus, however, is on mitigating any probable and continuing risks associated with AI.
Considering the September 2024 revisions to the ECCP, companies deploying AI solutions in their commercial business operations should keep several guiding questions in mind:
- Do companies have robust processes to monitor and manage AI-induced risks?
- Could any of the probable risks leave companies non-compliant with criminal laws?
- Is the AI risk management process integrated with the company’s broader enterprise risk management (ERM) strategies?
- Do companies have robust governance strategies to detect, mitigate, and prevent any adverse consequences due to the use of AI?
- Do companies regulate the use of AI so that it remains compliant with their codes of conduct and applicable laws?
- Do companies have defined approaches to track, monitor, and prevent intentional abuse of AI, including abuse by insiders?
- Do companies have defined controls to regulate the use of AI for intended purposes?
- Do companies take appropriate steps to ensure accountability for the use of AI?
- Do companies have processes to train employees on the use of AI?
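The guiding questions above can be operationalized as an internal checklist that a compliance team reviews periodically. As a purely illustrative sketch (the structure, field names, and sample entries below are assumptions for illustration, not anything prescribed by the ECCP), a minimal risk register in Python might look like:

```python
from dataclasses import dataclass

@dataclass
class ControlCheck:
    """One governance question and its current compliance status."""
    question: str
    satisfied: bool
    evidence: str = ""  # e.g., a link to a policy document or audit record

def open_gaps(checks):
    """Return the questions not yet backed by a documented control."""
    return [c.question for c in checks if not c.satisfied]

# Hypothetical entries mirroring a few of the guiding questions above.
checks = [
    ControlCheck("Robust processes to monitor and manage AI-induced risks?",
                 True, "Quarterly AI risk review"),
    ControlCheck("AI risk management integrated with enterprise risk management (ERM)?",
                 False),
    ControlCheck("Employees trained on the approved use of AI?",
                 True, "Annual compliance training"),
]

print(open_gaps(checks))
```

A non-empty result flags the areas a compliance program still needs to address, which is the kind of self-assessment the revised ECCP encourages companies to perform proactively.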
In a nutshell, the revised ECCP directs prosecutors to evaluate companies’ efforts to track, monitor, and prevent the probable risks of AI, to ensure compliance and promote the responsible use of AI systems. The revisions also give companies the opportunity to take corrective steps to enhance their compliance programs and to actively assess any potential misconduct or shortcomings.
Also, the NIST (National Institute of Standards and Technology) has created an AI Risk Management Framework (AI RMF) that provides a direction to identify risks and guidance to promote the responsible use of AI.
A compliance framework for AI, or any other newer technology, rests on multiple measures and controls for evaluation across the entire lifecycle: from the inception stages of design and development through deployment to continuous monitoring and mitigation of emerging risks. The revisions to the ECCP also signal the DOJ’s expectation that companies proactively monitor and enhance their compliance programs, policies, and procedures.
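To make the lifecycle idea concrete, one way to document it is a simple mapping of controls to each stage. This is a hedged sketch only: the stage names and controls below are illustrative assumptions, not requirements drawn from the ECCP or the NIST AI RMF.

```python
# Illustrative mapping of compliance controls to AI lifecycle stages.
# Stage and control names are assumptions for illustration only.
LIFECYCLE_CONTROLS = {
    "design":      ["risk assessment", "intended-use definition"],
    "development": ["bias testing", "data-provenance review"],
    "deployment":  ["access controls", "human-in-the-loop sign-off"],
    "monitoring":  ["incident logging", "periodic model audit"],
}

def controls_for(stage: str) -> list[str]:
    """Look up the controls expected at a given lifecycle stage."""
    return LIFECYCLE_CONTROLS.get(stage.lower(), [])

print(controls_for("deployment"))
```

Keeping such a mapping current, and evidencing that each control is actually exercised, is one way a company can demonstrate the proactive monitoring the revised ECCP expects.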
We will have to wait and see what more is to come…
References
- https://www.justice.gov/criminal/criminal-fraud/page/file/937501/dl
- National Institute of Standards and Technology (NIST) is one of the nation's oldest physical science laboratories and part of the U.S. Department of Commerce. [Refer ‘National Artificial Intelligence Initiative Act of 2020 (P.L.116-283).’]
- https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF
- https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook