The Good Bot: Artificial Intelligence, Health Care, and the Law

DOJ Addresses AI in Corporate Compliance Programs

Episode Summary

Brett Mason, Michael Lowe, Callan Stein, and Nicole Giffin discuss the Department of Justice's Evaluation of Corporate Compliance Programs, with a focus on recent updates related to AI.

Episode Notes

Join Troutman Pepper Locke Partner Brett Mason for a podcast series analyzing the intersection of artificial intelligence (AI), health care, and the law.

In this episode of The Good Bot, Brett Mason and attorneys Michael S. Lowe, Callan G. Stein, and Nicole S. Giffin discuss the Department of Justice's (DOJ) Evaluation of Corporate Compliance Programs, with a focus on recent updates related to AI. They highlight the DOJ's criteria for assessing compliance programs, including design, implementation, and results. Additionally, they emphasize the importance of risk management and human oversight in AI applications, and provide insights on the new administration's stance on AI and its potential impact on corporate compliance strategies.

Episode Transcription

The Good Bot: Artificial Intelligence, Health Care, and the Law — DOJ Addresses AI in Corporate Compliance Programs
Host: Brett Mason
Guests: Mike Lowe, Cal Stein, and Nicole Giffin
Recorded: February 24, 2025
Aired: March 25, 2025

Brett Mason:

Welcome to The Good Bot, a podcast focusing on the intersection of artificial intelligence, healthcare, and the law. I'm Brett Mason, your host. As a trial lawyer at Troutman Pepper Locke, my primary focus is on litigating and trying cases for life sciences and healthcare companies. However, as a self-proclaimed tech enthusiast, I'm also deeply fascinated by the role of technology in advancing the healthcare industry.

Our mission with this podcast is to equip you with a comprehensive understanding of artificial intelligence technology, its current and potential future applications in healthcare, and the legal implications of integrating this technology into the healthcare sector. If you need a basic understanding of what artificial intelligence technology is and how it's being integrated into healthcare, I highly recommend you start with our first episode of season one. In that episode, we lay the groundwork for understanding the technology that is the basis for all of our discussions.

I'm really excited to welcome to the podcast today several of my colleagues from Troutman Pepper Locke, including Mike Lowe, Cal Stein, and Nicole Giffin. Mike, if you could kick us off by introducing yourself. Thanks for being here.

Mike Lowe:

Thanks a lot, Brett, for having me. I'm a partner at Troutman Pepper Locke in the Litigation Practice Group. I was a federal prosecutor for 25 years before I joined the partnership here. While I was a federal prosecutor, among other things, I investigated corporate fraud. So, I've got a familiarity with that. My practice at Troutman Pepper Locke includes white-collar criminal defense, internal investigations, NIL, and I'm also part of our Health Care + Life Sciences Litigation Practice Group. Once again, thanks for having me, and I'm looking forward to it.

Brett Mason:

Thanks, Mike. Cal, why don't you introduce yourself to our listeners?

Cal Stein:

Hi, everyone. Hi, Brett. Thank you for having me as well. My practice is very similar to Mike's. We work together on a lot of different things, including investigations, both internal and government. But we also work a lot with our clients proactively and preemptively on their compliance programs to help them avoid the type of government investigation and enforcement action that can really be disruptive. So, today's topic is near and dear to my heart and I'm really happy to be here.

Brett Mason:

Wonderful. And Nicole, finish us up here.

Nicole Giffin:

Thanks, Brett. I'm Nicole Giffin. I'm an associate in Troutman Pepper Locke's White-Collar Litigation and Investigations Practice Group. I have a number of years of experience working with corporate and individual clients in white-collar matters, and also, assist with regulatory compliance matters to help avoid investigations.

Brett Mason:

Well, thanks so much to all three of you. As the listeners can probably guess, we are talking today in an area that touches on white-collar and corporate compliance. Specifically, today, we're going to talk about the Department of Justice's Evaluation of Corporate Compliance Programs and the recent updates focusing on artificial intelligence. Before we jump into that, Mike, can you kick us off by telling us about the DOJ's Evaluation of Corporate Compliance Programs in general?

Mike Lowe:

Brett, DOJ has a document posted on their website that I think everyone in this space should be familiar with. It's titled Evaluation of Corporate Compliance Programs, and it gets updated periodically. In fact, the last update was in September of 2024. It effectively serves as a roadmap for federal prosecutors to evaluate the effectiveness of corporate compliance programs when they consider whether or not to bring charges or resolve charges against a business organization.

Essentially, when federal prosecutors are considering whether they're going to charge a company, the Justice Manual, which, for those of us who were longtime DOJ employees, used to be known as the United States Attorney's Manual, instructs them to consider and evaluate a host of factors. One of those is the effectiveness of a corporate compliance program. The document I mentioned earlier, the DOJ's Evaluation of Corporate Compliance Programs, I would say it's really the most detailed guidance document that there is, and it's what prosecutors will pretty much exclusively use to evaluate and make informed decisions about the effectiveness of a corporate compliance program.

Now importantly, when prosecutors do evaluate a corporate compliance program, there are three fundamental questions that DOJ wants those prosecutors to ask. First, they want them to look at design. Is the corporate compliance program well designed? Second, they want them to look at implementation. Is the corporate compliance program being applied earnestly and in good faith? Another way to put that, is the compliance program adequately resourced? Are you putting enough money as a corporation into this program? And is it really empowered to function effectively? Lastly, results. Does the corporate compliance program actually work in practice?

I think it's fair to say that the DOJ's Evaluation of Corporate Compliance Programs, it's really anchored by those three fundamental questions I just mentioned. Within each of them, there's guidance that gives prosecutors what I think we call the hallmarks of what DOJ considers to be part of an effective corporate compliance program. And it gives them questions to ask when they evaluate that corporate compliance program.

Final point I'm going to make on this, Brett. Prosecutors really consider corporate compliance programs at two points in time. First, the time of the offense, and second, the time of the charging decision or resolution.

Cal Stein:

Mike, these are really things that we deal with on a regular basis with our corporate clients, both during investigations and as I alluded to before, even in advance. The three fundamental questions that you just described, I mean, those have been around for some time, even as this document has been revised by the Department of Justice over, and over, and over again.

To me, I always look at it as the second and third of those questions, implementation and results. Those being kind of the most important ones in practice. Not to say design isn't important, but the first question about design really focuses on what is on the page, what has the corporation put down in writing in its corporate compliance program documentation. Whereas, those latter two questions about implementation and results, those focus on what's actually happening. What the corporation is actually doing in terms of compliance.

In my experience, the answers to those questions really answer the biggest question of all that the Department of Justice is going to have, which is, look, is this corporate compliance program just a "paper program," or is it something that the corporation actually follows and actually utilizes to ferret out compliance violations? Because that's what the Department of Justice really wants to see.

Mike Lowe:

Yes, Cal, I think that's a fair way to describe it. I mean, the reality is what they're looking for is exactly what you said. Is this legit? Are you taking this seriously? Or is this just something to protect you when DOJ comes calling later on?

Brett Mason:

I appreciate those points, especially what you said, Cal, there, because we know there's a well-known phrase, "Best laid plans." Just because you design something really well does not mean it's going to be effective, which is why the implementation and results are so important.

Let's talk now about the update that Mike mentioned earlier from 2024. Does the DOJ consider the use of artificial intelligence in its evaluation of corporate compliance programs? Nicole, why don't you let us know what you think on that?

Nicole Giffin:

Thanks, Brett. Yes, it's really interesting. In 2024, we saw an increase in the DOJ's focus on AI across the board. So, in February 2024, Deputy Attorney General Lisa O. Monaco stated that the DOJ is going to be focused on AI, including the impacts and the risks that AI poses. She said it again in March of 2024. She also foreshadowed that the DOJ will pursue robust enforcement actions related to AI, and will seek stiffer sentences for offenses that were made more dangerous by the use of AI.

In February of 2024, she announced the formation of Justice AI, and that's to help the DOJ better understand and prepare for how AI will affect its mission, and to accelerate its potential for good while guarding against the risks that are associated with AI as well.

Cal Stein:

I want to jump in and talk about something that Deputy Attorney General Monaco actually said in those comments that you just referenced. I was really struck by a couple of lines. The first one that really got me, she said this, she says, "Every new technology is a double-edged sword, but AI may be the sharpest blade yet." Boy, to me, that comment really, really crystallized what I think is the great unknown about AI, both in terms of the benefits that it can provide, which could be substantial. But also, really, the serious risks that it presents, which, of course, underscores the need for strict compliance and a robust compliance program for corporations. I think, the mere fact that Deputy AG Monaco called AI the sharpest blade yet really, really highlights, at least for me, the focus that DOJ is going to have on AI now and going forward.

Nicole Giffin:

Yes, and to your point, Cal, so in September of 2024, the DOJ did update its Evaluation of Corporate Compliance Programs guidance document that Mike explained earlier. For the first time, the DOJ included considerations regarding a corporation's use of AI, both in its compliance program and in business lines.

Brett Mason:

Following up on that, Mike, what did the DOJ say about the use of artificial intelligence in its updates to the Evaluation of Corporate Compliance Programs last fall?

Mike Lowe:

The September 2024 updates with respect to AI focus on how the company assesses the risks that are associated with the use of AI, as well as how the company manages and monitors the risks. The prosecutors are instructed that they're supposed to consider how the company actually contemplates the risks, their risk management, and their oversight of AI in both the company's business lines and in their compliance program. So, think about that. DOJ wants prosecutors to look at how companies are contemplating the risk of AI in their business line, not just in their compliance program.

The updates also say that prosecutors should consider how the company assesses and manages those risks from AI from the internal and external perspectives. So, for example, the guidance instructs prosecutors to ask questions like, "How is the company curbing any potential negative or unintended consequences that result from the use of technology, both in the commercial business and in the compliance program?" They want the prosecutors to ask, "How is the company mitigating the potential for deliberate and reckless misuse of technology, including by company insiders?" They want prosecutors to ask, "To what extent is the company using AI in its business, or as part of its compliance programs? Are controls in place to monitor the trustworthiness, the reliability?"

They want prosecutors to consider whether controls exist to ensure that technology is used only for the intended purpose. Another important question, what baseline of human decision-making is used to assess the AI and how is accountability over the use of AI monitored and enforced? One final question I'll mention that is supposed to be considered is, how does the company train its employees on the use of emerging technologies such as AI?

Cal Stein:

Mike, when you go through all of those questions, there are really two kind of keys that jumped out at me at least. First, is the focus on prevention by mitigating risk. Again, I think this goes back to the great unknown about AI and that quote that I gave a few minutes ago by Deputy AG Monaco. I mean, clearly the Department of Justice is concerned about AI and the dangers it poses, even if it's not able right now to articulate fully what those concerns are, or maybe they don't even really realize what all of those concerns are.

The second thing that jumped out at me is at least a significant focus on human inputs with respect to AI. This is something we've seen elsewhere with AI, with our clients in the healthcare space and even in our own profession, even in the legal profession. The concern that as AI develops, it is going to be doing more and more work, and corporations are going to need to take active steps to ensure that human beings are still the ones making certain decisions.

I'll give an example. I mentioned in the healthcare space, we are starting to see this where AI can effectively read medical tests or x-rays or things like that, but medical professionals, i.e., human beings, are still the ones that have to do the diagnoses. They can take and utilize the work that AI is doing, but a human being must still use his or her expertise and knowledge to render an ultimate decision. I think that is one of the big themes from this guidance coming out of DOJ.

Brett Mason:

Yes, Cal, I agree with that. It's very similar to what we see in the Colorado AI Act, which is kind of being held up as the sort of preeminent state-based act at this point. And anytime there's a high-risk use of artificial intelligence, human involvement is required. I think that is especially important in the healthcare space.

I will note, though, there was a peer-reviewed study done last year where they had a group of doctors determining a diagnosis without AI. They had a group of doctors using AI, and then they had just the AI. Guess which group was more accurate? Just the AI. It was more accurate than the group of doctors by themselves, and more accurate than the group of doctors with AI. So, I understand and appreciate the risks and concerns, and I agree we should definitely still have humans involved in important decision-making, but we need to realize that in some instances, AI may be more accurate than human decision-making, and we need to find the balance between trusting AI and its accuracy and making sure we're not relying on it too much. I think that balance is something DOJ is playing with and thinking about in these compliance programs as well.

Cal Stein:

Yes, I mean, I think that's right, Brett. One thing we haven't really talked about yet, but I'll mention here is the concept of updating a compliance program and changing it and revising it based on your experiences with it as a corporation. I think that what you just described is one of the places where companies should be looking and where I would suspect DOJ will be looking to kind of find that balance between what do human beings need to be doing, what should they be doing, what do we want them to be doing, but also what works from an AI perspective. I mean, that is always something we talk about with clients about what DOJ wants to see in a compliance program. Are you updating it? Are you changing it? Are you filling in gaps so that it actually works better in practice? And I think that's one area where you could see that going forward.

Brett Mason:

Yes, I agree, I appreciate that. That takes me to my next question that I had for you guys, is what should companies be thinking about with respect to their corporate compliance program if they're using artificial intelligence or they're thinking about using artificial intelligence?

Nicole Giffin:

Yes. If companies are using AI already or they're contemplating using AI at all, they need to start thinking now about how they will integrate that into their compliance programs. So, going back to the questions that Mike mentioned at the beginning about how the DOJ evaluates corporate compliance programs and the three fundamental questions, the first question is design. Is the corporate compliance program designed well? Really, companies need to assess the risks that are associated with the use of AI, if they're using it in their business or their compliance program, and they need to assess those risks from the external and the internal perspective.

Once they do that, they need to integrate those risks into the broader enterprise risk management strategies and compliance program, policies, procedures, and other things to ensure that they are designing a well-functioning compliance program, including addressing the risks associated with the use of AI.

Mike Lowe:

I'll add, we know now that DOJ is looking at how companies deal with AI as they decide whether or not to charge a company and whether or not to resolve an investigation in a favorable way for the company. So, as Cal pointed out, DOJ doesn't really yet know what all that means, because AI is evolving, it's a new space. But I think it's important for a company in getting out in front of this, to make sure they implement some effective ways to curb the potential unintended consequences from the use of a new technology like AI, as well as develop a strategy to mitigate against the potential deliberate or reckless misuse of AI, including by company insiders.

Cal Stein:

Yes. I mean, the takeaways that I'll kind of focus on are consistent with all the things we've been saying thus far, maintaining human oversight of AI, however you're using it, and of AI decision making, correcting decisions made by AI that are inconsistent with the company's values or with applicable law to the points that Mike was just making. And then also, as with any new technology or new feature of your compliance program, you have to train your employees on what they can be doing with AI, what they can't be doing with AI, as well as whatever company policies or codes of conduct you have related to AI.

Brett Mason:

I appreciate these takeaways, guys. Again, from doing these episodes on various different topics, you see similar themes about the use of artificial intelligence across industries. So, what DOJ is saying in its evaluation of corporate compliance programs is really not that different from what we're seeing across the board. At the end of the day, it comes down to having controls in place to monitor and test to ensure that whatever artificial intelligence technology is being used is being used for its intended purpose, and that the technology itself is trustworthy, reliable, and complies with the company's policies, procedures, and code of conduct. Those are very similar themes we're seeing throughout industries as artificial intelligence is being integrated in.

So, let's talk about, now we're here in 2025, we're in the new year, and we're not only in a new year, we're in a new administration. Is there anything else companies should be thinking about regarding their corporate compliance programs if they use AI, or again are thinking about using AI?

Mike Lowe:

Well, Brett, what I'll say is it's going to be interesting to see what happens in the coming months and years. The new administration, the Trump administration has made clear that they treat AI as a priority. And I think there's potentially a different approach to AI from this administration to the last. By way of example, within a few days of his inauguration on January 23 of 2025, President Trump issued an executive order titled, “Removing Barriers to American Leadership in Artificial Intelligence.” When you read that executive order, you see that it has the stated purpose of sustaining and enhancing America's global AI dominance, “To promote human flourishing, economic competitiveness, and national security.”

So, what I'll say is it remains to be seen how, if at all, this will affect DOJ's focus on AI and, as relevant to the Criminal Division's Evaluation of Corporate Compliance Programs, whether AI risk and risk management will remain a focus in evaluating compliance programs.

Cal Stein:

I agree with that, Mike. I mean, we've already seen some stark differences between the new administration and the outgoing administration, and it wouldn't be surprising to see some changes here as well, including some policies that are consistent with the executive order you just mentioned. But at least for the time being, at least as we sit here right now, the current version of the Evaluation of Corporate Compliance Programs document does contemplate corporations assessing, managing, and monitoring their AI risk. Let's not forget those comments by Deputy AG Monaco signaling robust enforcement and strict penalties around the misuse of AI.

Let me go back to those comments exactly because there were some pretty strong words by Deputy AG Monaco. Even though that was the old administration, I think it's worth pointing them out. This is one of the things she said. She said, “If we determine that existing sentencing enhancements don't adequately address the harms caused by misuse of AI, we will seek reforms to those enhancements to close the gap.” Again, I'll say it again, that was the old administration. There's a new one here, but those are still really strong words. And at least as of right now, that is all baked into what we see in this corporate compliance program guidance.

Nicole Giffin:

Yes, and I think it's yet to be seen what will happen with the new administration and how they consider the use of AI. But as we move towards the continued use of AI and even more use of AI in businesses and compliance programs, companies really need to consider maintaining robust compliance programs around the use of AI.

Brett Mason:

Well, I definitely will be interested to stay up to date on any changes, especially as this new administration settles in and they make their priorities with DOJ very clear. So, appreciate all three of you being on and look forward to having you again to talk with us about any updates.

Mike Lowe:

Thanks.

Cal Stein:

We'd be happy to come back, Brett.

Brett Mason:

Thanks to our listeners. Please don't hesitate to reach out to me at brett.mason@troutman.com if you have questions, comments, or any topic suggestions. You can also subscribe and listen to other Troutman Pepper Locke podcasts wherever you listen to podcasts, including Apple, Google, and Spotify. Thanks so much, everyone.

Copyright, Troutman Pepper Locke LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper Locke. If you have any questions, please contact us at troutman.com.