The Good Bot: Artificial Intelligence, Health Care, and the Law

The Growing Role of State AGs in AI Regulatory & Enforcement Issues

Episode Summary

Brett Mason, Chris Carlson, and Michael Yaghi discuss the growing role of state attorneys general in regulatory and enforcement issues around AI.

Episode Notes

Join Troutman Pepper Locke Partner Brett Mason for a podcast series analyzing the intersection of artificial intelligence (AI), health care, and the law.

In this installment of The Good Bot, Brett is joined by partners Chris Carlson and Michael Yaghi to discuss the growing role of state attorneys general in regulatory and enforcement issues around AI. They discuss the recent first-of-its-kind settlement that gives a first glimpse into what state AGs will be focusing on regarding companies' use of this novel technology.

Episode Transcription

The Good Bot: Artificial Intelligence, Health Care, and the Law — The Growing Role of State AGs in AI Regulatory & Enforcement Issues
Host: Brett Mason
Guests: Chris Carlson and Michael Yaghi
Recorded: October 14, 2024
Air Date: January 21, 2025

Brett Mason:

Welcome to The Good Bot, a podcast focusing on the intersection of artificial intelligence, healthcare, and the law. I'm Brett Mason, your host. As a trial lawyer at Troutman Pepper [Locke], my primary focus is on litigating and trying cases for life sciences and healthcare companies. However, as a self-proclaimed tech enthusiast, I'm also deeply fascinated by the role of technology in advancing the healthcare industry.

Our mission with this podcast is to equip you with a comprehensive understanding of artificial intelligence technology, its current and potential future applications in healthcare, and the legal implications of integrating this technology into the healthcare sector. If you need a basic understanding of what artificial intelligence technology is and how it's being integrated into healthcare, I recommend you start with the first episode of this podcast. In that episode, we lay the groundwork for understanding the technology that is the basis for all of our discussions.

I'm excited to welcome two of my colleagues and partners at Troutman Pepper [Locke] to the podcast today, Michael Yaghi and Chris Carlson. Both are partners in our firm's state attorneys general practice group, which is within the regulatory investigations, strategy, and enforcement practice group. So, Mike and Chris, welcome to the podcast today.

Chris Carlson:

Thanks for having me.

Michael Yaghi:

Thank you. It's a pleasure to be here.

Brett Mason:

So, I'm excited to have you on, because we wanted to talk about the growing role of state attorneys general in regulatory and enforcement issues around artificial intelligence, including a recent first-of-its-kind settlement that gives a first glimpse into what state attorneys general will be focusing on regarding companies' use of this novel technology.

But before we jump into that, for our listeners who are not as up to date on what state attorneys general do and the actions they take, let's take a step back. So, Chris, why don't you talk about who the state attorneys general are and what laws they typically enforce?

Chris Carlson:

Of course, Brett. And again, thanks for having me on. At a high level, I know we're going to discuss what the Texas Attorney General has described as a first-of-its-kind settlement, but this settlement is right in the wheelhouse of state attorneys general. State AGs, at their core, play a mixed role of policy, politics, and the law. And within that three-part construct, each state has a consumer protection division. Those consumer protection divisions have a specific state law that they are charged with enforcing, and that they will prosecute under when evaluating whether companies are being truthful and accurate in their representations to state consumers.

Brett Mason:

Now, are those the laws referred to as UDAP laws?

Chris Carlson:

Yes, absolutely. Sometimes they're referred to as UDAPs, which is just an acronym meaning Unfair or Deceptive Acts or Practices.

Brett Mason:

Mike, how do those UDAP laws compare to laws that identify specific statutory violations?

Michael Yaghi:

Yes, it's a great question. The state legislatures gave their state AGs broad authority under these UDAP laws. They generally prohibit unfair and deceptive trade practices, and those standards aren't defined, so they're very broad, right? They're remedial, meant to protect consumers and constituents in each state. You see a lot of media coverage, at the federal level and even at the state level, and attention from politicians about the need for AI regulations and laws to govern AI, and that may be true. But the states, at least from their perspective, feel like they don't really need a slew of new laws, right? They're going to use the broad concepts of unfair business practices and deceptive business practices whenever AI is used in any unfair or deceptive way.

And we've seen this in other contexts. You've seen it with opioids, cryptocurrency, e-cigarettes, and data privacy, in many different industries. As technologies and industries expand and grow, and their business models and marketing campaigns change, these laws are relied on by the states to regulate what's taking place in the market. I know Chris is going to talk shortly about the Texas settlement, but that's a perfect example of how states are going to rely on these very broad laws to say, "Hey, company A, if you're doing something that we think is unfair or deceptive with the use of your AI tools, we're going to rely on these statutes to come after you and investigate you, and potentially either sue you or resolve these matters through settlement."

Brett Mason:

Thanks for that, Mike. Before we talk with Chris about the Texas settlement, Mike, would your advice to companies now be that they shouldn't wait for specific regulations to be created around the use of artificial intelligence, but instead should be looking at the laws that currently govern when they're thinking about adopting these types of technologies?

Michael Yaghi:

Yes, absolutely. AI is exciting, I agree, it's very exciting, and it's going to create efficiencies, reduce costs for companies, and help companies run their businesses in many different ways, right? It doesn't even have to be marketing per se. But absolutely, companies need to focus on how they're using AI and ensure that there's no unintended unfair or deceptive consequence, right? Because the states aren't going to look at it from an intent standpoint. The key with deceptive trade practices from state AGs under their UDAP laws is that it's a lower burden, right?

In garden-variety private fraud, there are a lot of elements a plaintiff has to prove, like knowledge, intent, et cetera. Those are not required from a state AG perspective, so it's a much lower burden. My point is, if a company is excited about deploying AI, whether it's helping with a marketing campaign, helping with claims coverage issues in a healthcare context, or sorting through massive amounts of data to support business operations, you just have to make sure it's not being deployed in a way that creates an unfair or deceptive outcome.

For example, California Attorney General Bonta two years ago sent a notice letter to healthcare CEOs basically saying, "We're looking at how you're going to use AI and whether there's going to be some unintended ethnic or racial inequality in the outcomes of whatever decisions you're using AI for in your healthcare business." So, that's an example, and again, intent is not necessary, right? Companies need to focus on: okay, how are we going to use AI? Let's make sure that's not opening the door to some state oversight or enforcement that we are not thinking of.

Chris Carlson:

Mike's absolutely right, and to put a finer point on what he's saying: we were at the National Association of Attorneys General's Consumer Protection Conference last week, and there was a Massachusetts attorney who essentially said, "If you're waiting for Massachusetts to put out regulations related to AI, you're too late. Those regulations are called our UDAP statute. And if you think we're waiting around to enforce what we view to be misrepresentations that cause consumer harm just because a regulation or a law hasn't passed, you're waiting too long."

Brett Mason:

I appreciate those examples. Something you mentioned, Mike, about what the California AG was saying takes me back to a theme I've seen running through all of these episodes: when we're talking about fairness, bias, and discrimination in the use of artificial intelligence, that is even more important in the healthcare context, because we know that any artificial intelligence system is only as good as the data it's relying on. If we're relying on historical healthcare data, we know there are a lot of racial, ethnic, and other types of biases in that data.

That's even more true in the healthcare context, where we're looking to an artificial intelligence system to potentially make healthcare decisions for people. It's interesting that this theme runs through all of the episodes, and even the state AGs are saying, "Hey, we're looking to make sure that you are paying attention to these potential pitfalls in the systems."

Michael Yaghi:

Yes, that's exactly right, because if the underlying data that's driving the AI is problematic and has disparities in it, those disparities don't even need to be intentional. If the data in an algorithm, for example, is disproportionately denying, let's say, healthcare coverage to protected classes of patients, that's a problem, right? So, those are the types of things companies should really be focused on: making sure that whatever AI tools they use, there's nothing that could be either deceptive or unfair.

The unfairness standard, by the way, is so vague and so broad that we see states use it all the time to say, "Well, that's just an unfair business practice," and it tends to be a lower burden. So, it's important that companies understand it doesn't even have to be deceptive. If it just unfairly impacts certain groups, that could be enough to trigger a potential violation, and companies want to avoid that. Even though we would defend against something like that and vigorously try to prove the state wrong, it's better to be proactive on the front end when deploying these tools, to avoid the inquiry altogether if you can.

Brett Mason:

Another thing that I find really interesting in what you're talking about here is that liability under these statutes could attach both to the developer of the AI software, if they are not taking into account these biases and this unfairness, and also to the deployer, whoever buys the software and is using it. It's kind of a multi-layered issue, and that takes us to this Texas AG settlement. Chris, why don't you tell us about that? What happened there? What company was involved? And what were the different issues at stake?

Chris Carlson:

Absolutely. But I want to go back to your point about liability potentially moving up the supply chain. State AGs, under their UDAP statutes, can take action against anything done in connection with the sale of goods or services, and "in connection with the sale" can reach up the supply chain. You see that often in price gouging, where they go after the manufacturers of eggs, for instance, not the Walmarts of the world, but the egg sellers at the beginning of the supply chain. I definitely expect the same here, where states are not just going to say, "Okay, this is the end point of sale, or this is the end representation to a business, and that is all we are going to be scrutinizing."

And to your point about the importance of healthcare: government regulators and private plans are analyzing AI in the healthcare context with a level of scrutiny that non-healthcare types of data may not receive. Because, as you're saying, there are so many areas of data, and so much PII is potentially at issue. That's why it's not surprising that, while the Texas AG is referring to this as a first-of-its-kind settlement, the settlement, with a company called Pieces Technologies, involves healthcare. While the issues in this investigation and subsequent settlement probably are not unique to healthcare, it's not surprising that this is where the state of Texas decided to take its first action.

So, a little groundwork on this settlement with Pieces Technologies. It's a health systems company, a startup, that offers products and services used by inpatient healthcare facilities. As part of its portfolio of offerings, Pieces offers a product that uses generative AI to summarize, chart, and draft clinical summaries from electronic health records. I think we've all thought about this being an advancement in healthcare for a long time. What's AI going to do? Well, it's going to summarize large sets of data related to patients, and it's going to spit out something that is more useful than the charts that my wife, the trauma ICU nurse, has been scribbling on every day.

But regarding the output of those clinical data summaries, Pieces Technologies made advertisements asserting that the company's generative AI model had low levels of what they refer to as critical hallucinations. I'm sure you could explain that even better than I could, so I'm not even going to try. But at the end of the day, that was a representation, and the entire Texas settlement hinges on that representation. Was it truthful and accurate?

So, while I'll stop and let you speak to the merits of hallucinations, what Texas is focusing on is: are you offering a product that is truthful and accurate about what it says it can accomplish?

Michael Yaghi:

Can I add something real quick? Those hallucinations, which a lot of listeners probably know about, are when the AI tool spits out incorrect, misleading, or illogical information, right? It's the incorrect information that the AI tool or system generated. I just wanted to make sure people understood what that meant.

Brett Mason:

We can understand again, in the healthcare context, why that is so important. What we're talking about here is documentation in patients' medical records that may be used later by other doctors to look at their clinical history and make decisions about diagnoses, prescriptions, allergies, all sorts of things. So, thinking about what this tool was developed to be used for by healthcare providers shows again how important it is to get it right. If that hallucination rate is higher than what the company said it was, a hospital or an inpatient facility may not want to use the tool, because they don't want to take the chance that their records will contain incorrect language that could then impact a patient's care later on down the road.

So, now that you've laid that groundwork, Chris, about what Pieces Technologies was saying about its tool and its rate of hallucinations, what did the state AG determine, and what did they find the violation to be?

Chris Carlson:

Before Mike hops onto this, I do want to note Pieces' response to the settlement and the press release that made those allegations. Pieces said that the Texas press release was wholly inconsistent with the settlement and its actions. I want to note that because, given the evolving nature of this area and regardless of how the business prohibitions go forward, I don't want to suggest that there was some level of guilt here, especially in a settlement that has a clear non-admission provision in it.

Brett Mason:

Just to put a finer point on that for our listeners, who can go check out the press release if they want to: what the Texas AG said in the press release was that it found that Pieces Technologies' metrics were likely inaccurate and may have deceived hospitals about the accuracy and safety of the company's products. So, what you're saying there, Chris, is that the company completely disagrees with that. Their position is that their tool is accurate and that they were not deceiving the hospitals in any way. Is that what you're saying?

Chris Carlson:

I would take that very seriously. Any time a company, when entering into a resolution, decides to issue a counterstatement to the media, I would take it very seriously. We want to make sure listeners know that.

Brett Mason:

Chris, let's talk about the settlement itself. What were the prohibitions that were included in the settlement agreement?

Chris Carlson:

So, there were two key provisions. The first, which shouldn't be surprising, is that the company has to describe its tool's performance in a clear and conspicuous manner. The second is a little more interesting: the settlement also requires Pieces to clearly disclose any known or reasonably knowable harmful or potentially harmful uses or misuses of its product or services. It doesn't relate as much to the allegations Texas brought forward, but what we are starting to see is Texas also enforcing a new statute called the SCOPE Act, and we very much expect it to have implications for AI. Maybe we'll talk about that on another podcast. We'll just wait a couple of months for Texas to have its next enforcement action.

Brett Mason:

So, that's all very helpful on Texas. Thanks, Chris. Mike, what are you seeing state attorneys general do in other states?

Michael Yaghi:

Yes, I think the Texas settlement is a good example to other jurisdictions of how they could use their broad UDAP authority. To Chris' earlier point, in Massachusetts, for example, the attorney general issued an advisory essentially saying, "We're going to use our UDAP laws to regulate misuses of AI, misrepresentations, unfairness." So, it's not just Texas, right? Other states are making similar announcements. That advisory came out, I think, in the spring of 2024, earlier this year.

The Colorado legislature also passed a new AI law, effective February 1, 2026, that basically requires AI developers to use reasonable care to protect consumers from any known or foreseeable risks of algorithmic discrimination. Again, many states and commentators may say, "You didn't really need a law like that," because unfairness and deception are vague, undefined terms. We make the arguments both ways, right? States will argue that those terms cover anything and everything; we would argue that they don't, and that there are limitations on the scope of those broad powers. But Colorado is clearly saying to the marketplace, "Hey, we have certain AI expectations as well," and the Colorado Attorney General will have enforcement powers once the law takes effect on February 1, 2026.

So, states are going to continue down this path. I think they're clearly not going to wait for their state legislatures to pass similar laws, and they're not going to wait for the federal government. We've seen a lot of media reports over the last couple of years, with politicians and media covering federal statements and comments about the need to regulate AI and different ways to do so. But no regulator is just going to wait for a new law.

Like Chris noted earlier, at the National Association of Attorneys General conferences, states are making it very clear: "We're going to rely on our state UDAP powers to regulate anything we perceive in the marketplace as a misuse of AI." And they're going to do that especially, I think, in the healthcare context. As we've talked about, states are focused on that. They want to make sure that AI is accurate and that there aren't any unintended consequences. There's no real room for mistakes in the healthcare context, especially when patients and medical care are involved. It's important that companies look at how they're using AI.

I know it's exciting, and it is exciting. Companies should use AI to improve operational efficiencies and analyze large volumes of data. They just need to be aware of the potential pitfalls, even ones they never intended, that can come from using these tools. And it is important that companies understand states are definitely going to rely on these broad powers to continue to regulate this area.

Again, we've seen it in many contexts, like we said: data privacy, opioids. AI is the new technological frontier, and the states are going to be right there, side by side with the industry deploying these tools, to make sure, at least from their perspective, that they're deployed in a fair, effective way that doesn't adversely impact patients and the public.

Brett Mason:

A lot of times we say that the law lags behind the advancement of technology, but I guess you all would agree with me that, at least in the view of the state attorneys general, the law is already in place for them to use to regulate and take enforcement actions on this new artificial intelligence technology.

Michael Yaghi:

Definitely. I mean, we would push back, right? We defend against those claims vigorously, as we do for our clients. But we're here to let listeners know that that's definitely the position the states and the attorneys general are taking, and they're increasingly doing so.

Chris Carlson:

Yes. Well, I may be the first to wince every time I see a press release that says "first of its kind" or "novel." You should expect more of that from state AGs.

Brett Mason:

Well, thanks so much for joining me, Mike and Chris. I really appreciate the expertise you both bring to the table regarding how the state attorneys general are dealing with artificial intelligence. I really enjoyed having you both today.

Michael Yaghi:

Thank you very much. It was great being here, and it was a pleasure to join you.

Chris Carlson:

Thanks, Brett.

Brett Mason:

Thanks also to our listeners. Please don't hesitate to reach out to me at brett.mason@troutman.com with any questions, comments, or topic suggestions. You can also subscribe and listen to other Troutman Pepper [Locke] podcasts wherever you listen to your podcasts, including Apple, Google, and Spotify.

Copyright, Troutman Pepper Locke LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper Locke. If you have any questions, please contact us at troutman.com.