The Good Bot: Artificial Intelligence, Health Care, and the Law

AI in the Operating Room: Liability Issues for Device Makers

Episode Summary

Brett Mason, Eric Rumanek, and Frederick King discuss the incorporation of AI software into medical devices used during surgical procedures.

Episode Notes

Join Troutman Pepper Partner Brett Mason for a podcast series analyzing the intersection of artificial intelligence (AI), health care, and the law.

In this installment, Brett Mason is joined by Partner Eric Rumanek and Associate Frederick King to discuss the incorporation of AI software into medical devices used during surgical procedures. As health care providers actively lobby to shift liability from themselves to medical products that use AI software, several potential liability issues may arise for the manufacturers of these products. This episode examines the product liability claims such devices may face and the design questions manufacturers should weigh when incorporating AI software into their products.

Episode Transcription

The Good Bot: Artificial Intelligence, Health Care, and the Law — 
AI in the Operating Room: Liability Issues for Device Makers
Host: Brett Mason
Guests: Eric Rumanek and Frederick King
Recorded: 5/1/24

Brett Mason:

Welcome to The Good Bot, a podcast focusing on the intersection of artificial intelligence, healthcare, and the law. I'm Brett Mason, your host. I'm a trial attorney at Troutman Pepper, so my primary focus is on litigating and trying cases for life sciences and healthcare companies. However, I am a tech nerd, as my colleagues know. So, I'm also deeply fascinated by the role of technology in advancing the healthcare industry.

Our mission with this podcast is to equip you, the listener, with a comprehensive understanding of artificial intelligence technology, its current and potential future applications in healthcare, and the legal implications of integrating this technology into the healthcare sector. If you need a basic understanding of what artificial intelligence technology is and how we're currently seeing it integrated into the healthcare industry, I highly recommend you start by listening to the very first episode of this podcast. We were lucky enough to have Morgan Hague from Meditology Services on to help us understand some of the tech that's being used and where the tech may go. In that episode, we really lay the groundwork for understanding artificial intelligence technology.

I'm really excited for this episode, because I get to be joined by two of my colleagues that I work with regularly here at Troutman. On the pod today, we have Eric Rumanek, who is a partner at Troutman Pepper in the healthcare litigation space. He is a well-known trial attorney who has worked on several trials with me in the past few years for healthcare clients, and specifically, medical device manufacturers.

So, Eric, welcome, really excited to have you.

Eric Rumanek:

Thanks. Excited to be here.

Brett Mason:

We also are joined by our colleague, Frederick King, who is an associate in our health sciences litigation group as well. Frederick has had the joy, or maybe the stress, of going to trial with Eric and me, and the topic we're going to be talking about today is somewhat associated with a trial that we worked on in the past year. So, Frederick, welcome. Glad to have you.

Frederick King:

Yes. Happy to talk about this with you.

Brett Mason:

So, I've roped Eric and Frederick into my fascination with artificial intelligence. Frederick, we're seeing a lot of discussion in the news, especially in the past year, because of the advent of generative artificial intelligence in several industries: the legal industry, the automotive industry, manufacturing. So, what are some of the ways we've started to see artificial intelligence being incorporated into the medical device industry so far?

Frederick King:

Sure. So, AI, despite its newness, has really taken off in a couple of industries so far. You'll see this often, especially in the automotive industry, with driver-assistance tools. AI works to analyze the driver's surroundings and helps them avoid collisions or move properly with the flow of traffic.

In the medical device industry, we're still in the early stages. So, AI is playing more of an information-gathering role right now. One of the main ways AI is being implemented is in training medical students or new physicians on certain procedures. AI is great at collecting hundreds or thousands of records of procedures as they are being carried out, and it's able to distill all of this data down into models, so to speak, that help new physicians or students understand the safest and most efficient way to go through a procedure. This really helps these students and physicians carry out procedures safely and efficiently, no matter how big or small a hospital they work in.

Brett Mason:

I agree. I think that's one of the interesting ways we're seeing artificial intelligence being used. Another thing we're seeing, very similar to what you're talking about with information gathering and drawing conclusions from information, is diagnostic medical imaging tools. Right now, of the FDA-approved medical devices that have artificial intelligence software, the vast majority are radiology or medical diagnostic tools performing a range of functions.

What we wanted to talk about today is the incorporation of artificial intelligence software into medical devices that are used during surgeries. And Eric, we started talking about this because of some experiences we've been having. So, could you explain why we wanted to look into this issue and start thinking about the different legal implications that may arise with medical devices using artificial intelligence software in surgical procedures?

Eric Rumanek:

Absolutely. So, obviously, we had a trial where a medical device manufacturer’s employee was present during a surgical procedure. We've seen where technology improvements have really driven a lot of surgical innovations. In many instances, the employee of the manufacturer or the representative of the manufacturer is there during the surgery, monitoring the technology.

In this trial, you had a healthcare team that was doing the surgery, performing the nursing functions and things like that. But you had a manufacturer's representative who was there monitoring technology. Then, when things are missed during a procedure or when red flags end up being seen, it becomes a question of, "Okay, who's responsible for that? Is it the healthcare team that's overall responsible for the patient's safety, or the manufacturer's representative who's there overseeing technology?" As you have technology growing, you have monitoring and warning growing, and obviously, AI is going to step right into the middle of that space. We've seen it in a trial, not yet with AI, but you can see that's the direction it's headed.

Brett Mason:

I think not only are we thinking about it from a medical device manufacturer perspective, but there are plenty of discussions occurring with the American Medical Association and healthcare providers. They are looking at what is going to happen once they start using these types of tools, and who is going to be liable if a tool isn't working the way it's supposed to. We're already seeing reports that the AMA is lobbying Congress to protect healthcare providers if they're using tools that incorporate artificial intelligence software.

So, Frederick, can you talk to us about some of these tools, how they work? What are they providing? What is the artificial intelligence software actually doing in the surgical tools that may come to surgeries soon?

Frederick King:

Sure. So, as Eric mentioned, you sometimes have dozens of different streams of data that the surgical team is looking at during a procedure. You have the patient's vital signs. You have different parts of the procedure that the surgical team needs to watch, and they also have to pay attention to the tools they're using. AI enters these procedures by analyzing these different streams of data and distilling them down, walking the surgical team through the procedure from step A to step Z, making sure they're not missing anything in the middle, and giving them a heads-up in case things start to go sideways.
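[Editor's note: As a rough illustration of the multi-stream monitoring Frederick describes, here is a minimal, hypothetical Python sketch of a loop that fuses a few data streams and flags abnormal readings or unconfirmed procedure steps. Every name, threshold, and step in it is invented for illustration and is not based on any actual surgical device or vendor API.]

# Hypothetical sketch only -- all streams, thresholds, and steps are invented.
from dataclasses import dataclass

@dataclass
class Reading:
    stream: str    # e.g., "heart_rate" or "spo2"
    value: float

# Illustrative alert rules: flag a reading that falls outside a made-up range.
ALERT_RULES = {
    "heart_rate": lambda v: v < 40 or v > 140,
    "spo2": lambda v: v < 90,
}

# A drastically simplified "step A to step Z" checklist.
PROCEDURE_STEPS = ["incision", "resection", "closure"]

def monitor(readings, completed_steps):
    """Fuse incoming data streams; flag abnormal values and the next unconfirmed step."""
    warnings = []
    for r in readings:
        rule = ALERT_RULES.get(r.stream)
        if rule and rule(r.value):
            warnings.append(f"Abnormal {r.stream}: {r.value}")
    for step in PROCEDURE_STEPS:
        if step not in completed_steps:
            warnings.append(f"Step not yet confirmed: {step}")
            break  # only surface the next expected step
    return warnings

if __name__ == "__main__":
    demo = [Reading("heart_rate", 155.0), Reading("spo2", 97.0)]
    for w in monitor(demo, completed_steps={"incision"}):
        print("WARNING:", w)

[In a real device, the rules would come from trained models rather than hard-coded thresholds, which is exactly where the warning and monitoring liability questions discussed below arise.]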

Brett Mason:

I think what's so interesting about that is, we hear a lot about large language models and machine learning, but what we're talking about with these devices is that they will analyze surgical data from the patient in real time and then make recommendations or give warnings to the healthcare team based on the data coming in during the procedure.

That is fascinating. We know how much data these devices can collect from any given patient, depending on the procedure. But when we shift the analysis of patient data to a device, and then expect that device to make recommendations for the next steps in the surgery or to provide warnings, liability starts moving away from the healthcare providers and toward the device. So, what are some of the liability considerations, Eric, that may arise when we see that shift in these medical devices?

Eric Rumanek:

Yes. Brett, I really think it's kind of a slippery slope. From my perspective, these are very well-intentioned ideas that companies are trying to implement. A phrase we hear tossed around a lot is that it's designed to be an "extra layer of protection." So, the idea is that you're not replacing the healthcare team; you're just an extra layer of protection in case something gets missed. But that becomes a really slippery slope when you're dealing with responsibility. That extra layer of protection can become a claim: "We were counting on it to do this; we relied on it to do this." And that's where the question arises: who's responsible if that gets missed?

Even if you're not fully responsible from a manufacturer perspective, are you partially responsible? We see it all the time in lawsuits where a manufacturer may be named as a defendant and have to incur the expenses of litigation just because they're that extra layer of protection. So, it really can become a slippery slope when you get into warning and monitoring issues.

Brett Mason:

Now, Frederick, I want you to give us a little bit of background. What are some failure to warn type claims that a manufacturer may face if they do have a medical device where they've incorporated artificial intelligence software designed to help provide warnings or provide recommendations to a healthcare provider during a surgery?

Frederick King:

So, failure to warn is really focused on what the manufacturer is telling the physician about the AI's limitations and what it can do during the procedure. It focuses on how the medical device manufacturer informed the physician or the surgeon about the risks and limitations to be aware of. In the context of AI's use with medical devices, I think that really comes down to understanding what AI can do and what it can't do. AI is a fascinating tool, as you said, Brett, but it can't generate intelligence right now. So, it's important for medical device manufacturers to lay out for the surgical team what the AI can and can't do during the procedure.

Eric Rumanek:

Brett, let me just chime in real fast on failure to warn, because we've seen this: there can be affirmative warnings alerting the surgical team to certain information. But then there's also the absence of warnings. So, if a surgical team isn't properly trained and they don't understand some of the data that may be out there, you can also have a claim of, "Well, you should have told me that we need to be looking for this. You should have made this more prominently featured in what I'm seeing and visualizing." So, there are the affirmative warnings, but there's also the "you didn't tell us we needed to be looking for this" type of failure to warn.

Brett Mason:

I really appreciate that point, Eric. Again, it goes hand in hand with what we talked about before: the technology is becoming so advanced that we're seeing medical devices that are complicated, difficult to use in some respects, and hard to fully understand. So, if AI software is going to be incorporated into these types of devices, thinking proactively about how healthcare providers will be trained to use them, and what materials will be available to teach healthcare providers about the limitations and functionality of the artificial intelligence software, is going to be key to protecting medical device manufacturers from this type of liability.

Another type of claim we see a lot in the cases we handle for medical devices is the design defect claim. So, Frederick, can you give us a little background: what is a design defect claim? And what are some of the things medical device manufacturers should be thinking about from a design defect perspective when they're looking to incorporate artificial intelligence into medical devices used during surgeries?

Frederick King:

So, design defect claims are somewhat related to failure to warn claims, but they focus more on how the device itself was designed. The claim is well-named: it focuses on the design of the device, what issues or defects may arise from that design, and whether those defects can result in adverse events during the procedure. I think it could show up in the AI world because, as we discussed, there are AI tools that can analyze different streams of data during a procedure, distill them down, and then warn the surgical team about something that popped up or about abnormal data. If a surgeon expects the AI device to warn them about a potential adverse event and the device doesn't do that, that could result in a design defect claim.

Eric Rumanek:

One area of potential liability from a design perspective that I think is interesting is patient selection. We all know that surgeons can use devices any way they see fit. A device may be cleared or approved by the FDA for certain indications, but doctors sometimes will use devices in the way they think is best for a particular patient. Suppose AI is analyzing a patient's vital statistics and background information to assess whether they may be an appropriate candidate for a procedure, or for a particular use of a medical device. If something then goes wrong, is there a claim that, even though a doctor decided to use the device that way, the device itself should not have allowed it, or should have provided specific warnings or modifications to prevent an adverse event, even though that use wasn't specifically indicated?

Brett Mason:

A lot of that goes back to a theme I'm seeing across these episodes: artificial intelligence is only as reliable as the data it is built on. Any of these devices will be built on data so that we know they are making the right recommendations. So, what you're describing, Eric, is that if the data the AI was trained on is focused on a specific set of patients, which is the population the medical device was approved for, and a doctor then uses that medical device on a different type of patient, the AI may not have been trained on that kind of patient. The AI may not have any data to pull from for that patient.

So again, from a manufacturer perspective, would you think, Eric, that it has to be very clear in the labeling and in the description to the medical staff that these are the patients this product's AI has been trained to provide recommendations for, and that using it with other patients may not produce the best recommendations? Do you think something like that would need to be included?

Eric Rumanek:

I mean, I think that's what you try to do. But would that help in a litigation context? Sure. It just doesn't keep you out of the litigation. And once you're in the litigation, especially with disclosure or warning-type claims, it's always, "Well, yes, it was in there, but you could have said it more clearly." Or, "It should have been in a larger font and boxed." Or, "Well, yes, it says that in the label, but the rep never told me that, and we trusted them." So, I do think it's a good step to take. The more you can document these things in writing, maybe in purchase agreements and contracts, in labeling, anywhere you can document things in writing, the more it helps. But is it a silver bullet that will keep you out of litigation? I don't know that it goes that far.

Brett Mason:

Well, we certainly don't have any case law on that, because we are at the very beginning of all this, and I'll be interested to see what happens over the years. I'm sure we'll be seeing plenty of litigation around it.

Eric Rumanek:

I do think one thing the listeners need to keep in mind is that manufacturers are much further ahead in terms of technology development than the medical teams are. These doctors are treating patients, performing surgeries, and trying to keep up with the medical literature. The nurses have their own tasks and responsibilities. So, in my experience, technology is moving a lot faster than doctors' offices are. There seems to be a larger and larger reliance on companies not only to understand the technology, but to communicate what the doctors need to know to the doctor and the medical staff. That makes it a really difficult situation when you get into the litigation context and the doctor's office is saying, "Well, I trusted them to tell me the important things I needed to know."

Brett Mason:

Absolutely. I agree. There are a few other things we thought could be helpful for medical device manufacturers to consider when incorporating artificial intelligence. Those two things are confidence scores and citations. Frederick, could you explain what those are, starting with confidence scores and then citations, and how incorporating them could help medical providers use these devices more effectively and safely?

Frederick King:

Sure. So, confidence ratings, I think, are a very valuable way to bring the surgeon back into the loop on how reliable the AI tool's output is. With confidence ratings, the AI, of course, takes in vast amounts of data and distills it down into recommendations: here's a potential risk, or here's where I think we should go next.

The AI, of course, doesn't know everything at all times. So, it could say, in effect: "I have looked at this data. I've looked at the patient's vital signs, and I've seen through the cameras where we are in the procedure. I think there is a potential risk coming up, a potential blood clot, so to speak, or blood loss. But I'm not so confident in that, because I'm not the surgeon; I'm just AI." The AI could then attach a confidence rating to that warning. Maybe it's only 50% confident. That helps the surgeon understand: "Okay, the AI is suggesting this, but it's not very confident. Let me think about whether this recommendation or this warning is actually applicable here."

If it's more confident, though, if it says, "All right, during this heart transplant surgery, we're getting to the point where we're about to switch out the old heart for the new heart, and we should really be careful to avoid any potential infection. I'm 95% confident that there's a high risk of infection here," the surgeon will obviously think, "All right, the AI knows what it's talking about here; it has a lot of data and it's very confident," and will be more likely to rely on the AI there.
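[Editor's note: To make the confidence-rating idea concrete, here is a minimal, hypothetical Python sketch of a recommendation object that carries a confidence score, which changes how the message is framed for the surgeon. The thresholds and wording are invented for illustration, and, as the discussion that follows points out, a model's confidence in its own output is not the same thing as the probability of the underlying event.]

# Hypothetical sketch only -- thresholds and categories are invented.
from dataclasses import dataclass

@dataclass
class Recommendation:
    message: str       # e.g., "Elevated infection risk at this step"
    confidence: float  # model's confidence in its own output, 0.0 to 1.0

def present(rec: Recommendation) -> str:
    """Frame the recommendation so the surgeon can weigh how much to rely on it."""
    if rec.confidence >= 0.9:
        prefix = "HIGH-CONFIDENCE ALERT"
    elif rec.confidence >= 0.5:
        prefix = "Advisory"
    else:
        prefix = "Low-confidence note (verify independently)"
    # Note: confidence describes the model's certainty, not the event's probability.
    return f"{prefix}: {rec.message} (model confidence {rec.confidence:.0%})"

print(present(Recommendation("Possible blood loss developing", 0.50)))
print(present(Recommendation("High risk of infection at transplant step", 0.95)))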

Eric Rumanek:

Frederick, here's the example that comes to my mind. Again, I think all of these things are very well-intentioned. But you see this now if you're tracking sports on your phone: at any given point in a game, there will be a win predictor. It may say, "Well, at this point, this team has a 95% win probability." You're very confident in that because you see the win probability. But then we've all watched games where there's some miraculous comeback, right? And sometimes in the span of seconds, you go from a 95% win probability to the game going the other way.

Does that mean the probability was wrong? No. But when the actual outcome isn't the most probable outcome, and you then get into litigation where people were relying on the data to make decisions, you can have the most well-intentioned, well-designed data and still be taking on additional liability risk by putting it out there.

Brett Mason:

Right. What comes to my mind with these confidence ratings or confidence scores: what we're saying here is the AI says, "Hey, I'm 40% confident that this adverse event might occur soon." That could later be read as, "Well, I thought that meant there was a 40% risk of that adverse event." That's not what the AI was saying. The AI is saying, "Based on the data I have, and what I'm seeing, I am 40% confident that this is a possible adverse event."

Just like in any industry, we have less skilled or less experienced practitioners. A newer medical school graduate might be prone to rely more on an artificial intelligence recommendation than a doctor who's been practicing for 30 years. This is something the AMA is talking about: they want doctors to keep using their own independent clinical judgment and making decisions for patients, even when they have a recommendation from an AI-assisted medical device. I think that's something the manufacturers also need to continue to support.

At the end of the day, what's best for the patients is what both medical device manufacturers and the healthcare providers have in mind. Using the strength of the technology, combined with the independent clinical judgment and assessment of the healthcare providers in the room, is what's going to get to the best outcomes for patients. Over-reliance on either is going to be a problem, I think, and finding that balance between the two is the tension that we have in these medical devices. What do you guys think about that?

Eric Rumanek:

So, the only thing I would say is, I do think there's a tension there. But it feels to me like manufacturers are stepping into a risk area that was previously occupied solely by the medical team. Companies used to provide a label with risks and instructions, and doctors might be trained, but during the surgery, during the procedure itself, the medical team was there making the decisions.

Again, I think, does it lead to better outcomes for patients? Probably so. I think that's probably a good thing. But does it also lead to the potential for manufacturers to step into liability somewhere that previously they didn't? I think that risk is there as well.

Frederick King:

Definitely. I think that AI has a great way of helping surgeons or helping physicians with their procedures. It certainly won't be replacing them anytime soon.

Brett Mason:

Well, that's good. We're not going to see any robotic surgeries performed entirely by artificial intelligence anytime soon; I don't think so. For all of our doctor friends and colleagues, your jobs are safe. Well, Eric, Frederick, thanks so much for joining me. This has been a great conversation. I would love to have you both back on future episodes to talk more about what medical device manufacturers should be thinking about as they incorporate artificial intelligence into their products.

For our listeners, please don't hesitate to reach out to me if you have questions, comments, or topic suggestions. You can reach me at brett.mason@troutman.com. If you have complaints, maybe don't reach out, but I'll still read them. If you want to subscribe to this podcast, you can do so wherever you like to listen to podcasts: Apple, Google, or Spotify. There are other Troutman Pepper podcasts as well, if you're interested in listening. We're going to continue putting out episodes on all sorts of issues touching on artificial intelligence, its legal impacts, and how that will play out in the healthcare space. So, thanks for joining us today, everyone.

Copyright, Troutman Pepper Hamilton Sanders LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman Pepper does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper. If you have any questions, please contact us at troutman.com.