The Good Bot: Artificial Intelligence, Health Care, and the Law

The FDA's Response to AI Medical Innovation

Episode Summary

Brett Mason, Judy O'Grady, and Kyle Dolinsky discuss the FDA's approach to regulating AI in health care.

Episode Notes

Join Troutman Pepper Partner Brett Mason for a podcast series analyzing the intersection of artificial intelligence (AI), health care, and the law.

In this installment, Brett is joined by Partner Judy O'Grady and Associate Kyle Dolinsky to discuss the FDA's approach to regulating AI in health care. Topics include AI's current and potential applications in medical devices and drug development, the FDA's proactive stance on AI innovation, and evolving regulatory frameworks.

Episode Transcription

The Good Bot: Artificial Intelligence, Health Care, and the Law — The FDA’s Response to AI Medical Innovation
Host: Brett Mason
Guests: Judy O’Grady and Kyle Dolinsky
Recorded: 5/8/24

Brett Mason:

Good afternoon, everyone, and welcome to The Good Bot, a podcast focusing on the intersection of artificial intelligence, healthcare, and the law. I'm Brett Mason, your host. As a trial attorney here at Troutman Pepper, my primary focus is on litigating and trying cases for life sciences and healthcare companies. However, as a self-proclaimed tech enthusiast, I am also deeply fascinated by the role of technology in advancing the healthcare industry.

Our mission with this podcast is to equip you, our listeners, with a comprehensive understanding of artificial intelligence technology, its current and potential future applications in healthcare, and the legal implications of integrating this technology into the healthcare sector. If you need a basic understanding of what artificial intelligence technology is and how it's being integrated into healthcare, I highly recommend you start by listening to the first episode of this podcast.

In that episode, we lay the groundwork for understanding the technology that is the basis for all of our discussions. I'm excited to welcome two of my colleagues from Troutman Pepper who are joining us today: Kyle Dolinsky and Judy O'Grady. Judy, why don't you go ahead and introduce yourself and tell us a little bit about your expertise?

Judy O'Grady:

Thanks, Brett. Thanks so much for having us. I'm Judy O'Grady, a partner in Troutman Pepper's Washington, DC office, and I lead the firm's FDA regulatory team, focusing primarily on drugs, biologics, and medical devices.

Brett Mason:

Fantastic. And Kyle, why don't you introduce yourself as well?

Kyle Dolinsky:

Thanks, Brett. I'm happy to be here. My name is Kyle Dolinsky. I'm an associate at Troutman Pepper in our Philadelphia office. My practice consists of FDA regulatory counseling in the food, drug, biologics, and medical devices space. I do fraud and abuse counseling as well, and I do litigation in the life sciences space.

Brett Mason:

So, as you can probably tell from Judy and Kyle's descriptions of their expertise, today we are talking about the Food and Drug Administration's take on artificial intelligence. Specifically, we'll look back at what the FDA has said and ahead at what it's anticipating regarding the use of artificial intelligence in this area. So, let's talk about where we might expect to see artificial intelligence in the FDA regulatory context. Kyle, could you start us off with that?

Kyle Dolinsky:

Sure, Brett. AI has been and will likely continue to be incorporated into medical devices. That includes everything from diagnostic tools to therapeutic devices. AI can help analyze medical images or predict patient outcomes based on their health data.

Brett Mason:

That's definitely one of the things that we're seeing for some of the products that the FDA is looking at. But Judy, what about the development process for drugs and biologics? Is that something that we're anticipating seeing AI show up in?

Judy O'Grady:

Yes, definitely. AI has a significant role there as well. It's being used to streamline the drug development process, from drug discovery to clinical trials. AI can help identify potential drug candidates, predict how a drug will interact with the body, and even help design clinical trials.

Brett Mason:

I think that's so fascinating, Judy. One of the things I've been seeing in the news is that the use of this technology in clinical trials can actually speed up the process and thereby make it less expensive. Is that something you anticipate happening as artificial intelligence is used in that process?

Judy O'Grady:

I think that's absolutely correct. A good example: previously with diagnostics, you had to look for a specific marker, determine that the marker was associated with the disease, determine how prevalent that marker was in the disease population, and then build your diagnostic device from that. Now, you can take a profile of the biomarkers in a patient or subject's sample and readily compare it to all of the other profiles in your database, coming up with a host of biomarkers that are associated with the disease, at different levels of association, based on that simple analysis the technology is doing for you. I do think that it could in some instances speed up the process. Obviously, there will be a learning curve on the FDA side to get more comfortable with this. But yes, in the long run, I think we'll see some efficiencies.
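
For technically minded listeners, here is a minimal sketch, not from the episode, of the kind of biomarker-association screen Judy describes: comparing measurements across disease and control samples and ranking markers by strength of association. All names, data, and the choice of a simple t-test are hypothetical and purely illustrative; a real diagnostic pipeline would be far more involved.

```python
# Illustrative sketch: rank biomarkers by how strongly they separate
# disease samples from controls. Data and thresholds are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical profiles: 200 samples x 50 biomarkers, plus disease labels.
n_samples, n_markers = 200, 50
profiles = rng.normal(size=(n_samples, n_markers))
labels = rng.integers(0, 2, size=n_samples)  # 1 = disease, 0 = control
profiles[labels == 1, :5] += 1.0  # make the first 5 markers informative

def rank_biomarkers(profiles: np.ndarray, labels: np.ndarray):
    """Score each biomarker with a two-sample t-test between groups."""
    disease, control = profiles[labels == 1], profiles[labels == 0]
    t_stats, p_values = stats.ttest_ind(disease, control, axis=0)
    order = np.argsort(p_values)  # strongest association first
    return [(int(i), float(t_stats[i]), float(p_values[i])) for i in order]

# Print the five markers most associated with the disease label.
for marker, t, p in rank_biomarkers(profiles, labels)[:5]:
    print(f"biomarker {marker}: t={t:.2f}, p={p:.3g}")
```

The point of the sketch is the shift Judy highlights: rather than validating one pre-selected marker, the analysis scores every marker in the profile against the whole database at once.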

Brett Mason:

Wow, that's really fantastic. Kyle, can you tell us what the FDA has been saying about AI? Have they been looking at this longer than the rest of us, who only took notice with the advent of ChatGPT last year?

Kyle Dolinsky:

The FDA has actually been pretty proactive in addressing AI. They touched on AI in detail for the first time in a 2019 discussion paper. There, they recognized the potential of AI and machine learning in healthcare. The FDA sees it as a tool that can help deliver safe and effective software functionality and improve the quality of patient care.

Judy O'Grady:

That's right, Kyle. Much of the focus in the 2019 paper was on existing policies tailored for software as a medical device, but not necessarily specific to AI. The FDA at the time acknowledged that it would have to reimagine its approach to premarket review of AI and machine learning-driven devices, including any software modifications. It also acknowledged that there would have to be a balance between maintaining reasonable assurance of safety and effectiveness, while also allowing AI and machine learning-based software as a medical device to continue to learn and evolve. That reasonable assurance of safety and effectiveness is key, because the FDA is keeping the same standard it uses for other devices, and the vast majority of devices get on the market through the 510(k) pathway. In that paper, the FDA did propose a total product life cycle approach, and that would be the regulatory framework that would help achieve this balance.

Kyle Dolinsky:

The FDA also emphasized the importance of a predetermined change control plan in its proposed regulatory framework. This plan would include the types of anticipated modifications based on retraining and model update strategies, the methodology used to implement those modifications, and procedures for managing the risks associated with those modifications.

Brett Mason:

Sounds like in 2019, the FDA was doing a lot of looking forward and thinking about what they were going to need to do. But now, here we are in 2024, and we're seeing a whole lot more going on with generative AI in every industry. So, has the FDA come out with some new discussion papers or new guidance that folks should be aware of?

Judy O'Grady:

Yes, they have, Brett. The FDA's most recent paper, published in March of 2024, describes the FDA's strategy for incorporating AI into medical products, reflecting its commitment to foster innovation while safeguarding patient health. To do so, the FDA identifies four priorities for the development and use of AI across the medical product life cycle. The first is fostering collaboration to safeguard public health; that means the public and applicants collaborating with the FDA, and with other third-party entities with expertise in the space. The second is advancing the development of regulatory approaches that support innovation. The third is promoting the development of standards, guidelines, and best practices; that's where the third-party accrediting entities we see in other spaces, which the FDA sometimes relies on, could play a big role here. And the fourth is supporting research to evaluate and monitor AI performance.

Kyle Dolinsky:

The FDA noted its intent to work with third parties, including developers, academic institutions, foreign regulators, and others, soliciting input on subjects like algorithmic bias, transparency, cybersecurity, and quality. The FDA will also focus on educational programs geared toward industry about the safe and ethical use of AI in medical product development.

Judy O'Grady:

The FDA also stated in its most recent paper that it intends to continue building on its existing good machine learning practice guiding principles. It will also use real-world performance analytics to monitor the performance of AI and machine learning-based software as a medical device in real-world settings. This would involve the collection and analysis of data related to device performance, user feedback, and other relevant information.
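
To give a rough sense of what real-world performance monitoring could involve in practice, here is a minimal sketch, again not from the episode or any FDA document, of a rolling accuracy check that flags a deployed model whose performance drifts below a baseline. The class, window size, and tolerance are all hypothetical.

```python
# Illustrative sketch: a toy real-world performance monitor for a deployed
# AI device. All thresholds and names are hypothetical.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline_accuracy: float, window: int = 500,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, prediction_correct: bool) -> None:
        """Log whether a real-world prediction turned out to be correct."""
        self.outcomes.append(1 if prediction_correct else 0)

    def degraded(self) -> bool:
        """True when rolling accuracy falls below baseline - tolerance."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough real-world data collected yet
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance

monitor = PerformanceMonitor(baseline_accuracy=0.92)
monitor.record(True)
print(monitor.degraded())
```

A production system would track richer signals, such as user feedback and complaint data of the kind Judy mentions, but the basic idea is the same: compare observed performance against the performance the device demonstrated premarket.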

Kyle Dolinsky:

Lastly, in the March 2024 paper, the FDA stressed the importance of post-market surveillance in its proposed regulatory framework. The agency believes that post-market surveillance can help detect and address any issues that might arise after a device has been marketed. This includes the use of both active and passive surveillance systems.

Brett Mason:

This is a lot. Kyle, just for our listeners who are not as up to speed on the regulatory framework as you and Judy are, can you explain what you mean by post-market surveillance?

Kyle Dolinsky:

Whether a device is approved through the premarket approval process or cleared through the 510(k) clearance process, manufacturers have what are called post-market surveillance obligations to report to the FDA any adverse events in what's called a medical device report.

Judy O'Grady:

Also, manufacturers have to evaluate what are called product complaints. For a traditional device, that might be something like "the camera on my X-ray machine broke." With respect to AI and machine learning, I think that's going to open up a whole host of issues in terms of what a product complaint looks like with this technology, and how those complaints can be addressed, because of how unique it is.

Brett Mason:

That makes a lot of sense. How can the FDA really know exactly what it needs to look at ahead of time until we have more experience with these devices? There are already medical devices with AI software that have been approved by the FDA; I think most of those currently are diagnostic tools used in radiology settings. I'll be curious to see what post-market surveillance looks like for those already on the market. It's really clear that the FDA is taking a comprehensive approach to regulating AI in healthcare, not only considering the safety and effectiveness of these devices, as we would expect it to do, but also how they can be improved and modified over time to better serve patients.

That brings us to looking to the future. What are some of the major issues that the FDA will have to address based on what we've seen so far, Kyle? Why don't you let us hear your thoughts on that?

Kyle Dolinsky:

I'm most interested in knowing how the FDA will approach substantial equivalence in the context of medical devices seeking 510(k) clearance. Like I mentioned just a minute ago, there are a few different pathways for medical devices to get on the market. The traditional idea of FDA approval really applies to premarket approval applications, but there are other ways too, and one of those is 510(k) clearance, which requires the FDA to determine that your device is substantially equivalent to another, predicate device. In the context of AI-powered devices, the question is: how does one determine substantial equivalence? Can a device running on, for example, an open-source GPT model be substantially equivalent to a device running on a proprietary machine learning platform?

Relatedly, we have to ask whether we're going to see a disproportionate share of AI devices being approved through the PMA or de novo pathways, as opposed to through a 510(k), which, like I said, is how most devices in the non-AI context are getting on the market today. That's what I'm going to be keeping an eye on.

Judy O'Grady:

Kyle, that's actually a very good point, because when the FDA released its most recent paper on AI in March of 2024, the one we've been discussing, it also released that same month a draft guidance document on the 510(k) process and cybersecurity, that is, for devices that raise cybersecurity considerations. There, they were looking at the same question you raised: when a cybersecurity plan can be very, very different and unique from company to company, what would the FDA look at, and be satisfied with, such that it would say a device was substantially equivalent?

In that guidance document, I think there are some suggestions of what we might see with respect to AI. But it really speaks in terms of broad things to be thinking about and broad categories of documents that the FDA would require. So, there's still a lot to be learned in the cybersecurity space and certainly in the AI space. I think one of the major issues I see is that the FDA is always competing for talent, and this is another area where it's competing for talent. This is highly specialized knowledge, and people with it can often go to the private sector and be paid a lot more. So, the FDA is clearly envisioning that collaboration is 100% necessary to be successful in this space, because it's going to have to rely on not only the applicants but also some third-party entities that have the knowledge and are willing to, for example, set standards, or work with the FDA in assessing what a reasonable policy for substantial equivalence is. I think that'll be a big challenge, not a new one, but a particularly big one in this unique space. So yes, that's something that I'll be watching, and hopefully, everyone will be up to the challenge and willing to collaborate.

Brett Mason:

That's so much to consider. I really appreciate you guys, Kyle and Judy, for shedding light on this complex issue. Clearly, as the FDA addresses these issues, we're going to have to have you back on the podcast to talk about it. Thanks to our listeners. As always, please don't hesitate to reach out to me at brett.mason@troutman.com if you have questions, comments, or topic suggestions. We're going to do our best to link the different resources we discussed in today's podcast in the show notes. You can also subscribe and listen to this podcast and other Troutman Pepper podcasts wherever you listen to your podcasts, including on Apple, Google, and Spotify. We will see you next time.

Copyright, Troutman Pepper Hamilton Sanders LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman Pepper does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper. If you have any questions, please contact us at troutman.com.