In this episode of The Good Bot, Brett Mason and Emma Trivax delve into current efforts to legislate artificial intelligence (AI). They explore the federal landscape of AI policy, including the newly established Bipartisan House AI Task Force and proposed federal legislation that would grant prescriptive authority for drugs to AI and machine learning technologies. Additionally, they discuss the Virginia High-Risk Artificial Intelligence Developer and Deployer Act and how states may legislate differently depending on the interests of their residents.
The Good Bot: Artificial Intelligence, Health Care, and the Law
Evolving AI Legislation: Federal Policies, Task Forces, and Proposed Laws
Host: Brett Mason
Guest: Emma Trivax
Recorded: March 26, 2025
Aired: May 20, 2025
Brett Mason:
Welcome to The Good Bot, a podcast focusing on the intersection of artificial intelligence, healthcare, and the law. I'm Brett Mason, your host. As a trial lawyer at Troutman Pepper Locke, my primary focus is on litigating and trying cases for life sciences and healthcare companies. However, as a self-proclaimed tech enthusiast, I'm also deeply fascinated by the role of technology in advancing the healthcare industry. Our mission with this podcast is to equip you with a comprehensive understanding of artificial intelligence technology, its current and potential future applications in healthcare, and the legal implications of integrating this technology into the healthcare sector.
I'm excited for our podcast today, where I'm joined by my colleague, Emma Trivax. The two of us are going to be discussing current, ongoing attempts to legislate around artificial intelligence. Thanks so much for being on, Emma.
Emma Trivax:
Happy to be here.
Brett Mason:
Let's just kick it off talking from the federal side. Obviously, we've had a change in administration, which could change the approach, but the current presidential administration is not the only part of the federal government that's looking at AI. Emma, could you give us a brief introduction to where we stand currently in the federal landscape of AI policy?
Emma Trivax:
Absolutely, Brett. The federal government has been actively involved in AI policy for many, many years. This includes several executive orders to ensure AI safety and security, securing voluntary commitments from leading AI companies, and establishing policies for AI acquisition and use, really making sure those commitments are followed through on. Additionally, significant federal funding has been allocated to support AI research and development, and frameworks have been and are being developed to manage AI-related risks.
Various acts and strategies have been proposed or implemented at this point to coordinate AI efforts across federal agencies and ensure compliance with constitutional and legal standards. Now, last December, the Bipartisan House AI Task Force issued its final report containing numerous findings and recommendations across various domains.
Brett Mason:
Thanks for that background, Emma. Again, what the agencies are doing can be administration dependent, but it's a very comprehensive approach we're seeing on the federal side. I'd love for you to tell us more about this Bipartisan House AI Task Force and its contributions.
Emma Trivax:
Yeah, so this task force was established back in February of 2024, and it spent the better part of last year researching and analyzing various AI-related issues. Their report, which, like I said, came out in December, highlights the US's leadership in responsible AI innovation while considering appropriate guardrails to safeguard against current and emerging threats from using AI. The report included 66 key findings and 85 recommendations across multiple domains, like data privacy, national security, and even civil rights, and across industry sectors, including healthcare, which, of course, is very pertinent to both of us, as well as other areas like financial services.
Brett Mason:
I do find it intriguing that almost every time we see legislative action around AI, or even just discussion rather than action, if you will, healthcare and the healthcare industry is almost always mentioned. I think that highlights the understanding of how widely technology is being used in healthcare and in how healthcare is being delivered to and received by patients across the United States. Of course, we anticipate AI is going to become a part of that. I think it's great that the Bipartisan House AI Task Force is treating healthcare as one of the key industries to keep an eye on. Do you agree?
Emma Trivax:
Absolutely, I agree. Even among our own clients, we're getting questions over and over again about how AI, and really any technology, it doesn't have to be AI, can be implemented and worked into our healthcare clients' day-to-day practices, so it's really emerging right now.
Brett Mason:
Now, did the report identify any key overarching topics that lawmakers should keep in mind?
Emma Trivax:
Yeah. The task force established seven principles to frame their policy analyses. These principles are: identifying AI issue novelty; promoting AI innovation; protecting against AI risks and harms; empowering the government with AI; affirming the use of a sectoral regulatory structure, meaning looking at regulation sector by sector; taking an incremental approach; and keeping humans at the center of AI policy.
Brett Mason:
Again, it's interesting that we see this theme arising in a lot of different areas. Whether we're talking about DOJ compliance or state legislative action, keeping humans at the center of AI policy is one of those things you hear a lot. Can you break down those various principles and give us a little more detail about what each of them means?
Emma Trivax:
Absolutely. I'll try to keep it brief, because I really could talk about these for a long time; there are a lot of interesting things that this task force identified. The first principle, like I said, is identifying AI issue novelty. What does that mean? First, we need to figure out whether the AI issues we're dealing with are completely new, or whether there are already laws that cover them. We need to ask ourselves: Is this issue truly new because of AI's unique capabilities? Is it an existing issue that's been significantly changed by AI? Or is it an existing issue that AI hasn't really changed? Based on those answers, the federal government would then tailor its regulatory approach accordingly. Do they need to implement a new law, or do they not?
The second principle is promoting AI innovation. The core of this principle is that the US has very much been a leader in AI development and deployment. The goal here is using all these other principles in tandem so that the US and other developers and deployers of AI can keep innovating.
The third is protecting against AI risks and harms. This is actually really important, and we see this question come up a lot: how are we protected from the harm that could come from AI? It really comes down to lawmakers, who have a huge job here. They need to tackle both accidental and malicious uses of AI, because both can happen; there's not always intent, but sometimes there is. What that really means is lawmakers need to consider combining technical solutions with legal and policy measures to mitigate risks. And let's not forget, AI can help solve some of the problems it creates. I can't remember who it was, but I saw an interview recently with a lawmaker who said they used ChatGPT to help propose some language to legislate AI. It was very fascinating.
Brett Mason:
Very meta.
Emma Trivax:
Exactly. The next principle is government leadership and responsible use. I just talked about lawmakers, but really, the government as a whole needs to lead by example by adopting responsible AI principles that foster public trust. If we see government agencies put AI policies in place, we'll see a lot of the private sector begin to implement them as well, because the government is always the first step in getting these policies widespread.
The fifth principle is supporting sector-specific policies. Regulators need to use their expertise in their different sectors to address AI issues within their domains. An interesting example is the recently proposed update to the HIPAA Security Rule, where AI isn't discussed a ton, but all of these newly proposed rules are going to apply very clearly to AI, and HHS, which put out the proposed rule, was in the best position to come up with these rules and how they could apply to AI. That's what we mean when we say sector-specific policies: those regulators are the ones in the best position to regulate their own industry.
The next principle is taking an incremental approach. We get really excited that AI is happening, it's coming, but AI policy needs to evolve as the technology evolves. Sometimes we need to start with smaller, adaptive laws and then adjust them over time, rather than coming out swinging with big, sweeping legislation that will potentially need to be revised over and over again. The incremental approach is efficient and saves time, and taking those small steps really paves the way for comprehensive legislation in the future.
Brett Mason:
Emma, that really reminds me of a concept we talk a lot about in litigation: we always say the law lags behind the science, right? I think especially here, where the science of AI has changed so much even in the past two years, attempts to legislate it really do need to be focused, rather than broad, sweeping rules that will not make sense in another year when the technology advances.
Emma Trivax:
Yeah, exactly. Again, I keep coming back to privacy and security, but we're still figuring out how to get those laws out on a state-by-state level. States are coming out regularly with new privacy and security laws, and the concept of electronic health information has been in existence for a long time. Now, to think of doing that with AI, it doesn't quite make sense yet. You're exactly right. We really need to go step by step here so as not to push the boundaries too far.
The last principle, as you mentioned earlier and as I previewed, is keeping humans at the center of AI policy. It's always very interesting when I say humans need to do something, but it's really true. We need to focus on the human impact of AI and make sure that people's freedoms and civil liberties are protected. For example, we've seen lots of instances of bias coming from AI, and other issues that really impact the people using it. We need to keep that front and center as these new laws come out. We don't often think about AI being biased, or about AI creating hallucinations that could harm people, but that needs to be front and center here.
Brett Mason:
Well, I'll be interested to see these principles in action as the federal government looks to legislate around artificial intelligence. That brings me to our next topic. Let's talk about the proposed legislation in the US House of Representatives regarding AI and prescription drugs. Emma, can you tell us about that proposed legislation?
Emma Trivax:
Yeah. It's really interesting. In January of this year, 2025, the Health Technology Act of 2025 was introduced in the House. The proposed legislation itself is very short, but it would have a sweeping impact on healthcare as we know it today. The legislation would amend the Federal Food, Drug, and Cosmetic Act to classify AI and machine learning technology as a “practitioner licensed by law” to administer such drugs, as long as the AI or machine learning technology is, one, authorized by the state to prescribe the drug involved and, two, approved, cleared, or authorized by the FDA. This would significantly change healthcare practices by removing licensed individuals from the prescription process.
Brett Mason:
Again, as a litigator who is on the back end dealing with prescription pharmaceuticals and medical devices, having the doctor, the human, as the prescriber and administrator is always a huge part of that litigation. It's very interesting to me that there's a proposal to somehow remove that part of the equation. It sounds like a major shift away from keeping humans at the center. If this were to pass, would current state laws actually allow AI or machine learning to issue prescriptions to patients?
Emma Trivax:
Well, most states right now only permit licensed individuals to practice medicine or nursing and to have that prescriptive authority. Right now, as far as I know, no state allows AI or machine learning to prescribe drugs or devices. State laws would have to be revised to expand prescriptive authority to include AI or machine learning. At this point, the federal law wouldn't have much teeth, but who knows what could happen.
Brett Mason:
If states were to expand that prescriptive authority to AI or machine learning, are there broader legal and regulatory impacts at the state level?
Emma Trivax:
Absolutely. There are a number of regulatory and practical issues to consider. One of the most interesting issues to me personally is the impact it would have on state corporate practice of medicine laws, or CPOM laws. These laws typically restrict non-licensed entities from practicing medicine. If AI or machine learning became a licensed practitioner, it's really unclear how this would interact with those CPOM laws. For instance, would only licensed individuals be able to offer the prescriptive AI or machine learning, or could it be offered by any company? We would also have to think about how the use of prescriptive AI or machine learning interacts with state malpractice laws in the event of adverse outcomes. From a practical perspective, we'd have to think about reimbursement issues. Today, when a provider issues a prescription, they're usually reimbursed for the professional service that led to the prescription, like an office visit. But if AI or machine learning is issuing the prescription, we'd have to think about whether and how third-party payers would reimburse the entity offering the AI or machine learning for issuing the prescription.
Brett Mason:
Definitely a lot of implications if this were to be allowed. You're talking about malpractice, but as a products liability litigator, I'm also thinking that we're really blurring the line between malpractice and products liability. I mean, the AI system and software would be considered a product. Issues, or “malpractice,” or negligence with prescription decisions would now be more of a products liability issue rather than a malpractice issue, which to me is scary and complicated. But this is the world we live in, where the technology may be able to help us make safer and better decisions, and we're going to have to deal with how that plays out in the law as a consequence. Do you agree?
Emma Trivax:
Yeah. I will say, as a non-litigator on the front end, we're going to try really, really hard to understand these laws and figure out those separations so that you as a litigator are not sent to court to untangle some crazy web. Really, who knows what's going to happen at this point? It's getting incredibly complicated, which, to go back really quickly, is why those principles and that incremental implementation of laws are so important, because this law could be so sweeping and has a lot of considerations that I don't think have been fully fleshed out.
Brett Mason:
Yeah. Clearly, if that were to pass and have its intended effect, it would lead to significant changes. Let's talk about the states. The states are also continuing to take legislative action around artificial intelligence, even related to healthcare, in the absence of overarching federal legislation. One of the more recent efforts by a state to legislate around artificial intelligence was the Virginia High-Risk Artificial Intelligence Developer and Deployer Act. Emma, can you tell us about that act and its implications for healthcare?
Emma Trivax:
Yeah. Thank you for saying that; it's a mouthful of an act title, so I'll just call it the act. For some background, this act was passed by the Virginia General Assembly on February 20, 2025, but it was actually just vetoed by the governor on March 24th. Before it was vetoed, the bill aimed to regulate the development, deployment, and use of high-risk AI systems in Virginia. There were a lot of key provisions. For instance, developers of high-risk AI systems were required to protect consumers from algorithmic discrimination and provide detailed documentation about the AI system's uses and limitations. That was for the developers.
Now, the deployers would have needed to implement risk management policies, conduct impact assessments, and disclose AI usage to consumers. There were exemptions in the bill for compliance with existing laws and regulations, law enforcement cooperation, and actions necessary for public health and safety. There's always an exception to a rule, but these were pretty narrow; the act would have been very widely applicable. The attorney general would have enforced it with civil penalties, and the act was going to take effect on July 1, 2026.
Brett Mason:
Let's pause there for a second. The Virginia bill sounds eerily similar to the Colorado AI Act that passed last year, which you and I have discussed before on this very podcast. As predicted by many, it seems like other states are taking steps to follow Colorado, even using the developer, deployer, and high-risk language that we see in the Colorado AI Act. As you mentioned at the beginning, just in the past few days, the Virginia governor chose to veto the bill that had passed, right? What do we know about the veto? What was the justification behind it?
Emma Trivax:
Yeah. It was really interesting. This act actually passed very quickly; it seemed to move easily through the state's legislative process, so we were all a little shocked when we saw that the governor vetoed the bill. The governor actually did come out with a statement as to why he decided to veto it. There were several points, but ultimately, it came down to a balancing test weighing economic growth and innovation against regulation, which is a huge theme in AI regulation these days. More specifically, the governor was really concerned with ensuring that Virginia remained a hub for businesses like AI startups.
Apparently, the Virginia administration had launched over 10,000 new startups as of August 2024, many of them tech and AI jobs, which has attracted significant economic growth, so he was concerned that this framework would be far too burdensome on those businesses, particularly the smaller firms and startups. The governor also pointed out that there were many laws already in place that protect consumers and regulate company practices regarding discrimination, privacy, data use, and so on. He felt that the bill wouldn't account for the fast-evolving nature of the AI industry and would place an undue burden on companies, which actually goes back to what the task force was saying about checking whether existing laws already cover AI issues and ensuring that laws keep up with the pace of AI. It's very interesting to watch this play out in real time.
Really, the last point the governor made with his veto was that he believes the government's role should be to enable and empower innovators, rather than stifle progress with regulations. Again, he thought this bill would hamper job creation, business investment, and the ability to continue innovating technology in Virginia.
Brett Mason:
I think an important point you mentioned there is the fact that there are other laws already in place that can be used to govern AI use and deployment by companies in any industry. I think that's something we're already seeing. We've already seen state attorneys general taking action against companies or AI software, whether over marketing around AI capabilities or the actual functionality of the AI; there are current laws in place, and we've seen attorneys general not hesitate to move forward on enforcement for AI issues.
At the end of the day, I think that's the goal of the law, right? To be broad and flexible enough that it can be applied despite an ever-changing world. Given the governor's veto, how do you think that plays into the current state of attempts to continue legislating AI?
Emma Trivax:
Yeah, Brett, it's a really dynamic landscape. While the Virginia act was vetoed, other states, just like Colorado, are still pursuing their own AI regulations. Even though the Colorado act, which was almost identical to the Virginia act, was passed while the Virginia one was vetoed, it really comes down to what each state sees as necessary to achieve its own goals. What's important to the legislators and the governor in Colorado might be different for their counterparts in Virginia.
Brett Mason:
That makes a lot of sense. Again, I think we keep seeing these themes as we do more of these podcast episodes, but what are the common themes or challenges that you're seeing in these legislative attempts?
Emma Trivax:
Yeah, and you listeners are going to get very sick of hearing me say this, but it's the balance between innovation and regulation. Legislators really want to protect consumers and ensure ethical AI use, again keeping humans at the center, without stifling technological advancement and economic growth. AI is evolving so quickly, which makes it difficult to create regulations that remain relevant and effective over time, and that will just necessitate even more regulation, more legislation.
Again, we saw the balance skew one way in Colorado and another in Virginia, but we really need to look at each and every state and try to understand them, particularly when you've got a company operating across states, and harmonize those state laws, not only within your own company but also against any future federal regulations. To take a step back, what we're looking at here is a patchwork of state laws with potential federal regulations layered on top.
Brett Mason:
Unsurprising that different states have different feelings on regulations around businesses.
Emma Trivax:
I know.
Brett Mason:
Well, Emma, thanks so much for coming on today and sharing your thoughts about current legislative action around artificial intelligence. I really appreciate you being here.
Emma Trivax:
Of course. I'm happy to be here, and it's always a pleasure talking about this stuff with you.
Brett Mason:
Thanks so much to our listeners. Please don't hesitate to reach out to me at brett.mason@troutman.com with questions, comments, and topic suggestions. We always want to be providing you, our listeners, with information on topics that you're interested in and that you're concerned about for your own business. You can also subscribe and listen to other Troutman Pepper Locke podcasts wherever you listen to podcasts, including Apple, Google, and Spotify. Thanks so much.
Copyright, Troutman Pepper Locke LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper Locke. If you have any questions, please contact us at troutman.com.