In this episode of The Good Bot, Brett Mason and Tom Kinney explore how artificial intelligence (AI) is reshaping the insurance and reinsurance industries. They discuss the rapid adoption of AI, the critical role of data management, and the unique risks insurers face as both enterprises and risk carriers.
THE GOOD BOT s02e06: Insurance AI
Recorded 6/03/25
Brett Mason: Welcome to The Good Bot, a podcast focusing on the intersection of artificial intelligence, healthcare, and the law. I'm Brett Mason, your host. As a trial lawyer at Troutman Pepper Locke, my primary focus is on litigating and trying cases for life sciences and healthcare companies. However, as a self-proclaimed tech enthusiast, I'm also deeply fascinated by the role of technology in advancing the healthcare industry.
Our mission with this podcast is to equip you with a comprehensive understanding of artificial intelligence technology, its current and potential future applications in healthcare, and the legal implications of integrating this technology into the healthcare sector. I'm excited to have on the podcast today, Tom Kinney, who is one of my fellow partners here at Troutman Pepper Locke in our Insurance and Reinsurance group. Tom, thanks for joining us, and why don't you introduce yourself to the listeners?
Tom Kinney: Brett, thanks so much for having me. As you said, I'm Tom Kinney. I'm a partner in Troutman’s Washington DC office in the Insurance and Reinsurance practice group, also, in the international commercial arbitration space. My practice focuses almost exclusively on arbitrating complex commercial disputes, primarily between insurance companies and reinsurance companies. In the last three or four years or so, I've also started counseling our insurance and reinsurance company clients on their use of AI and how they should be addressing this from a number of different perspectives.
Brett Mason: That's why we want to have you on today to talk about that. Let's take a step back before we do that and start with the basics. For our listeners who are not as familiar with how AI may be an important tool in the insurance and reinsurance industry, can you talk about the role and importance of data management in this industry historically?
Tom Kinney: Absolutely, Brett. Data is foundational, really, to the reinsurance and insurance industries as a whole. If you think about it, it makes sense. Insurance companies are taking on risk. In order to do that, they need to make what they think is the best educated guess about the likely chance that there's going to be a claim on whatever cover they're writing. Historically, that's involved collecting a tremendous amount of data through the underwriting process. Now, reinsurers in turn are underwriting the insurance companies that they are assuming risk from. These reinsurers, by definition, have access to the data gathered by all of these data-aggregating insurance companies.
Really, historically, reinsurers can be considered some of the first data brokers in the global economy. They would aggregate data across various markets to get a better sense of how risk patterns were emerging across various industries, including the casualty risk industries and the life and health industry. The insurance and reinsurance companies were at the forefront of pushing data collection forward, really as part of their efforts to improve their underwriting and their assessment of risk. As a result, our insurance and reinsurance company clients have had a lot of experience gathering, aggregating, and using data as part of their day-to-day business. It makes perfect sense why they would be quick adopters of artificial intelligence technology.
Brett Mason: Keeping that in mind, have you seen that your clients and the industry in general have been quick to adopt artificial intelligence technology in their business units?
Tom Kinney: Absolutely. I think the insurance and reinsurance industry was, if not the earliest industry to adopt the technology, one of the first. It's fascinating to learn about the way in which they went about doing this, because it largely started as a bespoke, ad hoc process. These companies, which have a vested interest in beating the market and a tremendous amount of data, would have their in-house data scientists start to develop their own bespoke models, trying to find a way to improve their algorithms in-house. That's shifted over the last few years to tapping the best available models on the marketplace and plugging them into user interfaces that they've designed for their employees, for a more streamlined approach. We're seeing it deployed in every aspect of the insurance and reinsurance business by, at this point, if not a majority, certainly a plurality of companies.
There's been a little bit of a discussion internally about what is and isn't AI in terms of the use of artificial intelligence in decision-making processes across business units. The data points that I'm seeing coming from the NAIC, the National Association of Insurance Commissioners, indicate that upwards of 88% of domestic insurance companies were using, or had immediate plans to use, AI in one or more business units across their operations as of 2022.
Brett Mason: With that increased use and adoption of artificial intelligence technology, are you seeing risks, or things that companies need to be thinking about, when they're moving to implement it?
Tom Kinney: Sure. The adage in Silicon Valley is “Move fast and break stuff.” That sounds great when you're trying to break into a new industry, but it's a terrible motto to have when you're in the business of risk. Unfortunately, we have seen that ethos infiltrate a little bit into the insurance and reinsurance space, particularly as AI has become the buzzword to get approval to fund any novel project, right? The issues that I've been counseling my insurance and reinsurance company clients on have broken down largely into two buckets. The first is their exposure to what we would call an enterprise risk, a corporate risk, a risk to the insurance company. Then second, because these clients are in the business of risk, we've also been counseling them on how they should go about addressing AI as it relates to the coverages that they're writing.
Brett Mason: Let's talk about that first bucket you mentioned, the enterprise risk, or the corporate risk internally. What does that look like and what are some of the things you're counseling your clients on?
Tom Kinney: Insurance companies and reinsurance companies are companies. They're going to have the same sorts of enterprise risks that any company would have, right? You need to make sure that your employees are trained up on the technology that they're using. You need to make sure that any confidences or privileges are maintained appropriately within the company. All of these are AI risk points we're seeing. I'm sure that you've talked about this on the podcast in the past, Brett. Where data is shared internally within the company, there's a risk of disclosure. There's a risk of violation. As a big picture, those are things we're counseling clients on, but they're not unique to the insurance or reinsurance context. Some of the more unique issues we're dealing with arise because insurance companies are data brokers and they've maintained these files. We're seeing circumstances where companies want to aggregate their data in a way that would potentially breach the privilege applicable to, for example, the claims files, right?
A casualty insurance company often has a duty to provide a defense to its insured. It's written into the contract. As a result of that, they typically get access to the entire litigation file for all lawsuits brought against every company or individual that they insure. Insurance companies might have an incentive to aggregate that information, load it up into a centralized model, and try to extrapolate their exposures or identify emerging trends in the risk. There's a strong incentive to do that because of the finances involved.
They want to figure out where to put their reserves. They want to figure out how they need to tighten up the contract wording. Counterbalancing that is the risk of waiver of privilege, right? In the United States, in most jurisdictions, companies can waive privilege over information if they store it internally in a way that is not consistent with the idea that it's confidential, or closely held, information. By making it available widely within the company, or to, let's say, a board of directors or a different business unit that doesn't have a direct need to know the privileged information relevant to that specific file, there's a strong possibility that that could be found to be a waiver of privilege. These are the kinds of questions that our insurance and reinsurance company clients weren't historically asking before they started taking steps, and we've had to counsel them through the process of backtracking and putting in place enterprise risk or corporate governance policies about how they're going to go through the process of aggregating their own data, so that they don't potentially violate a court order or eviscerate some internally held privilege.
Brett Mason: That sounds very similar to a theme that I've seen through a lot of the episodes. There's a lot of push on what can we do with this data, and less asking the question of what should we do with this data? That question is important, especially in a heavily regulated industry like insurance, and especially when you are using data that is derived from healthcare information, which again, is regulated in and of itself. That's more the enterprise, or internal risk. You mentioned there was a second bucket, which is the insured risk perspective. Did I hear that right?
Tom Kinney: Absolutely, Brett. My clients are in the business of risk, right? They write coverage for various risks, whether that be life, health, or critical illness risks, or your traditional casualty risks, professional liability, commercial general liability, or even property risks, right? Getting a handle on the parameters for the risk that they're agreeing to cover is essential to the business operations. Making sure that you have as close to an exact idea of what you've agreed to assume is essential to figuring out how much you want to charge your clients to assume that risk.
Brett Mason: I know one of the things we talked about when we were preparing for our conversation today was trying to think ahead as to what are the risks that could potentially come from AI. One of the things you mentioned was the application of legacy language and how that could come into play here. Can you talk about that and examples that you've seen?
Tom Kinney: Sure. That's a great question, Brett. This is, again, the question I get from my clients: “Tell me exactly what my AI exposure is.” If I could accurately predict that, I'd be working for an insurance or reinsurance company. I'd probably be on a beach somewhere.
Brett Mason: We might need to use an AI.
Tom Kinney: Right. It's a real problem. We saw an example of this relatively recently in the industry with the advent of cyber risk, right? This issue of so-called silent cyber cover, where companies thought, I'm not writing separate cyber cover, or if I am, it's not applicable to these legacy CGL policies that I was writing. I don't have to think about it. The reality was that didn't end up being the case, right? Policyholders ended up having cyber exposures, whether cyber-attacks or other related breaches, and submitting claims under legacy commercial general liability language, saying, this fits. Maybe it wasn't intended to fit under language written in the 1970s, but arguably, it fits. The courts largely agreed. So, we ended up seeing companies hit with massive exposures that they never contemplated when they priced the business. That can be an existential threat to the company as a going concern, right?
Fundamentally, you need to make sure that you have enough premium collected to pay out your exposure, and to do that, you need to know what you're covering and what the exposure actually is. What insurance and reinsurance companies are really concerned about right now is avoiding a similar silent AI exposure, right? To do that, I've been trying to counsel them on the correct way to think about AI exposures and what the potential liabilities could be, because we really don't know, Brett.
The theories of liability have yet to declare themselves. We're seeing an emerging trend in the marketplace. I've been trying to counsel clients on maybe not the correct way to think about it, but the correct way to start thinking about thinking about it, if that makes sense.
Brett Mason: It does. Before we get to that, because I want to hear, what's the correct way to start thinking about it, what are some of those emerging buckets of litigation around AI use and AI liability that we're already seeing?
Tom Kinney: Sure. Again, these are my buckets, if you will. It's not the only way to think about it, but it is the way that I've found useful. I really start the process by looking at where the liabilities start. The first would be what I call data-centric liability: liability that stems from the data used to run the AI model. When you think about AI, you need three things: computing power, an algorithm, and then a tremendous amount of data to both train the model and then run it. You need the data that trains the algorithm and the data that you input in order to generate your output.
We're seeing companies increasingly run afoul of existing regulations by using data in a way that is inconsistent with general privacy statutes or more specific statutes. The exposure there comes as a result of the company's use of AI, but it's not anything unique to AI; it would have been a problem in any other instance. A great example of this in the insurance and reinsurance context would be the Lemonade class action from a few years ago, where Lemonade, a forward-thinking, tech-savvy insurance company, got in trouble because their AI-facilitated claims-handling process ran afoul of Illinois's Biometric Information Privacy Act. They were scraping biometric markers from user-submitted claims videos and storing them to aid in a fraud detection algorithm that they had.
The problem with that is they hadn't informed the policyholders that they were going to do that. They hadn't informed the Department of Insurance, or the Illinois regulators, that they were going to do that. They didn't have an established plan for safely storing or destroying the data when it was no longer being used. The problem for Lemonade was that each established violation came with, I believe, a $5,000 civil penalty. When you pair that with a proposed class action, those numbers can skyrocket.
Ultimately, that lawsuit settled. I think Lemonade had to pay $4 million. That is a good example that I like to show to my insurance and reinsurance clients of where it's not just the algorithm you need to be concerned with. It's also the data. If you're going to use AI, you need to make sure that you're complying with every existing data privacy statute and regulation, which in the US means 50 separate states that regulate insurance at the state level, plus national data privacy efforts and non-insurance-specific state privacy regulations. It's a complicated web that you're going to have to try to comply with if you're going to rely on AI.
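[Editor's note: To make concrete why per-violation statutory penalties "skyrocket" in a class setting, here is a rough back-of-the-envelope sketch. The class size and per-member violation count below are hypothetical; the $5,000 figure is the per-violation penalty as Tom cites it.]

```latex
\text{Exposure} \approx N_{\text{class}} \times v \times p,
\qquad \text{e.g., } 10{,}000 \times 1 \times \$5{,}000 = \$50{,}000{,}000
```

Here \(N_{\text{class}}\) is the number of class members, \(v\) the established violations per member, and \(p\) the statutory penalty per violation; even a modest class can produce exposure orders of magnitude above the eventual $4 million settlement.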
Brett Mason: That's the data-centric liability bucket. I think there were a couple other ones that you mentioned. Model training liability, disparate impact liability, and model output liability. Could you talk about each one of those and what trends you're seeing in the area around those areas of liability?
Tom Kinney: Sure. For model training liability, an example would be the OpenAI and NVIDIA litigation, where copyrighted works that were available on the Internet were used to train AI models. Now, the copyright holders have filed suit alleging violations of their copyrighted works. There's an open question as to whether or not the training of AI models is fair use of publicly available copyrighted works. I'm not quite sure how that litigation is going to pan out, but even if OpenAI prevails, the fact that they've had to engage in years-long, expensive litigation is something that insurance and reinsurance clients would be concerned about, because as we noted, oftentimes there's a duty to defend. I'm sure OpenAI is not footing that bill all by themselves. There's a liability carrier who's picking up part of the tab.
Brett Mason: Absolutely.
Tom Kinney: It's also something to be thinking about if you're not as big a company as OpenAI. The way in which you go about training your models can expose you to separate theories of liability, apart from the data privacy, or data-centric, liability we talked about with the Lemonade class action. Then there are the disparate impact theories of liability, where the output of the model may have a disparate impact on protected classes. Two of the biggest flaws with AI that have been talked about, aside from hallucinations, are the lack of transparency into the decision-making process and the high risk of algorithmic bias in AI models. On lack of transparency: it's very challenging to query the decision-maker if the decision-maker is a model. You can't pick up the phone and ask the model why it reached the result that it did like you could with a colleague.
The algorithmic bias component comes in when you realize that a lot of the underlying training data reflects latent biases from past eras, or past practices that maybe we wish had been a little bit different. A prime example of this would be the Workday litigation that's going on, where Workday is being sued as an employment decision-maker for licensing a model to a company to assist with vetting their employment applications. The allegations are that the employment applications were judged against a database that did not correct for historically racist hiring practices.
As a result, the model was allegedly rejecting predominantly African American candidates and elevating predominantly white candidates. The litigation is still ongoing, but it's an example where even if the model works as anticipated, the result may have a disparate impact on a protected class and expose you to liability. So, you need to be prepared for that, you need to be testing for that, and you need to try to take steps to avoid that. Even in doing so, that still may not be enough to prevent lawsuits from being filed in the future.
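[Editor's note: To make "testing for that" concrete, here is a minimal illustrative sketch, not from the episode, of one common first screen for disparate impact: the EEOC "four-fifths" rule, which flags any group whose selection rate falls below 80% of the most-favored group's rate. The group names and counts are hypothetical.]

```python
# Illustrative disparate-impact screen using the EEOC four-fifths rule.
# outcomes maps group -> (number selected, total applicants); all hypothetical.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compute each group's selection rate (selected / total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
    """Return each group's adverse impact ratio and whether it clears the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    # Adverse impact ratio: a group's rate relative to the most-favored group's rate.
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

# Hypothetical screening results from an AI resume-vetting model:
results = {"group_a": (120, 400), "group_b": (45, 300)}
for group, (ratio, passes) in four_fifths_check(results).items():
    print(f"{group}: impact ratio {ratio:.2f} {'OK' if passes else 'FLAG'}")
```

Run on the hypothetical numbers above, group_a selects at 30% and group_b at 15%, giving group_b an impact ratio of 0.50 and a flag. Clearing a screen like this does not immunize a model, as Tom notes, but it is the kind of routine testing the litigation makes advisable.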
Brett Mason: What advice are you giving to your insurance or reinsurance clients to try and prepare and foresee what they need to do around these types of liability involving artificial intelligence?
Tom Kinney: The first step is to make sure that you know your markets. Make sure you know the areas that you're writing in. Make sure that you know your contract language. Don't just rely on the fact that it's been good enough historically; go over it with fresh eyes. Stay abreast of emerging trends in the markets where you're writing, whether that be professional liability or general casualty. And the geographic nature of these risks is not to be discounted, right? Are you writing primarily in Louisiana as opposed to California, or Texas? Knowing the litigation trends in those areas is going to be useful, because that's going to have somewhat of an impact on the emerging risks as they come out.
I think it's also important to be intentional about updating your contract wording. Again, speak with outside counsel, speak with inside counsel. Solicit different perspectives. Pressure test your language: hey, we're seeing these AI-related lawsuits come out. They haven't hit our policies yet. What would our response be if we got a claim on this under these facts, under these circumstances? Are we covered? Are we not? If you think that there's a risk that you aren't covered, immediately tighten up the language, be upfront with your policyholders or your ceding companies, and communicate: hey, we are or are not willing to take on this risk. If we're taking on this risk, here's the appropriate premium we're going to charge you for that. This is how much we're valuing it. Because ultimately, we don't want a situation where policyholders think that they're getting one thing and the insurance companies think that they're getting another, right? We want to make sure that there's enough cover available for the people who've purchased it, and in order to do that, you need to charge an appropriate premium. That's the bottom line at the end of the day.
Brett Mason: I would think that communication is important, also, for the policyholders to understand and be thinking about how they may be using AI, right? It trickles down: if you are trying to encourage responsible use and trying to understand how your insureds are using AI, that communication and correspondence, and being transparent about how it's going to affect the policy, could really help on both levels.
Tom Kinney: It's a great example, Brett. I mean, you and I both know, we get regular communications from our professional liability insurer: “Hey, make sure if you're using AI, you're not falling into this trap.” That proactive messaging is something that we historically have counseled our clients to do. Be proactive, whether it's, hey, hurricane season's coming, make sure that your roof repairs are locked down, or, hey, there's been a rash of cyber-attacks, make sure you're doing phishing training. That same approach should extend to AI. Maybe you didn't even write that cover for them, but it doesn't cost you much in the long run to send an email with best practices, or to offer to provide them with access to training, to mitigate potential future catastrophic losses.
Brett Mason: Absolutely. Again, a theme that I've seen in doing this podcast is that tension, that push and pull, between those in the tech space who want to keep advancing the technology and those of us in the legal space who are trying to keep it from running afoul of the myriad regulations and laws and contracts. I think those proactive communications can help bring those two sides of the coin together in a way that lets us use AI technology, leverage it, and use the efficiencies, but also not get ourselves into trouble.
Tom Kinney: 100%.
Brett Mason: Well, Tom, thanks so much for being on. I appreciate your insights on the things you're counseling your clients and the insurance and reinsurance industry around artificial intelligence. Thanks so much to our listeners. Please don't hesitate to reach out to me at brett.mason@troutman.com, with any questions, comments, or topic suggestions. You can also subscribe and listen to other Troutman Pepper Locke podcasts wherever you listen to podcasts, including Apple, Google, and Spotify. Until next time, thanks, everyone.
Copyright, Troutman Pepper Locke LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper Locke. If you have any questions, please contact us at troutman.com.