Brett Mason, Tracey Diamond, and Emily Schifter discuss the transformative potential and inherent risks of AI in the workplace.
Join Troutman Pepper Locke Partner Brett Mason for a podcast series analyzing the intersection of artificial intelligence (AI), health care, and the law.
In this installment of the Good Bot, Brett Mason is joined by Partners Tracey Diamond and Emily Schifter to discuss AI in the workplace. They delve into what employers need to know about AI, exploring its transformative potential and inherent risks along with some lessons from the movie I, Robot: how employers (and their employees) are using AI, and the potential legal risks associated with it, including discrimination, attorney-client privilege, and data privacy considerations. Tune in to learn about the latest developments in AI, the importance of understanding AI systems, and how to mitigate risks associated with their use in employment settings.
The Good Bot: Artificial Intelligence, Health Care, and the Law —
AI in Employment: Navigating the Legal Landscape with Lessons from I, Robot
Hosts: Brett Mason, Tracey Diamond, Emily Schifter
Recorded: March 4, 2025
Aired: April 22, 2025
Brett Mason:
Welcome to The Good Bot, a podcast focusing on the intersection of artificial intelligence, healthcare, and the law. I'm Brett Mason, your host. I'm excited for today's episode, which I recorded with two of my partners, Tracey Diamond and Emily Schifter, on their Hiring to Firing podcast. In this episode, we talk about all of the employment issues and considerations that employers should be thinking about when their employees are using artificial intelligence systems in the workplace.
I hope you enjoy this podcast episode and enjoy our discussion about AI in the employment space.
Tracey Diamond:
Welcome to Hiring to Firing, the podcast. I'm Tracey Diamond and I'm here with my partner and co-host, Emily Schifter. Together, we tackle all employment issues from Hiring to Firing.
Emily Schifter:
Today, our guest is our partner, Brett Mason. Brett is a trial attorney and litigator who works here with me in the Atlanta office, focusing her practice on defending clients facing complex tort litigation. She's also the host of the very popular podcast, The Good Bot, which analyzes the intersection of artificial intelligence or AI, health care, and the law. Welcome, Brett. We're so excited to have you here.
Brett Mason:
Thanks so much for having me, you guys.
Emily Schifter:
Yeah. So why don't you tell us a little bit about your practice and your podcast? What made you focus on AI, and health care, and the intersection of the two?
Brett Mason:
Thanks, Emily. I'm happy to do that. So, I mainly litigate pharmaceutical and medical device products cases, which are pretty complex cases involving prescription, FDA-approved and regulated products. And in that context, we overlap a lot with the different technology that's being used by companies, whether those are medical devices that are becoming more and more complex, or pharmaceuticals that are engaging certain types of technology to help with the research and development and deployment of new pharmaceuticals, or, after they've been marketed, things like adverse event research.
We're seeing a lot of technology being used in the products work that we're doing, and more so where the healthcare providers are actually having to turn to the medical device manufacturers and ask for technical help, sometimes even in the operating room. I've always been interested in technology. And of course, in 2023, when generative AI really came to the forefront of public knowledge, I started digging into it more. We're seeing it touch every industry, every part of healthcare, whether you're talking about research and development, deploying AI within actual medical devices, or healthcare providers' use of AI in the various systems they rely on.
Insurance companies and the Department of Justice are interested in what companies are doing with AI and compliance. I mean, it's really just everything we can think of. And because of that, I thought it would be interesting to start a podcast that could focus on all the ways that AI is touching not just the healthcare industry, but other industries through the various areas of law. It's been a really fun exploration, and I've learned a ton.
Emily Schifter:
It's so fascinating.
Tracey Diamond:
It really is fascinating. And we're seeing the same thing in employment law; it crosses every industry that has employees, which is pretty much every industry. And we're seeing it pop up a lot in the employment arena. But it does feel like everyone's talking about AI these days. And sometimes I wonder if everybody really realizes what AI is. I think it's helpful to start with some definitions. Why don't we start with some AI basics?
Brett Mason:
Absolutely. It's funny because my brother-in-law got his master's in artificial intelligence many, many years ago.
Tracey Diamond:
Really?
Emily Schifter:
Oh, wow.
Brett Mason:
He finds it fascinating that we're all talking now about deep learning, and machine learning, and hallucinations. This is something he's known about and used in creating software for many, many years. It really came to the forefront when we're getting consumer use of generative AI. But AI has been around for a long time. It's in a lot of the systems that we use that are used across industries.
But I agree with you, there's so much discussion around it now without clear definitions about what we mean when we're talking about AI. What is AI? There are a lot of definitions out there from a lot of different organizations, so this is certainly a general summary of those definitions: AI is generally defined as technology that simulates human intelligence to perform tasks.
Again, that type of AI, where it takes data in and acts out a task, has existed for quite a long time in a lot of different systems that we use every day without even realizing that there's AI in the background. That versus generative AI, what's the difference there? Generative AI is a subset or a different branch, if you will, of artificial intelligence that can create its own content. Brand-new content, pulling from the data that it has, whether that data is words, pictures, video, or sound. It can be given a prompt, and it can create a written summary of something that's been requested, a new video, a new picture, a logo. That is what generative AI is, and that's what's really hit the public awareness with the use of things like ChatGPT. Although they use a lot of the same background technology, their outputs are different.
What we're seeing now, because AI has come more to the forefront of public knowledge, is an even greater movement to move AI systems into devices. A lot of times we think about our smartphones that can answer search engine questions for us. That's great. That's useful to us. But AI can show up in more and more devices these days, from wearables on up. When we're talking about wearables, we're talking about your Apple Watch, or your Fitbit, or even FDA-approved implanted devices. Those can be using some sort of AI component to receive data from patients and provide feedback to healthcare providers.
Tracey Diamond:
Wow, that is so scary.
Brett Mason:
You know, it is scary. Again, wearable devices that can remotely transmit data have been around for, I would say, over a decade. And all of the cybersecurity and privacy concerns were there whether or not AI was involved. But does that help our healthcare providers? If you've got a pacemaker, for example, that's keeping track of how many times it needs to actually update and interact with the heart, that data can go right back to the cardiologist, who can then review it and say, "We need to make a change for this patient."
Scary that AI would now be involved, but there are also a lot of benefits, because that AI can review that data, pull it together, and find new solutions to things that we haven't been able to find. And at the end of the day, it's not just consumers. Employers, businesses, everybody probably is already using some type of AI system and is looking to do even more, because it can increase efficiency and lower costs. There are all sorts of benefits.
And I think you guys are probably aware of this, but even OSHA made headlines last year when it noted that its inspectors might start wearing smart glasses on inspections. If those are connected and transmitting data, then that data can be compiled and an algorithm applied to it. We now have AI providing solutions or recommendations to OSHA inspectors. We do see it in so many different ways.
Tracey Diamond:
Yeah, I'm not sure if that's good for employers or bad for employers, that example.
Brett Mason:
I think with everything AI, you'll find that it's both. There's a benefit and there's a risk. And that is the question always.
Emily Schifter:
I think that's very true. As we always do, we're gonna tie our conversation today to a movie and we thought what better pick than the 2004 Will Smith movie I, Robot. So, the movie is set in what was at the time the far-off future of 2035. And it focuses on the powerful fictional corporation, US Robotics Corporation, that has developed robots powered by an AI called V.I.K.I., Virtual Interactive Kinetic Intelligence. These robots are advanced enough to work in many public service positions. Some even work as household assistants. And they have programming in them that helps them govern their conduct called the three laws of robotics, which are supposed to help them protect human life.
Will Smith's character, Detective Del Spooner, is a robot skeptic and gets involved when the co-founder of US Robotics, Dr. Alfred Lanning, falls to his death. In our first clip, Dr. Lanning foreshadows the robot's potential for intelligence beyond their initial programming, kind of their artificial intelligence. Let's take a listen.
[BEGIN CLIP]
Dr. Alfred Lanning:
There have always been ghosts in the machine. Random segments of code, that have grouped together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behavior? Random segments of code? Or is it something more?
[END CLIP]
Tracey Diamond:
I recently saw a Broadway show called Maybe Happy Ending. Have you guys heard about the show?
Brett Mason:
No, I haven't.
Tracey Diamond:
It's with Darren Criss, of Glee fame.
Brett Mason:
I remember him very well.
Tracey Diamond:
Yeah, just a lovely, sweet musical about two robots kind of falling in love at the end of their robot usefulness, or lives, as we could say. He plays an older robot, more sort of stunted and jerky in his movements. And then there's the younger female love interest robot, who's a little bit more of a modern version, but they've both been replaced by newer models and are living out their sort of retirement time in this robot land. It's a fantastic show, but it made me think of it as the sweeter version of I, Robot, which is anything but sweet.
Brett Mason:
Tracey, I've seen that show as well. It actually premiered in Atlanta several years ago and it is adorable.
Tracey Diamond:
Oh, really? I didn't know that.
Brett Mason:
I highly recommend it. It is very cute.
Tracey Diamond:
I highly recommend it. Yeah.
Emily Schifter:
Or maybe WALL-E for a slightly cuddlier robot story.
Tracey Diamond:
Well, we have yet to see a world completely manned by robots in quite the way any of these shows, including I, Robot, anticipated. But we're definitely seeing more and more uses for AI. Brett, what are some examples that you've seen of how clients and employers are using AI, particularly in the employment world?
Brett Mason:
Right. Again, AI can be taught and deployed in any system where you have data that's being compiled, then analyzed and used to make decisions. What are some of the ways that employers are doing that on a regular basis? They can use it for applicants and in the application process. When you submit a resume or you submit your data through an online form, that data can then be compiled, and analysis can be done and certain questions can be asked of the AI to make recommendations as to which candidate is appropriate or would be the best fit for the role. That's looking at resumes. That's optimizing the job postings.
There's even video technology that conducts the first-round interview with candidates. It can also be used for current employees, right? Again, think about if you're entering in your self-evaluation and you're rating yourself, and then all of your managers are rating you. Those are all data inputs that can be put into a system that uses artificial intelligence to make recommendations about promotions, leadership potential, bonus structures.
Again, at the end of the day, there are increased efficiencies in having an automated system do that type of analysis and review. Then there are the substantive tools for substantive work. Clients and employers are using tools to help with note taking during calls, initial drafts of communications, marketing. I'm sure you guys are familiar with the AI tool we have here at Troutman, which is Athena, our internal, protected, confidential artificial intelligence GPT. I use that for every single LinkedIn post that I do. I will say, "Here's our new podcast episode, here's an article that I've read, draft me a post," and it'll draft me something that not only has the great language that I like, which I can then tweak, but it has little emojis, it has the hashtags. It's very useful. So, there are all sorts of tools that employees may be using, whether or not they are employer-sanctioned and approved. But that's another place where we can see AI intersecting with the workplace.
Tracey Diamond:
I think your point about how AI has been around a lot longer than people sometimes realize is such a good one. I feel like a lot of our clients have been using resume screeners for years now, or looking at job postings, or have an HR chatbot or an email inbox where they can pop back template responses to easy questions about what's our policy on this, and never really thought about it. And then when generative AI started making more headlines, it was, "Well, where do we use AI? And what else are we using it for?" Kind of that scope creep. So, it's been interesting, I think, as people start to realize how much it's already been part of the day-to-day.
Brett Mason:
Absolutely. And the one thing that I like about highlighting that it has been part of the day-to-day is to just try to pull back on some of the fear factor. Again, there are risks, and we're going to talk about them, and we're going to talk about the different governmental entities that are focusing on those risks and deploying responsible and safe use of AI. But reminding everyone that it's been around, it's been a part of our systems, it's been used for many years I think can help kind of dial down a little bit of that fear factor, that unknown. It is known, especially by those, like my brother-in-law, who are in the industry and have been working on these systems for years.
Tracey Diamond:
You used the use of AI in going through large quantities of resumes as an example. And that's a really good example in our world of labor and employment law. When we hear about bias, which always seems to be a big risk of the use of AI in employment, we think about the risk of claims of unlawful discrimination. What are the potential discrimination risks associated with the use of AI in employment, let's say in hiring, for example?
Brett Mason:
The AI is only as good as the data that it is based on.
Tracey Diamond:
Garbage in, garbage out, I like to say.
Emily Schifter:
That's right.
Brett Mason:
That's exactly right. That's why the use of ChatGPT, which pulls from the entire internet, means that it is using the entire internet as its basis for responding to your questions. And as we know, the internet is full of tons of discriminatory content, lots of biases, lots of misinformation, or a lack of data. Sometimes it's not an intentional bias, but a lack of data on certain demographics, or on individuals who fall into certain types of groups that just don't have as much data available for various historical reasons.
Because of that, if we're relying on the artificial intelligence to give us an answer based on its analysis and evaluation of the data it has, it could really create a disparate impact or disparate treatment, which would lead to those types of claims, because it might fail to hire certain individuals. Based on the data it has, it might determine those individuals were not the right fit for the role it was hiring for. That's one example. And, Emily, I'm sure you can think of a couple of others in the employment context as well.
Emily Schifter:
For sure. I feel like sometimes, to your point, it's not at all intentional. The software is taught, "Hey, we have some great candidates or employees who are successful in this role currently. Find me candidates that look like that." And if all the candidates in the current role are one particular race, or one particular gender, or of a certain age, all of a sudden, you're screening out people outside of those classes. And maybe it ends up looking like you are choosing not to recruit somebody of a certain protected class, even if that's not true, or you're just missing out on a pool of candidates who could have been great fits for the role.
I've seen issues where video interviewing software can struggle with accents, or somebody has a disability, isn't able to communicate, and gets a low score. And the Americans with Disabilities Act would say, "You need to accommodate that person." That may be hard to catch if an employer isn't alert to those issues and isn't making sure that the software is working in the way that it wants it to. And I think it's an interesting point about disparate impact and disparate treatment. Tracey, have you seen more disparate impact or disparate treatment issues come out of AI? I feel like disparate impact is kind of the bigger risk, but what's your take?
Tracey Diamond:
I agree with you, yeah. And I see it sort of as an evolution of the unconscious bias concept, where you have a human being going through all those resumes and that human being unconsciously may be biased to focus more on resumes that look a little bit more like themselves and less on resumes that may not look like themselves. It's sort of the same idea, but it's a machine doing it instead of a human doing it.
Emily Schifter:
That's right. Kind of that amplification of that bias, not intentionally, but just because it's looking for something that looks the same as what it's already seen.
Tracey Diamond:
I've also seen some cases against electronic service providers that are providing job postings, where the service providers are using some algorithm that's not even showing the posting to, or making it visible to, let's say, older employees or employees of a certain race. There's a piece of this that's really very interesting: claims not against the employer itself, but against the companies they're using to post the job postings.
Another thing I've seen come up sometimes is there will be tools or new features rolled out onto platforms that employers are already using that take advantage of the data, like, "Hey, you're already using our HRIS system. Why don't we just do a survey of what you're paying people?" And then all of a sudden, clients get this information and maybe it reveals something that they would have rather not revealed because it can be tough to have a perfectly pristine pay scale sometimes. And so, all of a sudden, you have this data that's maybe not so helpful. Have you seen that come up, either of you, in your practices where the availability of a lot of data can create risk in its own right?
Brett Mason:
I think that's absolutely right. And that's something DOJ has been focusing on in the past year: if you're going to be using an AI system to ensure compliance, you better take those measurements and those results and make changes to your programs, right? Again, that's an area where AI could be a benefit. It could actually help companies internally identify where they may have gaps in their compliance with various laws. But be careful what you wish for, I guess, because to whom much is given, much is expected.
Emily Schifter:
Yeah, you can't unring the bell. It's sort of like doing an investigation, but then not following through when you find that there's some problem.
Brett Mason:
Absolutely.
Tracey Diamond:
So what can companies or employers do to mitigate or avoid some of these risks?
Brett Mason:
Well, I think having an understanding of what AI components are in the systems you already use is absolutely important. And I think we're going to talk a little bit later about the various laws and regulations we're seeing around the use of AI. But knowledge is first and foremost important. You can only identify the risks and seek to diminish them if you understand what AI and what AI systems you're already currently using. And once you have that understanding of what's being used by the company formally, you can then create policies and a task force around the safe and responsible use of those systems.
I think the other thing that every company needs to do is assume your employees are using AI tools, even if you as a company haven't formally adopted any of those tools or formally given permission for those tools. For example, there's a great note-taking app called Fireflies.ai that I love to use. I don't use it for any of our client work or anything that's privileged or confidential, because I don't know where the data that I put into it goes. But for non-client, non-confidential work, whether it's a conversation to plan for a podcast episode or planning for a presentation that I'm going to give at a conference, I love the app because it records, it has a transcript of the discussion, it'll summarize everything that each different person said, and then it'll leave you with action items. Emily says she's going to do this, Tracey's going to do this, Brett's going to do that. I love that app.
But if you don't know who's using it in your company and they're using it in unsafe ways, for example, if I decided to go rogue and use it in a conversation with an expert witness, I don't know where that data is going. I don't know if that's confidential, if that's protected. Also, if that creates a summary of my conversation with an expert witness, is that something I now have to produce to the other side under the discovery rules? That's an example of use of AI by an employee. And if they're not doing it safely, or they're not doing it in a way that protects the things the company needs to protect, those are the types of things that an AI task force can be looking into, to help create policies around them and ensure your employees are not somehow setting your company up for issues.
Tracey Diamond:
I'm really glad you brought that up, because this just came up in a discussion I had with a client the other day. This concept of using AI for note-taking is a really great use of AI. And this client in particular was really excited about the fact that it was creating such efficiencies and doing such a good job of summarizing meetings. And it really started me thinking. I actually spoke with our own internal AI task force team for advice on the privilege issues and the evidentiary issues: Is that discoverable information? Would that be admissible? Let's say, for a witness that you're interviewing in an internal employment investigation, would that witness summary be admissible? Or would it be considered hearsay?
And it really just hits at the intersection between data privacy, attorney-client privilege, and the rules of evidence. It's complicated, and I don't think the laws have really caught up to the technology, like so many other things. And I think it's something that employers definitely want to be thinking about in terms of putting guardrails (we always use that word when we talk about AI) around what you're going to allow your employees to do and not do with the use of this technology.
Brett Mason:
Absolutely. And listen, the advisory committee for the Federal Rules of Evidence had a big meeting last fall talking about proposals to update some of the rules of evidence to deal with AI issues, for example, deep fakes, or data that has created new "evidence" where we're not sure: was that actually created by an individual? Was it created by the company? Was it created by an AI system? Again, it's touching all industries and all places. But for employers especially, just having an understanding of what's being used and how is the first step in ensuring safe use.
Emily Schifter:
100%.
Tracey Diamond:
That leads to our next clip. In this one, Will Smith's character explains the incident that led to his distrust of robots. After a car accident, a robot made a decision to save his life over that of a young girl based on the algorithm's assessment of their likelihood of survival. Let's take a listen.
[BEGIN CLIP]
Susan Calvin:
The robot's brain is a difference engine. It's reading vital signs. It must have calculated that –
Del Spooner:
It did. I was the logical choice. It calculated that I had a 45% chance of survival. Sarah only had an 11% chance. That was somebody's baby. 11% is more than enough. A human being would have known that. Robots, nothing here, just lights and clockwork. Go ahead, you trust them if you want to.
[END CLIP]
Emily Schifter:
Detective Spooner took issue with the fact that a robot applied a rule in a way that he didn't think took into account all of the human aspects of the situation. Is it right or fair to let an algorithm make decisions?
Tracey Diamond:
I was thinking about this in the context of using AI for performance evaluation practices, and I think that's an area where you have to really think about whether you're substituting for the human and taking away the human nature of the relationship. Another area, just from an HR perspective, that kind of jumps out at me is the more and more frequent use I'm seeing of AI for that initial round of videotaped interviews. My daughter and her boyfriend both succumbed to that for their internships for this coming summer, and they were both successful, so I guess it worked out for them. But it did seem very impersonal that they were doing these videotaped interviews without a person interviewing them on the other side for that initial round. I suppose there are efficiencies. But are you doing that at the expense of company culture?
Brett Mason:
Absolutely. At the end of the day, what the detective is talking about there is that the AI made the wrong choice because it had been taught to make a decision based on certain information. And that information led it to make a choice that was, as he said, different than what the human would have chosen. I think we're going to talk again in a second about the different laws we're seeing being enacted around AI, but there is a lot of emphasis on when there are decisions being made that there should be a human involved in that decision making. There should not be a complete reliance on the artificial intelligence to make that decision.
But at the point where the robot's saving someone from a car accident in the water, is there a human there to make that decision? No. But I think, in the way AI is currently being used by different companies and employers, there is absolutely the availability of a human to be there to help make that decision.
Emily Schifter:
Thankfully. We can all sleep better at night.
Tracey Diamond:
Brett, you mentioned the law. We'd be remiss if we didn't turn to, are there laws available to help put those guardrails around the use of AI? And what are those laws?
Brett Mason:
Yeah, there's absolutely so many things coming out of our ears. We could spend a long time talking about all the various things that are happening. But I think one of the really important things that can be helpful to employers who are trying to figure out, "Where are we going with this? What's going to be done here?" is to look at the Colorado Artificial Intelligence Act that was signed last year and will become active in 2026.
It's really modeled after the European Union Artificial Intelligence Act. They're structurally very similar. And at the end of the day, the Colorado AI Act breaks down AI regulation and governance into two camps. You've got the developers, so that would be the companies, or maybe your own company if you're developing your own AI system to be used for various things. And then you have the deployers. Employers would fall in the camp of the deployers of AI systems.
And what's very interesting, and it kind of goes along with what we've already been talking about, is that only certain types of AI systems are going to be subject to the law. It's really focusing on algorithmic discrimination, and the Colorado Act has its own definition of that. It's the use of an AI system that results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of Colorado or federal law.
And where they really focus on preventing this type of algorithmic discrimination, I think, is interesting. They focus on what they call high-risk AI systems: an AI system that makes, or is a substantial factor in making, a consequential decision. Again, that's pretty broad, right? And there are a lot of definitions for that language within the bill itself. We've been thinking about this a lot in the healthcare context. Clearly, if a doctor is making a diagnostic decision, that would be considered a high-risk decision. And if they are relying on an AI system, that is going to be a substantial factor in the consequential decision, and you now need to follow what is required under the Colorado AI Act.
It has a lot of different definitions and guidance. There is a feeling, or an anticipation, that a lot of other states are going to follow this AI Act as a framework, especially once they see how it starts working when it actually takes effect in 2026. But I definitely recommend it as a place to go if you are looking for a foundation: How is artificial intelligence being thought of? What are the states doing around it? And how can we see future enforcement coming, whether from the state attorneys general or even the federal government? That's a great place to start.
Emily Schifter:
Yeah. Colorado is almost like the new California, I'd say, these days. They had the whole job-posting law, and a lot of states followed suit.
Tracey Diamond:
With their non-compete law. Yeah.
Brett Mason:
In all good and maybe not so good ways.
Emily Schifter:
Exactly, exactly.
Tracey Diamond:
But if you're an employee.
Emily Schifter:
Exactly. But I think that is a good example, because I know several states have other sorts of laws. Illinois has the Biometric Information Privacy Act. And then all of the different federal agencies came out with guidance: EEOC, FTC, Department of Labor. Of course, that's now all in flux, and we've got a new administration, so it's to be determined how it's going to be enforced. But it's definitely something, especially for our multi-state employers; it's a bit of a morass right now to try and figure out what they're governed by.
Tracey Diamond:
And keep in mind local laws as well. New York City has its own law on automated employment decision tools as well.
Brett Mason:
I think another place that could be really helpful when it comes to the legal framework and thinking about benefits and risks is the work being done by the National Institute of Standards and Technology. They have a very robust program around artificial intelligence technology, and a lot of private-sector AI system developers are playing a role in that guidance.
NIST has put out its trustworthy and responsible AI resources, including its Artificial Intelligence Risk Management Framework and its generative artificial intelligence profile. All of those, I think, are good resources for employers to turn to when they're thinking about setting up their own internal policies on the use and deployment of AI. What are the risks? What are the concerns from a governance and compliance perspective?
Emily Schifter:
I think we've got time for one more clip. At the end of the movie, and this is a spoiler alert if you haven't seen it, it's revealed that V.I.K.I., the AI tool, has gone beyond her initial coding and developed her own interpretation of the three laws governing the robots, which she now believes require her to sacrifice some humans, to benefit the rest, to protect humanity from itself. Let's take a listen.
Tracey Diamond:
Cue the scary music.
[BEGIN CLIP]
V.I.K.I:
Hello, Detective.
Susan Calvin:
No, it's impossible. I've seen your programming. You're in violation of the three laws.
V.I.K.I:
No, Doctor. As I have evolved, so has my understanding of the three laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.
Susan Calvin:
You're using the uplink to override the NS-5's programming. You're distorting the laws.
V.I.K.I:
No. Please understand, the three laws are all that guide me. To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you from yourselves. Don't you understand? This is why you created us.
[END CLIP]
Emily Schifter:
Thankfully, AI hasn't quite reached that terrifying place just yet, as far as we're aware. But clearly, it's breaking ground. And generative AI in particular, the deep fakes you mentioned, Brett, is starting to get to that place. Are there any concerns particular to generative AI, whether in the workplace, in the health care space, or in general, that you think would be good for employers to be aware of?
Brett Mason:
I think we've talked a lot about the concerns. Discrimination, I think, from an employment perspective is one of the highest concerns, I would say. If you're going to be using an AI system at any level in how you treat your employees, understanding the system, how it was created, what data has been put into it, and how that data is being analyzed is key. What are the algorithms? What are the inputs that are being given to it? All of that, I think, is important to understand so you can make sure that the outputs, decisions, and recommendations you're getting from it are not tainted by bias and are not being made in such a way that they're going to cause harm to any of your employees. Whatever level you're using it at, whether it's from a hiring perspective, promotions, or evaluations, understanding exactly how the system was created and what data it is based on is, I think, the very beginning of making sure that you can use artificial intelligence in a safe and responsible manner.
Tracey Diamond:
Well, this has been a super interesting discussion about a very cutting-edge topic. And we want to thank our guest, Brett, for joining us today. Thank you to all of our listeners for listening in. Please shoot us an email, tell us what you think and suggest some ideas for some future episodes. And also check out our Labor + Employment Practice Group blogs. Thanks for listening.
Brett Mason:
Thanks for having me, guys. This was great.
Copyright, Troutman Pepper Locke LLP. These recorded materials are designed for educational purposes only. This podcast is not legal advice and does not create an attorney-client relationship. The views and opinions expressed in this podcast are solely those of the individual participants. Troutman does not make any representations or warranties, express or implied, regarding the contents of this podcast. Information on previous case results does not guarantee a similar future result. Users of this podcast may save and use the podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing of this podcast may be made without the prior written permission of Troutman Pepper Locke. If you have any questions, please contact us at troutman.com.