HR Alert: More On Artificial Intelligence in the Workplace


Almost every day there seems to be a new headline about advances in, or issues with, artificial intelligence (AI). As AI technologies become more prevalent in the workplace, there's a growing need for workplace policies that regulate their use. AI is here to stay, so it's in an employer's best interest to acknowledge and manage the use of AI rather than ignore it.

Below are the most common issues we're seeing with the use of AI in the workplace. AI is constantly evolving, so courts and legislators are playing catch-up when it comes to regulating it. Still, we're fortunate that the trials and errors of others have begun to teach us ways to responsibly manage and use AI in the workplace.

Bias & Algorithmic Decision-Making. Although AI tools can ease decision-making, for example, deciding whether to hire or promote employees, employers can't blindly rely on algorithms. Because AI software is developed by humans, those developers' biases can surface in the decisions the AI makes.

As we recently reported, the United States Equal Employment Opportunity Commission (EEOC) has addressed the use of algorithms and other AI tools during employee selection processes. While it didn't make substantive changes to existing federal law, the EEOC clarified that Title VII of the Civil Rights Act of 1964 (Title VII) applies to the use of these decision-making tools. In other words, an employer can run into legal trouble under Title VII if an AI tool it's using has a disparate or adverse impact on members of a protected class.

States and local governments are also recognizing the potential for unlawful discrimination through AI bias. In response to the prevalence of these biases, lawmakers are drafting and beginning to enforce legislation that regulates the use of AI in workplace decisions. For example, a New York City ordinance going into effect in July 2023 will require companies to notify candidates that AI is being used in the hiring process and to allow candidates to opt out of having AI analyze their applications. Additionally, the ordinance requires companies to conduct an independent technology audit to check for bias before using an AI tool and then to conduct a yearly audit going forward. We can expect the NYC ordinance to be a catalyst for similar legislation elsewhere.

Privacy Concerns.  AI technologies rely on the collection and analysis of the data entered into them. Entering and storing this data raises critical questions: who has access to the information, and how secure is it? While some AI tools respond to their users' text prompts, others rely on biometric data to function. For example, facial recognition technology (FRT) uses AI to recognize individuals based on their facial features. While FRT may be a useful way to secure a company's information, companies must employ safeguards to protect the stored data, because unauthorized access creates serious security risks.

We've all likely watched a video of a politician saying something so absurd we couldn't believe they'd say it. Well, as you may know, they likely didn't. Instead, we were fooled by a so-called deep fake.

The wrongful use of data, including facial scans and voice recordings, is making deep fakes more convincing, and that creates risky situations, whether the impersonation is of a national politician or quite possibly your loved one. Moreover, deep fake technology is being used to fool FRT into unlocking secure accounts and devices for the wrong people, exposing this formerly protected information.

Confidential information should never be entered into AI programs. As we already know, AI is continuously developing, which requires tech developers to work on the back ends of these programs. This means the information we enter likely is being viewed by others without our knowledge or informed consent.

Moreover, the information entered into AI programs is vulnerable to malicious attacks. Did you know that ChatGPT experienced a data breach in May 2023 that leaked users' conversations? Although the breach was minor, it's a prime example of how entered information may become visible to unauthorized viewers. No doubt hackers continue to look for vulnerabilities in AI software to prey on unsuspecting victims.

The bottom line: confidential information should never be entered into AI programs.

Surveillance and Monitoring. FRT is useful for adding an extra layer of security to accounts and devices. As a result, businesses and government offices have begun incorporating FRT into the surveillance plans for their sites. This may help track who's on site and when, but the technology presents ethical and legal issues, particularly in New York State. Oftentimes a business uses FRT without individuals' consent or even knowledge, and that can violate the law. If you're using FRT, be sure to check for compliance first.

State and local governments are also continuing to search for a balance between the advancements offered by FRT and an individual's right to privacy. For instance, in 2020, New York State banned the use of FRT in schools. New York City lawmakers have proposed a ban on the City's use of FRT. Other cities across the country have banned the use of FRT by police departments.

Not only does FRT create privacy concerns, but there's also a risk of inaccuracy that can have detrimental effects on one's livelihood. The misidentification of an individual can result in wrongful accusations with long-lasting effects. For instance, inaccurate FRT identifications resulted in the wrongful arrests of three Black men. Discriminatory biases are prevalent within FRT, with the poorest identification accuracy consistently found for young Black women.

To be sure, the use of FRT is a controversial topic. Before using FRT in the workplace, employers should ensure their use complies with federal, New York State, and local laws, and they should notify their employees and obtain consent.

Copyright Infringement.  AI presents copyright issues concerning both the use of copyrighted materials by AI software and the copyrightability of AI-generated materials. AI tools are trained to generate outputs through exposure to existing works, some of which may be subject to copyright protection. This process has created a debate between copyright owners and AI companies about whether training on existing works and creating outputs from them infringes existing copyrights. Under federal law, copyright owners may be able to show that an output infringed their copyright if (1) the program had access to their work and (2) it generated an output that is “substantially similar.” In addition to the challenge of proving these two elements, there's also the question of who's liable for the AI's infringement. Under current case law, both the AI company and the AI's user may potentially be liable for infringement.

On the other side of the coin, we're seeing developers and AI users attempting to copyright AI-generated works. The U.S. Copyright Office only affords copyright protection to works “created by a human being.” Courts have upheld the requirement of human involvement in the creation of a work; however, a pending lawsuit is challenging the human-authorship requirement. In February 2023, the U.S. Copyright Office rejected the argument that human authorship exists when a program generates a work in response to text prompts. In addition to this decision, the Copyright Office released guidance on the copyrightability of AI materials, clarifying that works won't be subject to copyright protection when the AI tool itself generates the expressive output. But works that are authored by a human and incorporate AI-generated materials may be afforded protection.

AI certainly is creating complexities for intellectual property law. We can expect a great deal of litigation in the future over the copyright questions AI presents.

Terms & Conditions.  It's important to read and understand the terms and conditions of the AI tools being used in your workplace. Although this is a tedious task, it's crucial to ensure that your use of AI complies with the requirements set out by the AI's owner. For example, the terms and conditions may explain who owns the output while also conditioning ownership of the work product on compliance with other requirements. You need to know and understand what those requirements are.

Blind Reliance.  There's no doubt that AI creates convenience in the workplace. At the same time, it's clear that using AI tools to make employment-related decisions may expose employers to liability for discriminatory practices. Moreover, the use of FRT presents a risk of misidentifying individuals, which can lead to serious harm. We've also observed how AI-generated materials may result in inadvertent copyright infringement by the user.

As a result, you should not blindly rely on the results of an AI tool.

Although AI tools may be used to complete tasks in the workplace, employers are responsible for ensuring these tools are being used lawfully. For instance, employers can be held liable for a decision-making tool’s disparate impact even if the employer was unaware of the tool’s bias.

AI programs like ChatGPT and others also are known to hallucinate answers. These programs' knowledge is only as current and accurate as the developers behind the scenes train it to be. Yet ChatGPT will oftentimes generate an answer even when it doesn't have one, and these made-up answers come across as convincingly correct. This is a warning to those who use AI: take responsibility for, among other things, ensuring that your work product is accurate.

The Coppola Firm understands the risks and advantages of using AI in the workplace and will help you craft effective and practical policies and approaches to ensure your business remains in compliance with the law. Contact us at your convenience, and we'll be happy to assist you.

Written by Lisa Coppola

Founder of The Coppola Firm

Lisa A. Coppola, Esq. understands the challenges her clients face, whether they’re starting a new business, taking their existing operations in a new direction, or facing a claim or threat.



© The Coppola Firm
Attorney Advertising. Prior results do not guarantee a future outcome.
