Change is often at odds with stability, and new advances in Artificial Intelligence (AI) are shaking things up.
As many of you know by now, on November 30, 2022, OpenAI released a chatbot called ChatGPT.
ChatGPT is an example of generative AI: an algorithm that absorbs vast amounts of information and creates original content in response to a user’s prompt. As with all tools, ChatGPT is only as good as its user, and the quality of its output depends on the quality of the input. A person can have human-like conversations with ChatGPT and can ask it to compose emails and even entire essays. The tool’s many uses are still coming to the forefront, but blind reliance on a tool like ChatGPT may lead to costly mistakes.
Reliance on ChatGPT in the workplace may create quality issues, privacy breaches, copyright infringement, and more. As an employer, it’s your duty to make sure these liabilities are addressed in company policy and through training.
First, ChatGPT has been known to “hallucinate” answers that seem convincing but are actually wrong, as one Morgan Stanley analyst observed. The tool draws information from sources that may themselves be inaccurate or out of date. Ensure that your employees are not using ChatGPT for sensitive tasks where accuracy is important, since the chatbot’s well-written answers may create a false or faulty sense of security.
Second, your employees may be using ChatGPT to accomplish a wide array of tasks, such as drafting text, searching for sources, or compiling data. In doing so, an employee may be inputting sensitive company data, and even confidential information, into the AI tool. Does this breach any contracts you have with clients? Can you guarantee that your data is protected in these programs? Employers need to be mindful of this risk and can reduce liability by adopting policies restricting, or even banning, the use of AI tools like ChatGPT.
Third, if ChatGPT produces work that is too similar to, or copied from, existing sources, you may find yourself on the wrong end of a copyright infringement complaint. The legal impact of using AI as a middleman for sourcing and content creation remains to be seen; in the meantime, many universities have banned AI use or warned students about its connection to plagiarism.
Be sure your workforce understands these risks by holding training sessions that inform and protect your organization. Be proactive in adopting policies to limit liability while the technology develops. Further, be aware that regulators will not relax existing rules to excuse mistakes made along the way while using AI.
Federal agencies responsible for enforcing civil rights, non-discrimination, fair competition, consumer protection, and other important legal protections issued a joint statement confirming that “existing legal authorities apply to the use of automated systems and innovative new technologies just as they apply to other practices.” The statement warned that automated systems can be “skewed by unrepresentative or imbalanced datasets” and produce discriminatory outcomes for which you’ll still be liable. Automated systems also lack transparency into their internal workings, making it difficult for users to assess whether they are fair. Further, developers may not anticipate all the public and private uses to which people will put chatbots, leaving great uncertainty about whether these tools comply with the many state and federal regulations by which employers must abide.
In light of these advances, is your company prepared to address liabilities related to AI? Should you have any training or policy development needs, the attorneys at The Coppola Firm are ready to help, tailoring policies, procedures, and best practices to your business.