With this week’s news that a lawyer litigating in the U.S. District Court for the Southern District of New York has been called on the carpet for using AI to write a brief containing case citations and quotations that simply do not exist, a federal judge in Texas, Judge Brantley Starr, has ordered that every lawyer appearing before him certify their use (or non-use) of generative AI.
His order requires:
“All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being. These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the Court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.”
To be sure, we can expect more jurists to follow Judge Starr’s lead. Even if they do not, responsible New York attorneys know that the Rules of Professional Conduct require diligence and competence, among other important obligations inherent in the license to practice law. So using generative AI without putting in the hard work of ensuring accuracy – by cite-checking through reliable sources or otherwise – doesn’t cut it, at least for now. Be forewarned.
The Coppola Firm represents lawyers and other licensed professionals in discipline matters. Our senior counsel, David Brock, has decades of experience and is available for consultations and legal representation. We can be reached at 716.839.9700 or info@coppolalegal.com.