Dazzling technology can help perform legal tasks, but is lacking in judgment

By Mark Green

Kentucky attorneys and law firms love the idea that artificial intelligence can review vast amounts of information almost instantly and perform complex writing tasks faster than the eye can read the results.

That said, Kentucky’s legal community is taking a very cautious approach to AI. The dazzling technology is not yet considered reliable enough to produce finished work that can be presented in court.

Legal tools powered by artificial intelligence have actually been in regular use for many years and have become so universally accepted that they do not require client input.

“Use of Lexis and WestLaw to conduct legal research and to confirm that a case remains good law is standard practice,” said A.J. Singleton of Stoll Keenon Ogden, an expert in legal ethics. “Likewise, lawyers’ use of document-review software with predictive coding—now known as Technology Assisted Review, or TAR—has become not only commonplace but largely expected. While this is not the ChatGPT-like Generative AI that has prompted much interest of late, these programs are still forms of AI that lawyers use on a daily basis.”   

Singleton chairs the American Bar Association Business Law Section’s Professional Responsibility Committee, is special advisor to the ABA’s Standing Committee on Ethics and Professional Responsibility, and is a member of the Kentucky Bar Association’s Ethics Committee. He conducts continuing legal education presentations on ethical obligations in the use of technology in representing clients.

Seemingly sentient AI bots burst into general cultural awareness in late 2022, when the ChatGPT chatbot was unveiled with stunning interactive language skills and a seemingly deep awareness of nearly all fields of knowledge, learned by absorbing billions of pages of internet content. It has since been joined by Microsoft Copilot, Jasper, Anthropic’s Claude, Google’s Bard (now known as Gemini) and others.

 AI technology is suddenly everywhere.  

The Lane Report asked a number of Kentucky law firms about whether and how they are using AI, if they have special guidelines for it and about client feedback.  

  The jury is still out  

“For many of your questions, the off-the-cuff answer is—from my perspective—it’s too early to tell,” Singleton said in an email. “Lawyers can and should be cautiously excited about Generative AI being able to assist their practice. And I think that there will be efficiencies in the future.”  

 Lawyers must understand both the benefits and risks of any technology used in representing their client, he said.   

“Any number of lawyers, law firms, and bar associations are looking at AI, particularly Generative AI,” he said, “to determine how and whether to use Generative AI, and what policies to put in place for how to use it. From a legal ethics standpoint, the ethics rules involving competence, confidentiality, communication with the client, and supervision are all part of these discussions.”   

The first two points are especially important.   

 “With respect to confidentiality, a lawyer has an ethical duty not to disclose any information relating to the representation of a client without express client consent, or unless the disclosure is impliedly authorized or the disclosure fits specific enumerated exceptions. When using an AI platform, the lawyer needs to appreciate the extent to which they are disclosing client information to the AI program (and its shared database) and what that AI program is doing with the information the lawyer inputs or uploads. The lawyer also has to be competent to know whether the output the AI platform generates is accurate or reliable.”  

 No one is diving in.  

“We have tested some AI tools in connection with our practice,” said Richard Mains, member of Rose Camenisch Steward Mains of Lexington. “In our experience, these tools still appear to be in their early stages. While they can assist a lawyer in some aspects of drafting and summarizing legal documents, they are still of relatively limited assistance.”  

AI does offer some shortcut assistance on tasks such as quickly writing general transmittal letters and emails, similar to how it is being deployed by businesses outside the legal field, Mains said.

Dickinson Wright is also evaluating and testing AI tools, said Brian M. Johnson, the firm’s South Litigation practice group chair in Lexington.

“But we are being very cautious about them due to concerns about reliability and security of client information,” Johnson said. “Most of the well-known AI products that people typically think of these days do not fit well with the legal practice. For example, we’ve seen other firms sanctioned for making filings containing false information generated by AI.”  

 AI use and liability   

One key area of focus for the legal community is how artificial intelligence tools can get a user into trouble, particularly regarding intellectual property.

There are two real-life situations in which AI has created a buzz in the legal community, and both have firms tapping the brakes rather than the accelerator. 

The first arises from a 2023 legal case in federal court in Manhattan—a high-dollar practice venue—in which a U.S. judge fined two lawyers and a law firm $5,000 because fake citations generated by ChatGPT were submitted in a court filing.   

Steven Schwartz, Peter LoDuca and their law firm Levidow, Levidow & Oberman were ordered to pay for filing a brief that referred to six cases that ChatGPT invented—a “hallucination” in tech talk—when they used the chatbot to help write a legal response in a case against Colombian airline Avianca.  

In the second, The New York Times sued OpenAI and Microsoft last December for having used the news organization’s copyrighted content to “train” AI chatbots that then become competitors in providing information.

In late April, eight more newspapers owned by Alden Global Capital, the second-largest newspaper operator in the U.S., also accused OpenAI and Microsoft of illegally using their content to build generative AI chatbot products. The Alden suit claims damage from fabricated responses ChatGPT has given to queries about what products or solutions individual newspapers recommend: for instance, stating that a large news publication had advised that research showed smoking could cure asthma.

Doing work, but also creating work  

Artificial intelligence tools might not be proficient enough for creating finished legal work, but they are plenty good at creating issues for lawyers to sort through for clients.  

Stephen Hall, an intellectual property practitioner with Wyatt, Tarrant & Combs, and Brantley Shumaker, an intellectual property lawyer and principal partner at Gray Ice Higdon in Louisville, discussed AI issues at an April event held by the Technology Association of Louisville Kentucky (TALK) that focused on how AI and IP are evolving.

Hall pointed out that AI is phenomenal at sifting large data sets of any kind and recognizing patterns, as well as predicting what the next element of the pattern—the “response”—is most likely to be. But it lacks the ability to discern whether the pattern is true—or legal.

“Things are moving extremely fast,” Shumaker warned. “It’s been remarkable. I’ve been working with AI-related inventions for about 10 years now. I’ve seen patent applications I wrote three or four years ago that are now almost completely obsolete, because they’ve discovered better ways of doing almost everything.”

AI is very good at writing computer code, he said. With “prompt engineering,” AI tools can create new solutions to many tasks. The question for intellectual property lawyers like himself, Shumaker said, is determining what is protectable intellectual property.

“Is (the inventor) you because you told the AI app to do it, or is it the AI?” Shumaker said. “The patent office doesn’t award patents to computers.”