Chicago-Kent Professor's Research Finds Training Artificial Intelligence Models in Legalese Is Surprisingly Effective
How good a lawyer should an artificial intelligence system be? More importantly, how good do you want one to be?
When it comes to teaching machines to understand and use language, it turns out that the more legalese they know, the better, according to a new paper co-authored by Chicago-Kent College of Law Professor and Law Lab Director Daniel Martin Katz.
“It is pretty clear that a legally trained AI system is just going to perform better—but the open question is to identify the precise information diet to feed these models,” Katz states simply.
His paper, “LexGLUE: A Benchmark Dataset for Legal Language Understanding in English,” explores how well different large language models (LLMs) solve a variety of legal tasks.
Pioneered by organizations such as Google, OpenAI, and the Allen Institute, large language models such as BERT, ELMo, and GPT-3 have grown increasingly popular in the field of natural language processing. Most LLMs have been trained on general language; the question Katz and his colleagues sought to explore is how well these models handle legal tasks. They evaluated several different models on tasks such as analyzing contracts, including determining whether contract terms are unfair under European Union consumer law.
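The benchmark behind the paper is publicly available, so tasks like the one described above can be inspected directly. Below is a minimal sketch of loading the unfair-terms-of-service task, assuming the Hugging Face `datasets` library and the `lex_glue` dataset name of the authors' public release; the field names are assumptions based on that release.

```python
from datasets import load_dataset

# Load the UNFAIR-ToS task: clauses from online terms of service,
# labeled with the kinds of unfairness (if any) they exhibit under
# EU consumer law. Dataset/config names assume the public release.
dataset = load_dataset("lex_glue", "unfair_tos")

# Each example pairs a contract clause with zero or more unfair-term labels.
for example in dataset["train"].select(range(3)):
    print(example["text"][:80], example["labels"])
```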
“A lot of effort in computer science goes into making machines understand language broadly,” Katz says. “How do you train a machine in the language of law? Well, how do you train a person? You send them [to law school] for three years, and you say a lot of words at them. You use words in a variety of contexts. In a real sense, you are training a student’s neural network (their brains).”
That is essentially what the experiments in Katz’s paper did: they exposed machines to large corpora of text and measured how effective that exposure was at getting the machines to solve tasks.
It turned out that, of the seven models tested, the ones trained on legal language performed better on average, and not just on legal tasks, but on any type of task.
“The diet of getting legal information when it’s being trained makes it better across all tasks,” Katz says.
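In practice, that comparison of training “diets” amounts to running the same fine-tuning recipe from checkpoints with different pretraining. A minimal sketch, assuming the publicly available `bert-base-uncased` and `nlpaueb/legal-bert-base-uncased` checkpoints rather than the paper’s exact setup:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Two starting points for the same downstream recipe: a general-English
# checkpoint and one further pretrained on legal corpora. Checkpoint
# names are illustrative public models, not the paper's own code.
checkpoints = {
    "general": "bert-base-uncased",
    "legal": "nlpaueb/legal-bert-base-uncased",
}

for name, ckpt in checkpoints.items():
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    # num_labels is illustrative; it would match the task at hand.
    model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=8)
    # ...fine-tune and evaluate on the same benchmark task here; only
    # the pretraining "diet" differs between the two runs.
    print(name, model.config.vocab_size)
```

Because everything but the starting checkpoint is held fixed, any gap in downstream performance can be attributed to what the model read during pretraining.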
The paper was deemed intriguing enough to be accepted for presentation at the 2022 annual meeting of the Association for Computational Linguistics in May.
“It’s a rare thing to see a law professor get a paper accepted into a computer science conference,” Katz notes. “It’s the type of place you should take this type of work: a group of people that can actually evaluate its technical merits.”
The research sits at the cutting edge of both computer science and the law, an area that Illinois Institute of Technology and Chicago-Kent College of Law are uniquely situated to excel in, Katz notes.
“Even though machines are getting good at understanding basic language, it’s a much harder problem to understand specialist languages: medical English, or law,” Katz says. “We’re trying to answer: How do we build the scientific infrastructure to have machines understand legal language?”
“Laws and their interpretations, legal arguments and agreements, are typically expressed in writing, leading to the production of vast corpora of legal text. Their analysis, which is at the center of legal practice, becomes increasingly elaborate as these collections grow in size,” the authors note in the paper, adding that “natural language understanding technologies can be a valuable tool to support legal practitioners in these endeavors.”
Katz’s co-authors on the paper are Ilias Chalkidis of the University of Copenhagen, Denmark; Abhik Jana of the Universität Hamburg, Germany; Dirk Hartung of Bucerius Law School, Hamburg, Germany; Michael Bommarito of CodeX, Stanford Law School; Ion Androutsopoulos of the Athens University of Economics and Business, Greece; and Nikolaos Aletras of the University of Sheffield, United Kingdom.