For anyone watching the news lately, the advancements in AI (notably from OpenAI and its star project, ChatGPT) must have caught some attention.

In healthcare, for example, AI-powered solutions have been used for some time now in clinical settings and ongoing research. Despite some skepticism, they have been a trustworthy ally for healthcare professionals. It is, therefore, fair to wonder whether the legal industry will react similarly to Machine Learning systems, especially those specifically designed to address legal problems.

The pandemic made automation and digitization vital to the legal business. So much so that stakeholders now contemplate AI's role in the new legal architecture and, going further, the place of lawyers in the AI 2.0 era.

Has the point of no return been reached?

Until now, legal professionals were needed to feed the algorithm with legal information, logical reasoning, and intelligence. Training AI systems requires enormous amounts of curated data and processing power, a job suitable for a Legal Data Scientist. More on this role here. The Legal Data Scientist is usually responsible for reviewing the data and training the algorithm in a controlled environment.

In an attempt to scale the process, and for the sake of scientific (and, dare I say, business) advancement, the new generation of AI gained access to curated Internet data repositories. It was not given uncontrolled access to the Internet, in order to protect the system from biases and extremist rhetoric of all sorts. This strategy increased performance and reliability, but it did not make AI-powered solutions infallible.

And this brings us to the new cool kid on the block: ChatGPT. It has so far proven its skills as a conversational bot. Still, there is a lively debate as to whether ChatGPT can perform legal-related tasks, such as law interpretation, legal document drafting, case law research, e-discovery, etc.

Before dreaming of a dystopian (?) world where justice will be mediated and delivered by AI, we should first understand how this Machine Learning product actually works. To form sentences, ChatGPT uses words and structures found on the Internet and fed to it by humans during the training process. It does so by calculating the probability of certain words following others and/or forming constructs with one another. This is a purely statistical exercise. It involves no logic, let alone legal logic. To put it bluntly, "ChatGPT has zero understanding of the text it generates" [1]. Moreover, GPT training does not force the system to limit itself to pure facts: it is instructed to be a creative text generator. [2]
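The next-word prediction idea can be sketched in a few lines of Python. This is only a toy bigram model over a made-up three-sentence corpus (an assumption for illustration; real systems like GPT use neural networks trained on vastly more data), but it shows the core point: the output is a probability distribution over words, with no understanding involved.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for Internet-scale training data (illustrative only).
corpus = (
    "the court ruled in favor of the plaintiff . "
    "the court ruled against the defendant . "
    "the judge ruled in favor of the defendant ."
).split()

# Count how often each word follows another (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Return the probability of each candidate word following `word`."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In the toy corpus, "ruled" is followed by "in" twice and "against" once,
# so "in" gets probability 2/3 and "against" 1/3 -- pure counting, no logic.
print(next_word_probs("ruled"))
```

Generating text then amounts to repeatedly sampling from such distributions, which is why fluent-sounding output carries no guarantee of factual, let alone legal, correctness.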

Therefore, a layperson cannot use ChatGPT for legal-related tasks. Like Tesla's Autopilot system, it can function well enough provided it has human supervision. Specifically, it needs a legal professional to direct, review, and correct any deviation.

Are we there yet?

Photo credit: thetrialwarrior.files.wordpress.com. Source: Obiter Dicta

While language model AI and legal AI systems cannot properly perform a qualified lawyer’s job, they are making remarkable advancements. After the pandemic change-of-paradigm marathon, legal professionals will most likely accept these new developments, adapt to them, and prepare their practices for the new wave.

It’s not a matter of choice; it’s a matter of necessity. The more sophisticated their clients, the more they’ll ask for scalable services that can only be achieved through high-level automation. This will push down the prices for these types of services, making the use of human labor for them nonsensical. Automation will, in turn, allow humans to focus on human-centric services such as business development, client onboarding, and grey-zone legal advice, services that pay off beautifully and are worth investing more time in.

At this point, ChatGPT does not outperform regular yet rigorous research. The contracts it drafts and the search results it returns do not even top what you could find on the Internet for free. It may, however, be more creative in combining them, and that is where its added value lies. But when legal factuality is required, one cannot count on this product.

AI language models come with great potential but with notable limitations and risks. The most significant problem so far is the lack of transparency regarding how Machine Learning systems, including ChatGPT, actually work. Not even their creators fully understand why the system chose one particular linguistic construct and not another to depict a given situation. In the absence of a backdoor or any interrogation method we could use to explain, understand, defend, and argue against this system, its results are merely anecdotal and amusing. Sometimes the results are reasonable and hold water from a legal perspective. But that does not prove the infallibility of the solution: even a broken (mechanical) clock is right twice a day. Unless we can be 100% sure that the results are foolproof, or at least transparent, we cannot entrust complex legal issues to ChatGPT.

Arguably, ChatGPT can bring added value to the legal industry through legal chatbots, text recommendations for legal documents, and the like. With ChatGPT passing the Turing test [3], it even feels as if the system is moving in the right direction. But legal professionals can be at ease: their jobs are safe… for now.




