AI and Law


Artificial intelligence is advancing so rapidly that the world has little choice but to adapt to it. Machine learning language models capable of replicating human writing, like GPT-1, have been around since 2018. However, it is the recent iterations, including GPT-3 and GPT-4, that have taken the world by storm. An example is this article published in 2020 by The Guardian:

“A robot wrote this entire article. Are you scared yet, human?” (GPT-3, The Guardian)

I could go on forever about the entry of AI into our lives, including the fact that more than 25% of Google’s new code is generated by AI.

Law, being a subject of nuance, was long considered safe from AI’s mechanical imitation of routine tasks. Not anymore. The explosive growth of Large Language Models (LLMs) and other machine learning models will certainly keep lawyers and legal professionals on their toes.

Current capabilities

The majority of AI tools that have become widely available are based on Large Language Models. These LLMs, which can have hundreds of billions of parameters, are trained on vast amounts of human-generated text and are capable of understanding and generating human language. Various models for speech recognition and image generation also exist.

These language models can presently generate computer code, perform mathematical operations and answer questions in natural language. This gives a semblance of intelligence, but experts agree that the AI is simply remixing and recombining existing writing that is relevant to the prompt. The generated content is based on patterns the model has learned from analysing its large training dataset. It is up for debate when AI will be considered truly intelligent, i.e. matching or exceeding human intelligence, but that moment may arrive sooner than expected.
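To make concrete what “remixing patterns” means, here is a deliberately tiny sketch: a bigram model, not a real LLM. It counts which word follows which in a small invented corpus, then generates text by replaying those counts. The corpus and function names are illustrative assumptions of mine; real LLMs apply the same next-token idea at a vastly larger scale using neural networks.

```python
import random
from collections import defaultdict

# A tiny made-up "training corpus" (illustrative only).
corpus = (
    "the court held the contract void . "
    "the court held the agreement valid . "
    "the contract was signed by a minor ."
).split()

# Learn the patterns: record which words follow each word.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a word that followed
    the previous one somewhere in the training corpus."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Note that every adjacent word pair in the output already occurs somewhere in the corpus: the model can only recombine what it has seen, which is both why generated text sounds plausible and why nothing guarantees it is true.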

Limited by their dependence on existing datasets and algorithms, widely available models like ChatGPT and Gemini suffer from ‘hallucinations’. An AI model is said to be hallucinating when it presents something false as fact.

Example:

In this example I queried Copilot by Microsoft (powered by GPT-4 by OpenAI) about cases where contracts entered into by minors have been held valid by Indian courts.

The first result refers to the case of Mr. Sharma v. Mr. Nitin. The first source cited is the Lawctopus article ‘Contract with a minor – Mr. Sharma v. Mr. Nitin’, which explicitly cautions readers that the case is fictitious.

The model’s inability to filter out the fictitious case makes it unreliable in its present form. A lawyer in the USA learned this the hard way when sanctions were imposed on him for citing six precedents, complete with “bogus judicial decisions with bogus quotes and bogus internal citations”, from cases which did not exist.

What does the future hold?

Admittedly, the present state of affairs makes AI tools unreliable and borderline dangerous to use. They can give one a good starting point for research, idea generation or summarising large bodies of text. But since the practice and profession of law is about nuance, it will take some time for AI models to become reliable for legal professionals, let alone replace them. How much time? We don’t know. It could be months, years or decades.

Parting thoughts

Away from the analysis of AI’s capabilities, an existential question stares us in the face: the question of becoming obsolete. Humans have evolved over millions of years into intelligent beings with goals, purposes and motivations. Our minds, DNA and hormones are programmed to make us do things that carry meaning and purpose. Are we staring down the barrel of a dystopian future where AI has made human ability redundant?