Rodney Morrison

91 Indictments and an AI Judge: Are You Ready for a Verdict?




Imagine you're sitting in a courtroom, but instead of a judge or jury, there's an AI avatar at the bench. It's just finished analyzing millions of pieces of evidence—voicemails, contracts, emails, texts, and testimonies—in minutes. Now, it's about to deliver a verdict on 91 indictments. Sounds like science fiction, right? But with the way AI is progressing, this could very well be a snapshot of our future legal system.


Let's chat about this. On one hand, the idea is pretty cool. Think about it: AI could cut through the backlog of cases faster than any human ever could. It's not swayed by hunger, sleep deprivation, or personal biases. It could treat every piece of evidence with the same level of scrutiny, potentially making the justice system more fair and efficient.


But then, there's the human side of things. Justice isn't just about processing data and spitting out verdicts. There's a lot more at play. When judges and juries make decisions, they're not just looking at the facts. They're considering the context, the intent behind actions, and sometimes, the potential for redemption. These are deeply human considerations, grounded in empathy and social understanding. Can a machine really grasp the concept of mercy or the nuances of motivation?


And let's not forget the whole issue of trust. Would people accept a verdict delivered by AI? Trust in the judicial system is already a complex issue, and introducing AI into the mix could complicate things further. Plus, AI isn't perfect. It's made by humans, after all, and can inherit all our biases. An AI trained on past legal decisions might perpetuate historical injustices, thinking it's just following the data.


There's also a bit of a 'black box' problem with some AI systems. They can analyze data and make decisions in ways that aren't always transparent, which could raise questions about accountability. If an AI makes a controversial decision, who's responsible? The developers? The data it was trained on? It's a tough nut to crack.


AI is also being used in traditional law firms to expedite the creation and management of business contracts. However, initial attempts to use a large database of contracts to generate something usable for a specific instance remain challenging.

What was missing? While there were many provisions with different language capturing a variety of business issues, perhaps a smaller subset were absolutely crucial. Which ones? Can you tell from the contract? What was the actual intent of each provision? Do we know why it was included?


In this process, humans are where the value lies. The answer is not in the data.

This discussion touches on something I am very interested in: AI adoption for business use. When consulting with businesses on how best to implement AI initiatives, the very first step in business process discovery needs to take one question into consideration: “What is the human value?” In other words, what are we going to rely on humans to do properly in this process to provide the best outcome?


In my next piece, I am going to follow up by drilling down into a very similar process: the business of conflict resolution and mediation. This industry is very worried about the potential impact of AI. Will AI replace the highly trained mediator?


Joining me on this journey is Rae Kyritsi, a dispute resolution specialist at Caldera Dialogue and Consulting Services. She supports the development of mediators, conflict engagement programs, and community dialogue. Together we will attempt to allay the fears of an AI takeover and focus on how we discover the value of being human.


Interested in learning more about how we can help you operationalize these concepts into your own practice or project? Contact us!




