
Juries and AI as critical infrastructure

A few years ago, I had the opportunity to serve on a jury. Twelve ordinary citizens were called to hear a case in which five supremely confident but deeply conflicting witnesses testified. Our job was to sort out who was credible, what they were credible about, and to assemble a finding of fact from the witness mess. The defendant did not testify, which made things more challenging: the only evidence we had to go on was the perspectives of five bystanders, each of whom told a different story.

Twelve of us got into a room and started talking, going point by point through what the witnesses had said. Eventually we reached consensus and announced our verdict: not guilty, one of exactly two possible outcomes (the other being guilty). The defendant went home a free man.

What does this have to do with AI? Plenty.

Juries, AI, and Black Boxes

A jury is essentially a “black box” – testimony goes in, deliberated facts come out. A jury’s job is to take all the evidence, weigh its relevance and credibility on an opaque internal scale, and produce a guilty or not-guilty verdict based on its best reasoning. There’s a lot riding on the outcome, but it’s a process we’ve accepted as the best tool we have for determining guilt or innocence.

AI works very similarly. You give it inputs, and it weighs them against all the information at its disposal to deliver a result. Many times it gets it right; other times it gets it wrong. AI can hallucinate because its inputs are faulty or because its reasoning is flawed, whether through incorrect design or incorrect weights. And nobody really has insight into the decision-making process.
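To make the black-box framing concrete, here is a minimal sketch of what interacting with one looks like in code. Everything in it is a hypothetical stand-in: query_model, the endpoint URL, and the response shape belong to no particular vendor. The point is what the caller can and cannot see – a prompt goes in, text comes out, and the deliberation in between is invisible.

```python
import requests  # any generic HTTP client; vendor SDKs work the same way

# Hypothetical endpoint, a stand-in for whatever model you actually call.
API_URL = "https://example.com/v1/generate"

def query_model(prompt: str) -> str:
    """Testimony in, verdict out. The deliberation happens behind closed doors."""
    response = requests.post(API_URL, json={"prompt": prompt}, timeout=30)
    response.raise_for_status()
    # All the caller can observe is the returned text. The learned weights
    # that produced it are as opaque to us as a jury room.
    return response.json()["text"]

verdict = query_model("Here are five conflicting accounts. Who is credible?")
print(verdict)  # confident prose, with no trace of how it was reached
```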

You might say that AI has serious consequences when it makes mistakes. That has been true. But a jury’s mistakes can have severe consequences, too.

Essentially, AI and a jury are each a black box in their own right.

So what now?

The fact of the matter is that AI isn’t going anywhere any time soon. We all enjoy a little shared schadenfreude at the prospect of the AI bubble collapsing, and there’s decent evidence that we are, in fact, in a bubble right now. But the collapse of the dot-com bubble didn’t end the internet or reset us to the days before computers. It was a temporary setback, and something else, something arguably more sustainable, emerged.

There are serious and real concerns about the proliferation of data centers, the consumption of electricity and clean water, and the hype around and over-reliance on AI technology. There are concerns about jobs, specifically white-collar professional jobs, that are very real. And there are most certainly ethical concerns with ingesting copyrighted and privileged materials to synthesize a response, bypassing the source websites and reducing traffic to businesses that rely on visitors to produce revenue.

At the same time, these concerns will lessen over time. AI will get more efficient. The bubble will burst and computer component prices will come back to earth. The strengths and limitations of AI will become well understood. AI will not eliminate jobs so much as alter them, augmenting workers by reducing grunt work and leaving more room for human analysis. The complaints about copyright infringement are real, and the toll is significant – and yet Napster and The Pirate Bay told the entertainment industry that its model needed to shift, and the industry responded with Spotify and Netflix.

The future I am preparing for

I am by no means an AI apologist. I have not drunk the Kool-Aid, and I do not think the current model of AI is sustainable or a solution to every problem. In fact, quite the opposite: just last month, I graded AI’s performance on real-world tasks and found it lacking. Even in preparing this article, when I provided it with my previous article, it made mistakes interpreting those facts – in much the same way a human might have – and those mistakes could have led to inaccurate conclusions had I not caught them.

And yet, as a person who relies upon and implements technology, I am a realist. Railing against a new technology because of its flaws does not stop it from maturing or being adopted. Technology improves – it’s why I’m typing this on a Mac Mini that’s slightly bigger than my phone instead of on a computer that fills a room in my house.

I think it makes sense to be a realist about what is coming with AI. Its uses have yet to be fully established, and its benefits have yet to be fully realized. Yes, there are ethical problems. Yes, there are environmental concerns. Yes, there are intellectual property considerations. And yes, these are solvable problems.

Ultimately, the value in using a tool like AI comes from being an expert in whatever you’re commanding the tool to do – in much the same way that a chisel in the hands of a woodworker produces art, while the same tool in the hands of a layperson produces garbage. If you can catch the mistakes and reason about the output, you can treat AI like a tool – an imperfect tool – instead of an authority.
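If you want to see what “tool, not authority” looks like in practice, here is a minimal sketch of one way to hold the output at arm’s length. The query_model function and the validation rule are hypothetical stand-ins, not anyone’s actual API; the pattern is simply generate, verify against something you already know, and hand the work back to a human when verification fails.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    accepted: bool
    reason: str

def query_model(prompt: str) -> str:
    # Hypothetical stand-in for any AI call; swap in a real client here.
    return "A confident summary that may or may not reflect the source."

def verify(text: str, known_facts: list[str]) -> Draft:
    """Treat the output as a first draft from an unreliable witness.

    A real check would be domain-specific: tests for generated code,
    citation lookups for prose. Here the rule is deliberately simple:
    every fact we already know must actually appear in the draft.
    """
    missing = [fact for fact in known_facts if fact not in text]
    if missing:
        return Draft(text, accepted=False, reason=f"missing facts: {missing}")
    return Draft(text, accepted=True, reason="passed the checks we defined")

draft = verify(query_model("Summarize my previous article."),
               known_facts=["graded AI on real-world tasks"])
if not draft.accepted:
    print("Back to the human expert:", draft.reason)
```

The woodworker, not the chisel, decides what counts as a passing check – which is exactly the expertise described above.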

Will I use AI in the future? Undoubtedly yes. Do I still have concerns? Absolutely. And yet, I look forward to a future where those concerns can be addressed ethically, reasonably, and comprehensively, to the benefit of humanity.
