
US supremes worried about AI

03 January 2024


Might get in the way of judges being bribed

The US Supreme Court has spoken out about the use of artificial intelligence (AI) in the legal system, acknowledging its potential but warning against "making the law less human".

The 2023 Year-End Report on the Federal Judiciary gives a 13-page summary of the past year in the US legal system. This year Chief Justice John G. Roberts chose to focus on AI, comparing it to past tech breakthroughs like the personal computer.

Roberts said that AI could help those who cannot afford a lawyer by powering new, easy-to-use tools that answer simple questions, such as where to find templates and court forms, how to fill them in, and where to submit them to the court.

But Roberts also warned that AI comes with risks, especially when misused. He said that much of the decision-making in the legal system needs human judgement, discretion, and an understanding of nuance. Handing such power to a computer program is likely to lead to poor and unfair results, especially as AI often carries hidden bias.

"In criminal cases, the use of AI to judge flight risk, repeat offending, and other mostly discretionary decisions that involve prediction has raised worries about fair treatment, trustworthiness, and possible bias," wrote Roberts.

"At the moment, studies show a lasting public view of a 'human-AI fairness gap', meaning that people think that human rulings, even with their faults, are fairer than whatever the machine comes up with."

Roberts said that many AI applications help the legal system resolve cases in a "fair, quick, and cheap" way. But he warned that AI isn't right for every situation, and that "courts will have to think about its proper uses in cases" as the technology evolves.

"I reckon that human judges will be around for a while. But I also reckon that legal work - especially at the trial level - will be greatly changed by AI. Those changes will affect not only how judges do their job, but also how they understand the role that AI plays in the cases they deal with," Roberts said.

Sadly, legal professionals' understanding of AI already lags behind its eager adoption in some cases, and AI has had a dodgy impact on the US legal system so far.

Last year two lawyers were fined for citing made-up cases in a legal document after using OpenAI's ChatGPT. The AI chatbot had invented six cases outright, which the lawyers then tried to use in their arguments. One of them said he had been "unaware of the possibility that its content could be false."

Even though this case was widely covered, not all lawyers seem to have learned their lesson about relying too much on AI. Another US lawyer was recently caught out for citing fake cases too, having failed to check them after his client generated them using Google Bard.

That client was banned former Trump lawyer Michael Cohen, who said last week that he thought Bard was a "super-charged search engine" and didn't know it could make up results.
