
Software assessing US defendants is inaccurate

19 January 2018


Be careful about outsourcing too much decision making to a computer


A programme that has been used to assess whether more than a million US criminal defendants will reoffend, a judgement with a significant impact on their lives, is inaccurate, according to experts.

According to the Guardian, the Correctional Offender Management Profiling for Alternative Sanctions software (Compas) is used to weigh up whether defendants awaiting trial or sentencing are at too much risk of reoffending to be released on bail.

However, it appears that the software is no more accurate at predicting the risk of reoffending than people with no criminal justice experience.

Compas was developed in 1998 and has since been used to assess more than a million defendants. But a new paper has cast doubt on whether the software’s predictions are sufficiently accurate.

Hany Farid, a co-author of the paper and professor of computer science at Dartmouth College in New Hampshire, said: “The cost of being wrong is very high and at this point, there’s a serious question over whether it should have any part in these decisions.”

Farid, with colleague Julia Dressel, compared the ability of the software – which combines 137 measures for each person – against that of untrained workers, contracted through Amazon’s Mechanical Turk online crowd-sourcing marketplace.

They then used a database of more than 7,000 pre-trial defendants from Broward County, Florida, which included individual demographic information, age, sex, criminal history and arrest record in the two-year period following the Compas scoring.

The online workers were given short descriptions that included a defendant’s sex, age, and previous criminal history, and were asked whether they thought the defendant would reoffend. Using far less information than Compas (seven variables versus 137), the pooled human judgements were accurate in 67 percent of cases, compared to the 65 percent accuracy of Compas.
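As a rough illustration of this comparison, the sketch below computes overall accuracy for two sets of binary predictions against observed two-year recidivism outcomes. The data file and column names are hypothetical stand-ins, not the paper’s actual pipeline.

```python
import pandas as pd

# Hypothetical dataset: one row per defendant, with the observed two-year
# outcome (1 = reoffended) plus binary predictions from Compas and from
# the pooled crowd-worker responses. Column names are illustrative only.
df = pd.read_csv("broward_defendants.csv")

def accuracy(predicted, observed):
    """Fraction of defendants whose predicted label matches the outcome."""
    return (predicted == observed).mean()

print("Compas accuracy:", accuracy(df["compas_high_risk"], df["recidivated"]))
print("Pooled human accuracy:", accuracy(df["crowd_majority_vote"], df["recidivated"]))
```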

In a second analysis, the paper found that Compas’s accuracy at predicting recidivism could also be matched using a simple calculation involving only an offender’s age and the number of prior convictions.

Farid said: “When you boil down what the software is actually doing, it comes down to two things: your age and number of prior convictions. If you are young and have a lot of prior convictions, you are high risk.”
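The article reports only that the simple calculation uses an offender’s age and number of prior convictions; one common way to realise such a two-feature predictor is a logistic regression, sketched below under that assumption, reusing the hypothetical dataset from the previous example.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("broward_defendants.csv")  # hypothetical file, as above

# Two features only: age and number of prior convictions.
X = df[["age", "prior_convictions"]]
y = df["recidivated"]  # 1 if rearrested within two years of scoring

# A plain logistic regression stands in for the paper's simple classifier;
# cross-validated accuracy gives an estimate comparable to Compas's ~65%.
model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print("Mean cross-validated accuracy:", scores.mean())
```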

He said that these closely guarded proprietary algorithms are not impressive.

“It doesn’t mean we shouldn’t use it, but judges and courts and prosecutors should understand what is behind this.”

Seena Fazel, a professor of forensic psychiatry at the University of Oxford, said that the inner workings of such risk assessment tools ought to be made public so that they can be scrutinised.

“I don’t think you can say these algorithms have no value. There’s lots of other evidence suggesting they are useful.”

The paper also highlights the potential for racial asymmetries in the outputs of such software that can be difficult to avoid – even if the software itself is unbiased.

The analysis showed that while the accuracy of the software was the same for black and white defendants, the so-called false positive rate (when someone who does not go on to offend is classified as high risk) was higher for black than for white defendants.
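The false positive rate mentioned here can be computed per group from the same outcome labels: among defendants who did not go on to reoffend, the fraction classified as high risk. A minimal sketch, again with hypothetical column names:

```python
import pandas as pd

df = pd.read_csv("broward_defendants.csv")  # hypothetical file, as above

def false_positive_rate(group):
    """Among defendants who did not reoffend, the share labelled high risk."""
    non_reoffenders = group[group["recidivated"] == 0]
    return non_reoffenders["compas_high_risk"].mean()

# Equal overall accuracy across groups does not imply equal error types:
# the false positive rate can differ by race even when accuracy matches.
for race, group in df.groupby("race"):
    print(race, false_positive_rate(group))
```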

Farid said the results also highlight the potential for software to magnify existing biases within the criminal justice system. For instance, if black suspects are more likely to be convicted when arrested for a crime, and if criminal history is a predictor of reoffending, then software could act to reinforce existing racial biases.
