Durham Police gain a Hart but can’t lose their soul

By Harry Llweyen

Deciding whom to keep in custody and whom to release on bail is often difficult for the police and probation services, and the consequences of getting it wrong are serious. To help make better-informed decisions, Durham Constabulary has introduced an Artificial Intelligence (AI) decision aid.

The Harm Assessment Risk Tool (Hart) was first introduced in 2013 after being trained on over 100,000 custody events.

It classifies suspects into three groups, those at low, medium or high risk of reoffending, and then monitors their actual reoffending rates over time.

It continues to inform bail decisions within Durham Constabulary today, though there has been some criticism of this latest AI member of the force. So how does Hart work? How useful is it? And will it ever replace human decision-making altogether?

The AI treats classifying an offender as low risk who then reoffends as a costlier mistake than classifying someone who does not reoffend as high risk. This shows in the results: the former scenario occurred in only 2% of cases, whereas the latter occurred in 12%.
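As a rough illustration of that asymmetry, the sketch below scores predictions against a hand-made cost table in which missing a genuinely high-risk offender is penalised far more heavily than being over-cautious. The risk labels, costs and toy data are hypothetical and purely illustrative; they are not Hart's.

```python
# Illustrative sketch only: one way asymmetric misclassification costs can be
# expressed. The labels, weights and toy data below are hypothetical, not Hart's.

# cost[(true, predicted)]: calling a genuinely high-risk offender "low"
# (the dangerous error) is weighted far more heavily than the cautious
# mistake of calling a low-risk person "high".
COST = {
    ("high", "low"): 10.0,
    ("high", "medium"): 4.0,
    ("medium", "low"): 3.0,
    ("low", "high"): 1.0,
    ("low", "medium"): 0.5,
    ("medium", "high"): 0.5,
}

def total_cost(y_true, y_pred):
    """Sum the penalty of every misclassification in a set of predictions."""
    return sum(COST.get((t, p), 0.0) for t, p in zip(y_true, y_pred))

# Two strategies over the same four suspects: over-predicting risk is
# penalised far less than missing a genuinely high-risk case.
truth    = ["high", "low", "medium", "low"]
cautious = ["high", "medium", "medium", "high"]   # only cautious errors
lenient  = ["medium", "low", "low", "low"]        # misses the high-risk case

print(total_cost(truth, cautious))  # 1.5
print(total_cost(truth, lenient))   # 7.0
```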

The AI itself carries out statistical analysis using 34 predictors, most of which relate to the offender’s prior criminal history. Some concerns have been raised over the inclusion of postcode and gender as predictors. Frederike Kaltheuner, Policy Officer for Privacy International, told Mashable that a “subset of the population […] have a much higher chance of being misclassified.” Worryingly, arrests in a targeted area would then be used to ‘improve’ the model, which could further increase the weighting of postcode biases.

A similar algorithm used by US police has been criticised as racist because of the inclusion of ethnicity as a predictor. ProPublica published a report in 2016 claiming that the US algorithm forecast excessive negative outcomes for black suspects. The technology firm that created the algorithm denied these allegations, but the room for controversy over such predictors is apparent.

Hart does avoid such issues, since the address and gender predictors are combined with the other 32 in thousands of combinations before a result is obtained. This means that no single predictor has a decisive impact on its own, and ethnicity is excluded entirely to avoid racial bias.
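That "thousands of combinations" description matches an ensemble of decision trees, better known as a random forest. The sketch below shows what that kind of model looks like in outline; the feature names, data and parameters are invented for illustration and are not Hart's actual predictors, training data or code.

```python
# Sketch of a random-forest-style ensemble like the one described above.
# Feature names, data and parameters are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a handful of the 34 predictors.
FEATURES = ["age", "prior_offences", "years_since_last_offence", "postcode_area"]
X = rng.integers(0, 20, size=(500, len(FEATURES)))
y = rng.choice(["low", "medium", "high"], size=500)

# Each tree sees a different random subset of cases and predictors, so the
# final vote blends the predictors in thousands of combinations rather than
# letting any single one dominate.
model = RandomForestClassifier(n_estimators=500, max_features="sqrt", random_state=0)
model.fit(X, y)

# Feature importances make the weighting auditable after the fact, e.g. to
# check how much a postcode proxy is actually contributing.
for name, importance in zip(FEATURES, model.feature_importances_):
    print(f"{name}: {importance:.3f}")

print(model.predict(X[:1]))  # e.g. ['medium']
```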

Sheena Urwin, head of criminal justice at Durham Constabulary, who wrote a paper in 2016 assessing the learning algorithm, explained to the BBC that the AI was only a tool to "support officers' decision making". In fact, the machine's process for arriving at its decision is recorded in full for later examination by officers.
The introduction of Hart points towards an increasing reliance on AI by UK police, and the system is currently being adjusted to remove one of the postcode predictors.

Despite some inevitable criticism, this trial has been widely regarded as successful, and Durham could soon be seen as the birthplace of AI systems for assisting officers.

This does not mean that AI will become the sole decision maker for the police. "It's important to stress that accuracy and fairness are not necessarily the same thing," Ms Kaltheuner said. This highlights the primary flaw inherent in these AI systems: the decisions they make are only as effective as the information they use.

Whilst Hart may become increasingly accurate in its predictions, that does not necessarily mean it can ever replace human judgement, which draws on local or recent information not accessible to the machine.
In fact, perhaps it never will.

Photograph: Pexels
