How faulty AI algorithms skew U.S. criminal justice

Jan 22, 2019, 6:47 AM EST
(Source: Tony Webster/flickr)

At the end of 2016, around 2.2 million adults were locked up in U.S. prisons and jails, with another 4.5 million held in other correctional facilities – many of them destined to be scored by contentious risk assessment algorithms.

These AI-powered risk assessment tools have been shouldering part of the burden of judges in courtrooms across America, but they rest on a flawed premise. Fed a defendant’s profile, they attach a recidivism score to the individual, and that number, to a great extent, dictates the person’s fate in the legal pipeline, writes MIT Technology Review.

The algorithms, trained on historical crime data, identify patterns associated with crime and assign scores by treating correlation as causation. This leaves certain groups vulnerable – especially low-income and minority communities – because they fit the classic profile of “would-be criminals.”

These automated tools amplify and perpetuate implicit biases, even generating new datasets plagued with prejudiced statistics that feed the vicious cycle, as highlighted at the Data for Black Lives conference last week.