Hello, friends,
I first learned the term “risk assessments” in 2014 when I read a short paper called “Data & Civil Rights: A Criminal Justice Primer,” written by researchers at Data & Society. I was shocked to learn that software was being used throughout the criminal justice system to predict whether defendants were likely to commit future crimes. It sounded like science fiction.
I didn’t know much about criminal justice at the time, but as a longtime technology reporter, I knew that algorithms for predicting human behavior didn’t seem ready for prime time. After all, Google’s ad targeting algorithm thought I was a man, and most of the ads that followed me around the web were for things I had already bought.
So I decided I should test a criminal justice risk assessment algorithm to see if it was accurate. Two years and a lot of hard work later, my team at ProPublica published “Machine Bias,” an investigation proving that a popular criminal risk assessment tool was biased against Black defendants, possibly leading them to be unfairly kept longer in pretrial detention.
Specifically, what we found, and detailed in an extensive methodology, was that the risk scores were not particularly accurate (60 percent) at predicting future arrests, and that when they were wrong, they were twice as likely to incorrectly predict that Black defendants would be arrested in the future compared with White defendants.
In other words, the algorithm overestimated the likelihood that Black defendants would later be arrested and underestimated the likelihood that White defendants would later be arrested.
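For readers who want to see what that kind of disparity looks like in concrete terms, here is a minimal Python sketch. The records and numbers are invented for illustration, not ProPublica’s data, and the function name is mine; it only shows how a false positive rate, the share of people who were not rearrested but were still labeled high risk, can be computed separately for each group and come out far higher for one group than the other.

```python
# Illustrative sketch (not ProPublica's actual analysis or data).
# Each record is (group, predicted_high_risk, was_rearrested), with made-up values.
records = [
    ("Black", True,  False), ("Black", True,  True),  ("Black", True,  False),
    ("Black", False, True),  ("Black", False, False), ("Black", True,  False),
    ("White", True,  False), ("White", False, True),  ("White", False, False),
    ("White", False, True),  ("White", True,  True),  ("White", False, False),
]

def false_positive_rate(group):
    """Share of people who were NOT rearrested but were still labeled high risk."""
    not_rearrested = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in not_rearrested if r[1]]
    return len(flagged) / len(not_rearrested)

for group in ("Black", "White"):
    print(group, round(false_positive_rate(group), 2))
```

With these toy numbers, the rate for the first group is roughly double the rate for the second, which is the shape of the disparity the investigation reported.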
But despite those well-known flaws, risk assessment algorithms are still popular in the criminal justice system, where judges use them to help decide everything from whether to grant pretrial release to the length of prison sentences.
Last year, The Markup investigative reporter Lauren Kirchner and Matthew Goldstein of The New York Times investigated the tenant screening algorithms that landlords use to predict which applicants are likely to be good tenants. They found that the algorithms use sloppy matching techniques that often generate incorrect reports, falsely labeling people as having criminal or eviction records. The problem is particularly acute among minority groups, which tend to have fewer unique last names. For example, more than 12 million Latinos nationwide share just 26 surnames, according to the U.S. Census Bureau.
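To make the matching problem concrete, here is a hypothetical Python sketch, not any vendor’s actual code, of how a report built by matching on name alone attaches other people’s court records to an applicant who happens to share a common surname, and how requiring a second identifier such as date of birth avoids the false hit. The record format and the names are invented.

```python
# Invented court records for illustration; real screening products use
# proprietary data and matching rules we cannot see.
court_records = [
    {"name": "Maria Garcia", "dob": "1962-03-14", "type": "eviction"},
    {"name": "Maria Garcia", "dob": "1988-11-02", "type": "criminal"},
]

def loose_match(applicant, records):
    """Match on name alone: every Maria Garcia 'gets' both records."""
    return [r for r in records if r["name"] == applicant["name"]]

def stricter_match(applicant, records):
    """Also require the date of birth to line up before attaching a record."""
    return [r for r in records
            if r["name"] == applicant["name"] and r["dob"] == applicant["dob"]]

applicant = {"name": "Maria Garcia", "dob": "1995-07-21"}
print(loose_match(applicant, court_records))     # two records, both someone else's
print(stricter_match(applicant, court_records))  # no records, the correct result
```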
The Markup reporter Todd Feathers obtained documents from four large universities that showed they were using race as a predictor, and in some cases a “high impact predictor,” in their risk assessment algorithms. In criminal justice risk algorithms, race has not been included as an input variable since the 1960s.
At the University of Massachusetts Amherst, the University of Wisconsin–Milwaukee, the University of Houston, and Texas A&M University, the software predicted that Black students were “high risk” at as much as quadruple the rate of their White peers.