
How We Decided to Test Racial Bias in Algorithms

In 2014, former Attorney General Eric Holder wrote a letter to the U.S. Sentencing Commission asking it to study racial bias in risk scores. It never did the study. So ProPublica did. The team of reporters, led by Julia Angwin, found that an algorithm being used across the country to predict future criminals is biased against black defendants. And what’s more, the algorithm is not very good: only 20 percent of the people predicted to commit violent crimes have actually been found to have done so. I spoke with Julia about the investigation and how her team is uncovering machine bias.

Sade Jones, who had never been arrested before, was rated a medium risk. (Josh Ritchie for ProPublica)

The investigation didn’t start with criminal justice.

Angwin: Honestly, my first foray into this was all over the place. I was looking at whether you might get different airline prices. You know how whenever you're looking online for an airline price you just know you're kind of being screwed, but you can't figure out how. It seems like you check the price again an hour later and it's gone up. I did a lot of testing of those kinds of algorithms before I ended up in the criminal justice system.

The public defenders didn’t know about the risk scores.

Angwin: When I sat down with the public defenders who represent clients in the courtroom, they actually said, "Well, we've never heard of these risk scores; we didn't know they were being used in our courtroom." I was like, "What?" I was shocked, and I talked to them about it, and I promised to tell them everything I knew about it.

The team wasn’t initially going to calculate violent recidivism scores, but when they did…

Angwin: I made Jeff recalculate it four times. I was like, "No, it can’t only be right 20 percent of the time. That doesn't make sense." Then I ran the numbers myself. And I was like "Wait, it says 20 percent," and then we ran it four more times.
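The check Angwin describes boils down to a single ratio: of the defendants the algorithm flagged as likely violent recidivists, what share actually went on to commit a violent crime? Below is a minimal sketch of that calculation in Python, assuming a hypothetical dataset with a `violent_score` column and a `committed_violent_crime` outcome flag; the column names and the high-risk cutoff are illustrative assumptions, not ProPublica's actual schema or methodology.

```python
import pandas as pd

def violent_prediction_precision(df: pd.DataFrame, cutoff: int = 8) -> float:
    """Share of defendants scored at or above the high-risk cutoff for
    violence who actually committed a violent crime in the follow-up window.
    Column names and cutoff are hypothetical, for illustration only."""
    predicted_violent = df[df["violent_score"] >= cutoff]
    if predicted_violent.empty:
        return float("nan")
    return predicted_violent["committed_violent_crime"].mean()

# Example usage: re-run the same calculation on the same data to confirm a
# surprising result, as the team did.
# df = pd.read_csv("violent_recidivism.csv")
# print(violent_prediction_precision(df))  # e.g. 0.20
```

A value of 0.20 would mean only one in five high-risk predictions panned out, which is the figure the team kept re-verifying.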

Listen to this podcast on iTunes, SoundCloud or Stitcher. For more, read Julia Angwin's Machine Bias and What Algorithmic Injustice Looks Like in Real Life.
