Predictive Policing: Bias In, Bias Out

Speaker: Kristian Lum
Date recorded: Nov 17, 2016
Machine learning algorithms are designed to learn and reproduce patterns in data, but if biased data is used to train these predictive models, the models will reproduce, and in some cases amplify, those same biases.
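
The "bias in, bias out" idea can be illustrated with a small, hypothetical sketch (invented for this description, not the example from the talk): two groups have identical true rates of some event, but the event is recorded twice as often for one group, so a model fit to the records learns the recording bias rather than the truth. The group labels, rates, and recording probabilities below are assumptions for illustration only.

    # Hypothetical sketch of "bias in, bias out": equal true rates,
    # unequal recording rates, and a model that learns the records.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    true_rate = 0.10                      # identical for both groups

    # True events for each group.
    events_a = rng.random(n) < true_rate
    events_b = rng.random(n) < true_rate

    # Biased observation: events in group B are recorded at twice the
    # probability of events in group A (assumed recording rates).
    recorded_a = events_a & (rng.random(n) < 0.25)
    recorded_b = events_b & (rng.random(n) < 0.50)

    # A simple "model": estimate each group's rate from the records.
    print("estimated rate, group A:", recorded_a.mean())   # ~0.025
    print("estimated rate, group B:", recorded_b.mean())   # ~0.050, so B appears twice as risky

Any predictions built on these estimates treat group B as twice as risky, even though the underlying behavior of the two groups is identical.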

Kristian Lum will elaborate on the concept of "bias in, bias out" in machine learning with a simple, non-technical example. She will then demonstrate how applying machine learning to police records can lead to further over-policing of historically over-policed communities. Using a case study from Oakland, CA, she will show how predictive policing not only perpetuates the biases already encoded in police data but, under some circumstances, actually amplifies them.
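
The amplification arises from a feedback loop: patrols are sent wherever the records say crime is highest, and new incidents are only recorded where patrols are present, so an initial skew in the records compounds over time. The simulation below is a minimal, hypothetical sketch of that loop; it is not the speaker's Oakland analysis and not the actual PredPol model, and the neighborhoods, rates, and winner-take-all allocation rule are invented for illustration.

    # Hypothetical sketch of the predictive-policing feedback loop:
    # two neighborhoods with identical true crime rates, but historical
    # records skewed toward neighborhood 0.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rates = [20, 20]                 # identical true daily incidents
    records = np.array([60.0, 40.0])      # biased historical records

    for day in range(100):
        # Send the day's patrols to the neighborhood with the most
        # recorded incidents (a deliberately simple allocation rule).
        target = int(np.argmax(records))
        # Only incidents in the patrolled neighborhood enter the records.
        records[target] += rng.poisson(true_rates[target])

    print("true daily rates:", true_rates)
    print("share of records:", np.round(records / records.sum(), 2))
    # Neighborhood 0 started with 60% of the records and ends with
    # nearly all of them, even though the true rates never differed.

In this toy setup the recorded data diverge further and further from the true, identical crime rates: the model's own output determines where new data are collected, so the original bias is not just preserved but amplified.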