The city of Chicago is using an algorithm to predict which individuals are likely to be the victim or perpetrator of a crime. It sounds like the premise behind the TV show Person of Interest. But it's not TV, it's real. Chicago, a city that has seen its crime rate surge, is using this algorithm in an attempt to help get crime under control.
That's a good thing if it can help reduce crime. But there are ethical concerns about using data in this way. Do we want to predict the likelihood of someone committing a crime, like in the film Minority Report, and incarcerate them before the crime even happens?
What about other ethical concerns, like coding human biases into the algorithm? Such ethical questions will be raised more frequently as we humans rely on AI to help us make decisions and predict outcomes. The ethics and morality of AI will be the final topic in our current AllAnalytics Academy.
Chicago's Crime Algorithm
Chicago's algorithm has been used to create something called the Strategic Subject List, or SSL, according to a report about the algorithm published by the New York Times this week. The algorithm is applied to people who have been arrested, focusing resources on those who pose the highest risk. Risk scores range from 0 to 500, and nearly 400,000 people have been scored.
Chicago is keeping the factors that go into its algorithm a secret, although the report notes that Chicago's algorithm does not use variables that could introduce bias, such as gender, race, or geography. It's truly a black box that spits out scores.
However, the New York Times reporters who wrote the story have taken their work a step further by using the publicly available data that Chicago has released and reverse-engineering the impact of each characteristic on the final risk scores. The writers say they used a linear regression model. Characteristics factored into the scoring include number of assault and battery incidents as a victim, number of shooting incidents as a victim, number of arrests for violent offenses, age per decade, gang affiliation, and more.
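The reporters' approach can be sketched in a few lines of code. This is a minimal, hypothetical illustration of reverse-engineering weights with a linear regression: the feature names, the "true" weights, and the synthetic records below are all invented for the demo and are not Chicago's actual data or factors.

```python
# Hypothetical sketch: recovering the weight each characteristic carries
# in a published risk score by fitting a linear regression.
# All feature names, weights, and records here are made up for illustration.
import numpy as np

# Synthetic "released data": one row per scored subject.
# Columns: [shooting incidents as victim, violent-offense arrests,
#           age per decade, gang affiliation flag]
X = np.array([
    [0, 1, 2, 0],
    [2, 0, 1, 1],
    [1, 3, 3, 0],
    [0, 0, 4, 0],
    [3, 2, 1, 1],
    [1, 1, 2, 1],
], dtype=float)

# Hidden "true" weights, used only to fabricate scores for this demo.
true_w = np.array([60.0, 40.0, -30.0, 5.0])
scores = X @ true_w + 250.0  # 250.0 plays the role of an intercept

# Ordinary least squares: append an intercept column and solve.
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(A, scores, rcond=None)

# On this noise-free synthetic data, the fitted weights match the
# hidden ones, showing how per-characteristic impact can be recovered.
print(np.round(w, 1))  # four feature weights followed by the intercept
```

With real, noisy data the fit would be approximate rather than exact, and the recovered coefficients would only estimate each characteristic's impact on the final score.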
According to the report, the most significant characteristic for SSL risk scores is the age of the potential victim or offender. The report notes that no one older than 30 falls within the highest risk category. Meanwhile, arrests for domestic violence, weapons, or drugs were found to be less predictive of future crime involvement, and gang affiliation had barely any impact on the risk score.
The article is a really interesting look into how the scores work. It's unclear whether the algorithm and its use are making a dent in Chicago's crime problem.
But it does raise interesting issues about how we as a society use data. What is ethical? What is moral? How can we avoid bias?
The ethical and moral questions around artificial intelligence are the subject of our final AllAnalytics Academy session on Thursday, June 15 at 2 pm ET/11 am PT. We're excited to welcome Rumman Chowdhury, a senior manager at Accenture, to talk about the place where human ethics meet AI and machine learning, and how organizations should tackle these issues. We hope you can join us on Thursday, and you can register at any time at this link.