Algorithms and Ethics: Moral Considerations in AI


(Image: Prath/Shutterstock)


The city of Chicago is using an algorithm to predict which individuals are likely to be the victim or perpetrator of a crime. It sounds like the premise of the TV show Person of Interest, but it's not TV; it's real. Chicago, a city whose crime rate has surged, is using the algorithm in an attempt to get crime under control.

That's a good thing if it can help reduce crime. But there are ethical concerns about using data in this way. Do we want to predict the likelihood of someone committing a crime, like in the film Minority Report, and incarcerate them before the crime even happens?

What about other ethical concerns, like coding human biases into the algorithm? Such ethical questions will be raised more frequently as we humans rely on AI to help us make decisions and predict outcomes. The ethics and morality of AI will be the final topic in our current AllAnalytics Academy.

Chicago's Crime Algorithm

Chicago's algorithm has been used to create something called the Strategic Subject List or SSL, according to a report about the algorithm published by the New York Times this week. The algorithm is applied to arrested subjects to prioritize resources on those who pose the highest risk. Risk scores range from 0 to 500, and nearly 400,000 people have been scored.

Chicago is keeping the factors that go into its algorithm a secret, although the report notes that the algorithm does not use variables that could introduce bias, such as gender, race, or geography. It's truly a black box that spits out scores.

However, the New York Times reporters who wrote the story have taken their work a step further by using the publicly available data that Chicago has released and reverse-engineering the impact of each characteristic on the final risk scores. The writers say they used a linear regression model. Characteristics factored into the scoring include number of assault and battery incidents as a victim, number of shooting incidents as a victim, number of arrests for violent offenses, age per decade, gang affiliation, and more.
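The reverse-engineering approach the reporters describe can be sketched with a small example: fit a linear regression to (characteristics, score) pairs and read each characteristic's impact off its coefficient. The feature names, weights, and data below are invented for illustration; they are not Chicago's actual model or the NYT's analysis.

```python
# Hypothetical sketch of reverse-engineering per-characteristic weights
# with ordinary least squares. All data and weights here are made up.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols(X, y):
    """Ordinary least squares: w solves (X^T X) w = X^T y."""
    n = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    return solve(XtX, Xty)

# Invented features: [shooting-victim incidents, violent arrests, age in decades]
true_w = [80.0, 45.0, -30.0]  # hypothetical "hidden" weights inside the black box
X = [[2, 1, 2], [0, 3, 4], [1, 0, 1], [3, 2, 2], [0, 1, 5], [2, 0, 3]]
y = [sum(wi * xi for wi, xi in zip(true_w, row)) for row in X]

w = ols(X, y)
print([round(v, 1) for v in w])  # recovers the hidden weights
```

With enough (characteristics, score) pairs released publicly, the fitted coefficients approximate how much each characteristic contributes to the final score, which is the gist of the reporters' method.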

According to the report, the most significant characteristic for SSL risk scores is the age of the potential victim or offender. The report notes that no one older than 30 falls within the highest risk category. Yet, arrests for domestic violence, weapons, or drugs were found to be less predictive for future crime involvement, and gang affiliation had barely any impact on the risk score.

The article is a fascinating look at how the scores work. It's unclear whether the algorithm and its use are making a dent in Chicago's crime problem.

But it does raise interesting issues about how we as a society use data. What is ethical? What is moral? How can we avoid bias?

The ethical and moral questions around artificial intelligence are the subject of our final AllAnalytics Academy session on Thursday, June 15 at 2 pm ET/11 am PT. We're excited to welcome Rumman Chowdhury, a senior manager at Accenture, to talk about the place where human ethics meet AI and machine learning, and how organizations should tackle these issues. We hope you can join us on Thursday, and you can register at any time at this link.

Jessica Davis, Senior Editor, Enterprise Apps, Informationweek

Jessica Davis has spent a career covering the intersection of business and technology at titles including IDG's Infoworld, Ziff Davis Enterprise's eWeek and Channel Insider, and Penton Technology's MSPmentor. She's passionate about the practical use of business intelligence, predictive analytics, and big data for smarter business and a better world. In her spare time she enjoys playing Minecraft and other video games with her sons. She's also a student and performer of improvisational comedy. Follow her on Twitter: @jessicadavis.

Big Data for Good: Real World Use Cases

Looking for some real-world case studies that embed analytics in organizational processes for better outcomes? We've got a few for you.

Algorithms and Ethics: Moral Considerations in AI

Can algorithms help predict and therefore prevent crime? Is it ethical to do so? Chicago is using an algorithm to score individuals on how likely they are to be involved in a crime as a victim or perpetrator.


Re: catch-22
  • 6/22/2017 11:49:01 AM

Such issues are going to be fraught with controversy for a long time, I'm guessing. Calling into play all the moral issues of right and wrong, and the social issues of whether we should call out groups that the data may show to be victims or perpetrators, will surely lead to some angry debates between proponents and critics of such programs.

Re: catch-22
  • 6/16/2017 4:53:19 PM


Stacy writes

Eliminating attributes for social reasons without at least vetting them introduces social bias into a mathematical algorithm that should value accuracy over political correctness, especially when the issue is crime.  Maybe such attributes are important, maybe not.  Let a mathematical selection algorithm make that decision. 

The main issue or problem in any kind of predictive model like this includes the selection of attributes, the data inputs, the weights applied in the components of the predictive algorithms, etc. In other words, humans design the mathematical algorithms, and their biases and assumptions, influenced by the context of societal experiences and personal reactions, can affect how the predictive model operates.

As I've described in previous discussions, I have personal experience of this in predictive modeling for urban and transportation planning, where assumptions and biases of the model designers did influence the type of models deployed and the "predictions" rendered by the models (which were interpreted as recommendations by decisionmakers). 

 

No Easy Answers When Both Views Have Merit
  • 6/16/2017 2:28:43 PM

From 6/20/14: www.allanalytics.com/author.asp?section_id=1828&doc_id=273803&tag_id=

 

catch-22
  • 6/14/2017 8:13:15 PM

The issue is also with the bias of ethics.  Eliminating attributes for social reasons without at least vetting them introduces social bias into a mathematical algorithm that should value accuracy over political correctness, especially when the issue is crime.  Maybe such attributes are important, maybe not.  Let a mathematical selection algorithm make that decision. 

Re: predicting crime
  • 6/14/2017 12:23:31 PM

Very interesting. Technology predicting crimes is important for adequate staffing and the like, but the ultimate goal would be to use the data to prevent crimes. It's the situation we face with monitoring possible terrorists: we know they might commit a crime, but we are not always able to stop them. How do we get to that point? The most recent London attacks highlighted that issue, where a suspected terrorist was able to rent a truck; the integration from prediction to prevention doesn't exist yet.

predicting crime
  • 6/14/2017 11:11:12 AM

Even before that show, I believe that was the theme of the 2002 film Minority Report. That cast such predictions in a very negative light, I believe (never saw it). In real life, even when people know of those that give indications of being dangerous, they don't seem to do anything about them. Certainly that was the case for the recent terror attack in London.

But what do you base your prediction on? People are often very off about that, as those who came up with predictions of recidivism for prisoners facing parole have discovered. I wrote about that here a few years back in "Predictive Analytics Head to Jail."

 

 
