The First Rule of Data: Do No Harm


When it comes to medicine, the first rule of ethics is: do no harm. Mathematician and data scientist Cathy O'Neil, speaking during her recent keynote at the Strata Data Conference in New York, said the same first rule should apply when building algorithms.

"Algorithms and AI are not objective," she said. "They're opinions embedded in code." O'Neil's talk, which took the title from her book, Weapons of Math Destruction, was about how even with good intentions, data scientists can create toxic algorithms that end up doing harm instead of good. That can mean failure in trying to solve a problem, like keeping good teachers or hiring fairly.

When it comes to how we create algorithms, said O'Neil, "we haven't established standards on what is good enough."

One way to address the issue, O'Neil said, is to examine the laws and regulations that govern each industry.

"It's one thing to build an algorithm that you find useful, like how much to exercise, but if you are building an algorithm that's used widely for important decisions that are already regulated, then you have to ask the question: What is the industry set standard for this and how do I make sure I'm meeting that standard?"

(Image: Kheng Guan Toh/Shutterstock)


If the issue is ignored, O'Neil said, we can expect to see the inequality gap widen and our democracy dissolve.

[Read the rest of this post at InformationWeek.com]

Emily Johnson, Associate Editor, UBM America's Content Marketing team

Emily Johnson is an Associate Editor on UBM America's Content Marketing team. Prior to this role, Emily spent four and a half years in content and marketing roles supporting UBM America's IT events portfolio. Emily earned her B.A. in English from the University of California, Berkeley. Follow her on Twitter @gold_em.



Re: Cultural Bias At Work
  • 11/6/2017 7:26:20 AM

It is probably true that where there is hidden information that only certain parties have, there is not going to be a fair or level playing field. Many years ago I read a book that advised that to get ahead, you should make sure you know something no one else knows, so you can take advantage of that information. Do no harm? A good philosophy to work toward, but not so easy in a practical sense, bearing in mind that there are still those who don't believe in it.

Re: Cultural Bias At Work
  • 10/30/2017 11:39:38 PM

Broadway, we're in full agreement. The real world is what matters, not idealistic economic concepts. The free market doesn't really exist after all.

Re: Cultural Bias At Work
  • 10/30/2017 11:01:48 PM

rbaz, to be honest, I don't believe the free market is ever truly free. It only works when consumers have multiple choices and transparency of prices and other qualifications. That rarely happens in the real world, whether because of monopolistic practices, government favoritism, collusion, or corruption. The healthcare business is another great example. Good luck to consumers there too!

Re: Cultural Bias At Work
  • 10/30/2017 5:21:33 PM

Broadway, you make a good point on free market choice, and you emphasize the need for options. Well, I live in a town that has one cable service: one option, one choice. The free market isn't exactly free.

Re: Cultural Bias At Work
  • 10/29/2017 9:07:30 PM

PredictableChaos, true point. If someone has no option but to submit to the algorithm, then they are as screwed as that felon about to be sentenced (or that wrongly convicted prisoner about to be sentenced). But as you illustrate yourself, the position is not so different from what people have found themselves in for hundreds, even thousands, of years. Heck, ever live somewhere where there's only one cable company?

Re: Cultural Bias At Work
  • 10/28/2017 10:38:14 AM

The free market is good for many things, but it may not help if you have no visibility into why the algorithm is providing an undesirable outcome for you.

And, like in the case of the prisoner receiving a sentence, sometimes you may have no alternatives.

Re: Cultural Bias At Work
  • 10/28/2017 12:02:21 AM

Can't debate? What about the free market system? We can vote with our feet. Don't like the algorithm on a social media platform? Get off it. Don't like the algorithm that decides you didn't get a raise? Get a new employer.

Re: Cultural Bias At Work
  • 10/27/2017 10:14:33 PM

Kq4 writes that

... defining just what is "good" may be trickier than most might imagine. Doing no harm might not be as easy as we'd like. But of course, if there are existing regulations, we will have to follow those, under the assumption that someone else higher up has made the decision for us as to what is "good."

In her article, Emily Johnson writes:

At the Strata Big Data Conference in New York, one of the major themes was the responsibility that data scientists have to do their best to prevent the biases and prejudices that exist in society from creeping into data and the way algorithms are built.
When it comes to medicine, the first rule of ethics is, do no harm. When mathematician and data scientist, Cathy O'Neil, spoke during her keynote at Strata Big Data Conference in New York recently, she said the same first rule should apply when building algorithms.

"Algorithms and AI are not objective," she said. "They're opinions embedded in code." O'Neil's talk, which took the title from her book, Weapons of Math Destruction, was about how even with good intentions, data scientists can create toxic algorithms that end up doing harm instead of good.

"Do no harm"? There is a huge, and I mean enormous, chunk of the AI development industry that is striving to develop killer robots and other AI-based weapons of mass destruction. DARPA, anyone? 

These weapons have gotta be packed with algorithms laser-focused on doing harm. And incorporating biases? How about the bias to seek and destroy whatever the developer perceives as undesirable, or worthy of destruction?

Surely this R&D is producing a mother lode of frighteningly harmful algorithms, based on a mother lode of biases.

 

Re: Cultural Bias At Work
  • 10/26/2017 8:56:01 AM

And defining just what is "good" may be trickier than most might imagine. Doing no harm might not be as easy as we'd like. But of course, if there are existing regulations, we will have to follow those, under the assumption that someone else higher up has made the decision for us as to what is "good."

Re: Cultural Bias At Work
  • 10/26/2017 8:55:57 AM

I would have a real problem with that algorithm; the US prides itself on valuing the individual. Yes, an individual may have committed a crime, but unless they were part of a mob committing a crime, I feel that only their own actions should be considered during sentencing. Two people standing trial for the same crime can have committed it in very different ways. For instance, writing a bad check for more than $300 can get you a grand theft charge; compare that to the theft of retirement accounts at Enron and the same charges leveled against players in that scheme. I wasn't aware that we had courts handing down sentences this way, and to me it seems to go against the very spirit of our justice system.
