Algorithms' Dark Side: Embedding Bias into Code

(Image: Mmaxer:Shutterstock)

Does the shift toward more data and algorithmic direction in our business decisions assure us that organizations and businesses are operating to everyone's advantage? A number of issues have emerged that critics argue need to be addressed going forward.

Numbers don't lie, or do they? Perhaps the fact that they are perceived to be absolutely objective is what makes us accept the determinations of algorithms without questioning what factors could have shaped the outcome.

That's the argument Cathy O'Neil makes in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. While we tend to think of big data as a counterforce to biased, unjust decisions, O'Neil finds that in practice it can reinforce biases even while claiming unassailable objectivity.

"The models being used today are opaque, unregulated, and incontestable, even when they're wrong," O'Neil writes. The math destruction posed by algorithms is the result of models that reinforce barriers, keeping particular demographic populations disadvantaged by identifying them as less worthy of credit, education, job opportunities, parole, and so on.

The organizations and businesses that make those decisions can now point to the authority of the algorithm and so shut down any discussion that questions the decision. In that way, big data can be misused to increase inequality. Because algorithms are not created in a vacuum but are born of minds operating in a human context that already has some set assumptions, they can actually extend the reach of human biases rather than counteract them.
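To make that mechanism concrete, here is a minimal, hypothetical sketch (the data and the "model" are invented for illustration) of how a system trained on biased historical decisions can reproduce the bias through a proxy feature such as a zip code, without ever seeing a protected attribute:

```python
# Toy sketch with invented data: historical loan records as
# (zip_code, income_in_thousands, approved). Past human reviewers
# denied most applicants from zip "A", even at incomes approved elsewhere.
history = [
    ("A", 50, False), ("A", 60, False), ("A", 70, False), ("A", 80, True),
    ("B", 50, True),  ("B", 60, True),  ("B", 40, False), ("B", 70, True),
]

def train(records):
    """Learn the simplest possible rule: the historical approval rate per zip."""
    counts = {}
    for zip_code, _, approved in records:
        total, yes = counts.get(zip_code, (0, 0))
        counts[zip_code] = (total + 1, yes + approved)
    return {z: yes / total for z, (total, yes) in counts.items()}

def predict(model, zip_code):
    """Approve only if the historical approval rate in that zip exceeds 50%."""
    return model.get(zip_code, 0.0) > 0.5

model = train(history)
# Two applicants who are otherwise identical get different answers,
# purely because the training data encoded the old human bias.
print(predict(model, "A"))  # False: zip "A" inherits the past denials
print(predict(model, "B"))  # True
```

Nothing in this toy code mentions race, gender, or any other protected category; the bias rides in on a correlated feature, which is exactly why auditing the training data matters.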

"Even algorithms have parents, and those parents are computer programmers, with their values and assumptions," Alberto Ibargüen, president and CEO of the John S. and James L. Knight Foundation, wrote in a blog post. ". . . As computers learn and adapt from new data, those initial algorithms can shape what information we see, how much money we can borrow, what health care we receive, and more."

The foundation's VP of Technology Innovation, John Bracken, told me about the foundation's partnership with the MIT Media Lab and the Berkman Klein Center for Internet & Society, as well as its work with other individuals and organizations, to create a $27 million fund for research in this area. The idea is to bridge "people across fields and nations" and pull together a range of experiences and perspectives on the social impact of the development of artificial intelligence. AI will affect every aspect of human life, so it is important to think through the policies that will govern how these tools are built and implemented.

The fund, which is open to applicants beyond the founding university partners, may be used to explore a number of identified issues, including these:

  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI's potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?

Independently of the organizations involved in the fund, the Association for Computing Machinery US Public Policy Council (USACM) has been doing its own research into the issues that arise in a world in which crucial decisions may be determined by algorithms. It recently released its guidance for businesses in its Principles for Algorithmic Transparency and Accountability.

The seven principles listed include awareness of "the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society;" the possibility of an audit of the data, algorithms, and models that were involved in a decision that may have been harmful; the possibility for "redress for individuals and groups that are adversely affected by algorithmically informed decisions;" as well as the obligation for the organization to provide explanations of their processes and accountability for their validity.

As we incorporate ever more algorithms into the daily functions of business and other organizations, we will have to be mindful of the potential impact of decisions that may not be as objective as we assumed them to be. Better data doesn't automatically translate into better results, and we have to be aware of potential problems if we are to address them.

Ariella Brown

Ariella Brown is a social media consultant, editor, and freelance writer who frequently writes about the application of technology to business. She holds a PhD in English from the City University of New York. Her Twitter handle is @AriellaBrown.



Re: Adding it, too
  • 2/22/2017 4:38:33 PM

@SethBreedlove Oh, yes, I've been there. Even when you point out that said "policy" seems to apply to some people but not to others, they will not admit that people are the ones making the decision about application.

Re: Adding it, too
  • 2/22/2017 4:37:14 PM

@Jessica @Zimana I hope you enjoy reading it. She also has a few articles available online.

Re: Adding it, too
  • 2/22/2017 4:13:05 PM

Re: "Even algorithms have parents, and those parents are computer programmers, with their values and assumptions." Very important point: algorithms appear to be faceless, but they are products of other people's motivations.

It kind of reminds me of when I speak to a company and I hear someone say "That's our policy," to which my response is "Your policy is whatever you say it is." The same thing is true for algorithms. The results are whatever they were programmed to be.

Bias, Profiling, Predictive Analytics
  • 2/22/2017 3:12:02 PM

Great topic for discussion and thought.  Some biases are intended.  Credit card companies intend bias against deadbeats.  Police intend bias against criminals.

Unintended biases are many. Insurance companies red-line because actuaries make decisions based on false or irrelevant data. Cherry-picking is the opposite of bad data: most of the relevant data is excluded so decisions are steered toward the preferred winner. Both sides of the global warming debate love to cherry-pick.

We outlaw generalizing that all Black people will not qualify for a loan just because a higher percentage have incomes below the threshold to qualify.  Racism overlaps profiling.

Predictive analytics is profiling. When is it ethical to profile? When is it immoral? When a fleeing murderer is reported to be Pacific Asian, about 5 foot 5, slight build, southbound on 5th Ave., it is reasonable to stop for questioning people who approximate that description in that area. But it is not reasonable to stop those who are 6 feet tall, fat, or northbound. To haul a person into the station for more questioning requires a larger burden of proof. To arrest the person requires an even larger burden of probability. And to convict the person requires the highest level of probability.

What is true of profiling a murderer is also true of employment, loans, and all financial transactions.

What is interesting is the unintended consequences of the government imposing anti-discrimination (anti-profiling) laws. The more the laws expand to cover ever more groups, the more humans act either to game the system to their advantage or to avoid it. As a consultant I bounce from gig to gig and have many interviews. Every interview I've ever had requires a college degree, which I lack. Yet that has never been the reason to approve or reject me. So why is a college degree listed as a requirement for a job? For the same reason that experience in a variety of skills is allegedly a requirement.

Rarely does a new hire meet all requirements. They are almost always waived. Requirements exist so a qualified candidate can be rejected for reasons nobody wants to say officially. Qualified candidates for back office jobs are rejected for body odor, bad breath, uglyism, and reasons that have zero to do with job performance. Of course, on occasion candidates are unlawfully rejected due to race, religion, etc.

In conclusion, predictive analytics, i.e., profiling, is inherently discriminatory. When an individual has an X percent chance of being undesirable, he is rejected. That means the other 100 – X percent face unfair discrimination. That is the essence of predictive analytics.

When discrimination (predictive analytics) is used for voluntary activity, it is ethical. When commercial ads or political news target those who meet a certain profile, it is ethical. When government tells us what we must and must not profile, we need to tread cautiously. The ugly are the most discriminated-against demographic group in the US. Should they be added to the ever-growing list of special people? Then short people next? At some point we need to recognize that life will always be unfair; laws and the use of government coercion cannot solve every problem.

Adding it, too
  • 2/22/2017 2:46:43 PM

I remember reading about this book when it came out. I just downloaded it to my Kindle. Excited to read it, too.

Just heard about the book
  • 2/22/2017 2:19:22 PM

I just heard about the book Weapons of Math Destruction through a friend who used it for a reference in his book. I am looking forward to reading it for my writings as well.
