Algorithms' Dark Side: Embedding Bias into Code


(Image: Mmaxer/Shutterstock)

Does the shift toward data-driven, algorithmically guided business decisions assure us that organizations and businesses are operating to everyone's advantage? There are a number of issues that need to be addressed before we can say yes.

Numbers don't lie, or do they? Perhaps the fact that they are perceived to be absolutely objective is what makes us accept the determinations of algorithms without questioning what factors could have shaped the outcome.

That's the argument Cathy O'Neil makes in Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. While we tend to think of big data as a counterforce to biased human decisions, O'Neil finds that in practice, these models can reinforce biases even while claiming unassailable objectivity.

"The models being used today are opaque, unregulated, and incontestable, even when they're wrong. The math destruction posed by algorithms is the result of models that reinforce barriers, keeping particular demographic populations disadvantaged by identifying them as less worthy of credit, education, job opportunities, parole, etc.

Now the organizations and businesses that make those decisions can point to the authority of the algorithm to shut down any discussion that questions the decision. In that way, big data can be misused to increase inequality. Algorithms are not created in a vacuum; they are born of minds operating in a human context that already carries certain assumptions, so they can actually extend the reach of human biases rather than counteract them.
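To see the mechanism concretely, consider a toy example. The following sketch (in Python, with entirely invented loan data and a deliberately simplistic "model") shows how a system trained on biased historical decisions reproduces that bias while appearing perfectly objective:

# Hypothetical illustration: a "model" fit to biased historical decisions
# simply learns to repeat them. All names and data here are invented.

from collections import defaultdict

# Invented records: (neighborhood, repaid_prior_loan, was_approved).
# Past loan officers under-approved "zip_2" regardless of repayment history.
history = [
    ("zip_1", True, True), ("zip_1", False, True), ("zip_1", True, True),
    ("zip_2", True, False), ("zip_2", True, False), ("zip_2", False, False),
]

# "Training": record the historical approval outcomes per neighborhood.
approvals = defaultdict(list)
for neighborhood, _repaid, approved in history:
    approvals[neighborhood].append(approved)

def model(neighborhood):
    """Approve if past approvals in this neighborhood were mostly positive."""
    past = approvals[neighborhood]
    return sum(past) / len(past) > 0.5

print(model("zip_1"))  # True
print(model("zip_2"))  # False -- the bias is inherited, not computed anew

Nothing in that code discriminates explicitly; the bias arrives silently through the training data and comes out wearing the authority of mathematics.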

"Even algorithms have parents, and those parents are computer programmers, with their values and assumptions," Alberto Ibargüen, president and CEO and of the John S. and James L. Knight Foundation wrote in a blog post. ". . . As computers learn and adapt from new data, those initial algorithms can shape what information we see, how much money we can borrow, what health care we receive, and more."

The foundation's VP of Technology Innovation, John Bracken, told me about the foundation's partnership with the MIT Media Lab and the Berkman Klein Center for Internet & Society, as well as its work with other individuals and organizations, to create a $27 million fund for research in this area. The idea is to "bridge" people "across fields and nations," pulling together a range of experiences and perspectives on the social impact of artificial intelligence. AI will affect every aspect of human life, so it is important to think through the policies that will shape which tools get built and how they are implemented.

The fund, which is open to applicants beyond the founding university partners, may be used to explore a number of identified issues, including these:

  • Ethical design: How do we build and design technologies that consider ethical frameworks and moral values as central features of technological innovation?
  • Advancing accountable and fair AI: What kinds of controls do we need to minimize AI's potential harm to society and maximize its benefits?
  • Innovation in the public interest: How do we maintain the ability of engineers and entrepreneurs to innovate, create and profit, while ensuring that society is informed and that the work integrates public interest perspectives?

Independently of the organizations involved in the fund, the Association for Computing Machinery US Public Policy Council (USACM) has been doing its own research into the issues that arise in a world in which crucial decisions may be determined by algorithms. It recently released its guidance for businesses in its Principles for Algorithmic Transparency and Accountability.

The seven principles include awareness of "the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society"; the possibility of an audit of the data, algorithms, and models involved in a decision that may have been harmful; "redress for individuals and groups that are adversely affected by algorithmically informed decisions"; and the obligation of organizations to explain their processes and take accountability for their validity.
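The principles stop short of prescribing implementations, but the auditing idea is easy to make concrete. Here is a minimal, hypothetical sketch in Python of a demographic-parity-style check: compare a model's positive-decision rates across groups and flag a large gap for human review. The group labels, decision log, and threshold are all invented for illustration.

# Hypothetical audit sketch: compare positive-decision rates across
# demographic groups (a rough "demographic parity" check).

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, positives = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(approved)
    return {g: positives[g] / totals[g] for g in totals}

# Invented decision log for illustration.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)   # roughly {'A': 0.67, 'B': 0.33}
if gap > 0.2:  # invented review threshold
    print(f"parity gap {gap:.2f}: flag model for review")

A real audit would go deeper (error rates per group, proxy variables, and so on), but even a crude check like this makes disparate impact visible, and therefore contestable.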

As we incorporate ever more algorithms into the daily functions of businesses and other organizations, we will have to be mindful of the potential impact of decisions that may not be as objective as we assume them to be. Better data doesn't automatically translate into better results, and we have to be aware of potential problems if we are to address them.

Ariella Brown

Ariella Brown is a social media consultant, editor, and freelance writer who frequently writes about the application of technology to business. She holds a PhD in English from the City University of New York. Her Twitter handle is @AriellaBrown.

Hiring Trend: Machine Learning Reduces Bias, Increases Applicant Pool

Unilever is using machine learning tools to increase the pool of entry-level applicants, then tests and evaluates candidates in the pool to select those who are the best fits. Adding machine learning tools reduces bias, increases diversity, and makes the whole process more efficient, according to Unilever.

Analytics and School Attendance: A Laundry Story

Appliance maker Whirlpool suspected there was a correlation between student attendance and access to clean clothes. Here's the story of how the company placed washers and dryers in schools and tracked the difference it made in student performance.


Re: Biased assumptions in algorithms
  • 3/1/2017 3:25:54 PM

Ariella asks: "Do you think that assumption may have been due to the bias the designers had about their own travel habits?"

That was (and possibly still is) likely one factor. But the social context of the time probably also was a major factor. In the 1970s, the USA was just creeping out of the Transit Holocaust era (when the country's urban and regional transit infrastructure had been almost totally devastated, mainly by official policy and fiat) and traditional neighborhoods and historical structures were being systematically destroyed (something that Jane Jacobs campaigned against in her groundbreaking book The Death and Life of Great American Cities).

I'm nervous that a very similar "context" of adoration of a new transportation invention (robot cars) may be creeping back over the country, with associated impacts for the structure and habitability of our cities.


Re: Just heard about the book
  • 3/1/2017 10:02:03 AM

@Seth @Michelle, Though the bias inherent in survey wording is a bit different from the problems of algorithms, it is also something people should be aware of. I found this article on it: supersimplesurvey.com/blog/post/7-Tips-To-Minimize-Response-Bias

Re: Just heard about the book
  • 2/28/2017 10:17:27 PM

@Seth I thought of survey design too. It does seem to be a more difficult task to take bias out in this context, however.

Re: Just heard about the book
  • 2/28/2017 7:05:37 PM

@kq4ym - That's a very good suggestion. I've taken survey design courses that teach how to ask questions that are not biased. I would imagine it would be harder to do in coding, though, because in surveys you are looking for information, while in coding you are usually aiming for a specific result.

Re: Just heard about the book
  • 2/28/2017 5:57:19 PM

@rbaz Indeed. Marketers always aim at particular targets, identifying what traits go into the populations that may buy what they're selling.

Re: Just heard about the book
  • 2/28/2017 3:50:02 PM

For some time I've advocated consulting with sociologists, psychologists, ethicists, and others who may have valid input into how we pose our questions, analyze our data, and implement policies, so that we arrive at solutions that are non-discriminatory and valid given the science of behavior we have at the time. Bringing various views into the mix can certainly lead to more rational decision making and may be less prone to errors.

Re: Just heard about the book
  • 2/28/2017 11:59:16 AM

@Ariella, you are absolutely right. Targeting consumers based on race or ethnicity has always been done and is effective. There is nothing inherently wrong with that; it's intentionally denying service or steering people away that is concerning and wrong.

Re: Just heard about the book
  • 2/28/2017 9:23:44 AM

@impact While that sounds fairly straightforward, I'd think working it out would be quite a complicated affair.

Re: Biased assumptions in algorithms
  • 2/28/2017 9:23:00 AM

<The original models I scrutinized tended to generate more trips, more travel, but the algorithms were designed on assumptions that every household was in a low-density neighborhood (inner-city or suburban) and would be using cars for almost all trips. Also, the modal-split algorithms (deploying logit functions) seemed to incorporate the assumption that, for almost every traveler, you would have to pry their car from their cold, dead hands before they would ever consider boarding a bus or train.

So the planning models did predict huge travel growth, but almost entirely by personal motor vehicle, thus generating huge traffic volumes and implying the need for more and more roadway capacity (which, when actually constructed, then encouraged more traffic, thus fulfilling the models' predictions).>

@Lyndon_Henry I see. Do you think that assumption may have been due to the bias the designers had about their own travel habits?
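For readers who haven't run into modal-split models, the sketch below shows roughly what a binary logit mode-choice function looks like. The coefficients are invented for illustration; the point is how a built-in car-favoring constant, the kind of baked-in assumption described above, drives the predicted transit share toward zero.

# Hypothetical binary logit mode-choice sketch with invented coefficients.

import math

def transit_share(car_time, transit_time, car_bias):
    """Probability of choosing transit under a simple binary logit model."""
    u_car = car_bias - 0.1 * car_time  # utility of driving
    u_transit = -0.1 * transit_time    # utility of taking transit
    return math.exp(u_transit) / (math.exp(u_car) + math.exp(u_transit))

# Even when transit is faster, a strong built-in car preference
# all but erases the predicted ridership.
print(transit_share(car_time=30, transit_time=20, car_bias=0.0))  # ~0.73
print(transit_share(car_time=30, transit_time=20, car_bias=4.0))  # ~0.05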

Re: Just heard about the book
  • 2/27/2017 10:33:18 PM

When the math is found to be discriminatory, we need to address the issue by adjusting the model to correct for the bias. While it seems like manipulation, it's more of a correction.
