Prejudice in A.I.

Posted by Rennan Tilbury on Aug 29, 2019 8:30:00 AM

Over the past 25 years, computers have slowly encroached on the domain of human intellect. The machines' first victory came in 1997, when Kasparov lost to Deep Blue at chess [1]. Since then, A.I. has continued to expand the list of games it can beat us at, from Go to poker and shogi to StarCraft [2]. Even Arimaa, a game specifically designed to be difficult for computers to win, was mastered beyond any human's ability in 2015 [3]. In 2017, Google DeepMind's "AlphaZero" algorithm proved able to master almost any such game after a scant few days of training. AlphaZero works on a conceptually simple loop: given only the rules of a game, it plays enormous numbers of games against itself, reinforcing the move choices that lead to wins and abandoning those that lead to losses. After a few million such games (something modern hardware can churn through shockingly quickly), AlphaZero will have taught itself to beat even the most experienced of human players [2].
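To make that loop concrete, here is a toy sketch in Python. It plays Nim (take one to three stones each turn; whoever takes the last stone wins) rather than chess, and it uses simple win-rate statistics rather than the neural networks and tree search of the real system, so it is an illustration of the self-play idea, not AlphaZero itself.

```python
import random

# Toy self-play learner for Nim: players alternately take 1-3 stones,
# and whoever takes the last stone wins. Purely illustrative - real
# systems like AlphaZero use neural networks and tree search, but the
# loop is the same: play yourself, reinforce the moves that won.

WINS = {}   # (stones_left, move) -> games won after making this move
PLAYS = {}  # (stones_left, move) -> games in which this move was made

def choose(stones, explore=0.1):
    """Pick the move with the best win rate so far, or explore at random."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: WINS.get((stones, m), 0)
                                    / (PLAYS.get((stones, m), 0) + 1))

def self_play(stones=15):
    """Play one game against ourselves and update the win statistics."""
    history = ([], [])            # moves made by player 0 and player 1
    player = 0
    while stones > 0:
        move = choose(stones)
        history[player].append((stones, move))
        stones -= move
        if stones == 0:
            winner = player       # taking the last stone wins
        player = 1 - player
    for p in (0, 1):
        for key in history[p]:
            PLAYS[key] = PLAYS.get(key, 0) + 1
            WINS[key] = WINS.get(key, 0) + (1 if p == winner else 0)

for _ in range(50_000):
    self_play()
print(choose(15, explore=0))      # typically 3: leave a multiple of 4
```

For a game this small, the raw statistics are enough to recover optimal play: the learner has the rules, a clear goal, and an endless supply of experiments, all three of the ingredients discussed below.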

These achievements of machine learning, whilst shockingly impressive, stand in stark contrast to the failures of A.I. in the real world. An A.I. put to work in hiring at Amazon demonstrated shocking levels of prejudice against women [4], whilst Microsoft's attempt at a chatbot, "Tay," had to be shut down within a day of release as it began tweeting racist and sexist messages [5]. The contrast between the success of AlphaZero and the failures of Tay and Amazon's A.I. provides a useful look into the scenarios A.I. is well equipped to handle, and those it is not.

AlphaZero’s ability to succeed comes from three factors:

  1. A controlled initial training set.
  2. A clear, simple, and correct goal.
  3. The ability to continually experiment.

In AlphaZero's case, the initial training set comprises the rules and possible moves of whatever game it will play; the goal is to win the game; and the ability to continually experiment is provided by the A.I.'s capacity to create software simulations of the game for itself.

Tay possessed the ability to continually experiment, having been released onto Twitter to interact with other users. However, Tay lacked a controlled training set: its training was instead provided by the aggregate of tweets sent to it by other users. Once Tay's nature as an A.I. was exposed, internet trolls bombarded her with tweets espousing racist and sexist narratives. With no controlled training set to mark these messages as unacceptable, she incorporated them into her own speech and began to repeat them back, leading to the catastrophic failure.

Amazon's A.I. lacked both a correct goal and the ability to experiment. Whilst the goal was ostensibly to hire the best applicant for the job, the reality of machine learning required that the A.I. be primed with a data set. That data set comprised past applicants to Amazon's software development positions, together with the record of whether each had been successful. Given that Amazon's software development had been dominated by men, and its hiring procedures had favoured men in the past, the A.I. began to penalise résumés containing references such as "women's chess club" or those hailing from well-known all-women's colleges [4]. Unlike AlphaZero, Amazon's A.I. had to make decisions within the context of the real world; the machine cannot create simulations with which to train itself, and is instead forced to train on data from real-life outcomes. In any context in which systemic prejudice has heavily skewed the data, an A.I. trained on that data will repeat the prejudices of the past rather than moving beyond them.
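The mechanism is easy to demonstrate with a fully hypothetical simulation. The data below is invented and bears no resemblance to Amazon's actual system; the point is only that a model fit to historically biased decisions (here with scikit-learn) learns the bias as though it were signal.

```python
# Hypothetical illustration - invented data, not Amazon's system.
# Fit a simple classifier to synthetic "historical" hiring records in
# which equally skilled women were hired less often, and observe that
# the model learns a negative weight on the gender feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
skill = rng.random(n)                 # genuine qualification, 0..1
is_woman = rng.random(n) < 0.2        # a male-dominated applicant pool
bar = np.where(is_woman, 0.7, 0.5)    # biased history: women faced a higher bar
hired = (skill > bar).astype(int)

X = np.column_stack([skill, is_woman.astype(float)])
model = LogisticRegression().fit(X, hired)
print("weight on skill: ", model.coef_[0, 0])   # large and positive
print("weight on gender:", model.coef_[0, 1])   # negative: learned prejudice
```

The model is rewarded for reproducing the historical decisions, and the quickest way to do that is to penalise being a woman, even though skill is the only genuine qualification in the data.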

This same issue recurs in several different contexts and guises. An A.I. put in charge of loan management may overwhelmingly reject applications from people of colour based on the historic prejudice of bank managers [6], or military drones may select targets on specifically racist criteria. Even more insidiously, however, our attempts to teach A.I. to use language have revealed another level of prejudice that A.I. are especially vulnerable to.

Vector learning

A.I. are taught language from huge data dumps, usually drawn from Wikipedia or Google's archives: millions of pages written almost entirely by ordinary people. The A.I. then tries to understand how words relate to one another via a process known as vector learning. In simple terms, the A.I. learns that applying a certain characteristic to one word yields another: take the word "man," apply the characteristic "unmarried," and you arrive at "bachelor." The way we use language, however, is often explicitly problematic. For example, an A.I. learning to give careers advice may create the vector "medical professional," so that if an interaction mentions medicine or hospitals it can direct the applicant towards a medical career. Here our language creates a problem: based on the average language use of the population, "medical professional" applied to a man would give "doctor," whereas the same vector applied to a woman would instead give "nurse." Two wholly different career paths would then be recommended to two functionally identical candidates based on a learned prejudice [7].
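A minimal sketch shows the mechanism. The vectors below are tiny and hand-made, with dimensions invented for illustration, rather than real embeddings trained on millions of pages; but the principle is the same: relationships between words become directions in a vector space, so any gender skew in the source text becomes a direction too.

```python
import numpy as np

# Hand-made toy vectors; the three dimensions (royalty, gender, medical)
# are invented for illustration, not learned from real text.
vecs = {
    "man":    np.array([0.0,  1.0, 0.0]),
    "woman":  np.array([0.0, -1.0, 0.0]),
    "king":   np.array([1.0,  1.0, 0.0]),
    "queen":  np.array([1.0, -1.0, 0.0]),
    "doctor": np.array([0.0,  0.6, 1.0]),   # gender skew absorbed from text
    "nurse":  np.array([0.0, -0.6, 1.0]),
}

def nearest(v, exclude=()):
    """Return the word whose vector has the highest cosine similarity to v."""
    return max((w for w in vecs if w not in exclude),
               key=lambda w: np.dot(vecs[w], v)
                             / (np.linalg.norm(vecs[w]) * np.linalg.norm(v)))

# The famous benign analogy: king - man + woman ~ queen ...
print(nearest(vecs["king"] - vecs["man"] + vecs["woman"],
              exclude=("king", "man", "woman")))      # queen
# ... and the problematic one: doctor - man + woman ~ nurse
print(nearest(vecs["doctor"] - vecs["man"] + vecs["woman"],
              exclude=("doctor", "man", "woman")))    # nurse
```

The same arithmetic that powers the celebrated "king - man + woman = queen" analogy also delivers "doctor - man + woman = nurse" once the gender skew is in the vectors.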

The situation for machine learning is far from hopeless. Manual input can train A.I. away from prejudice, either by providing vetted training data or by adding artificial weightings to counteract imperfections in the data set. But whilst the potential of A.I. is great, it is deeply important for researchers in the field to remain vigilant and to remember that A.I. is far from infallible. A.I. are, in many cases, more susceptible to bias and prejudice than humans, and it is the duty of anyone creating A.I. to limit and counteract these issues.
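One concrete form such an artificial correction can take, sketched here on the toy vectors from the previous example: estimate a "gender direction" from a definitional pair such as man/woman, then subtract each word's projection onto that direction. This is, in miniature, the "hard debiasing" approach of Bolukbasi et al. (2016), offered as a sketch of the idea rather than a complete solution.

```python
import numpy as np

# Toy debiasing sketch using the invented vectors from the earlier
# example - a miniature of "hard debiasing", not a production method.
man    = np.array([0.0,  1.0, 0.0])
woman  = np.array([0.0, -1.0, 0.0])
doctor = np.array([0.0,  0.6, 1.0])

# Estimate the gender direction from a definitional pair ...
g = (man - woman) / np.linalg.norm(man - woman)

def neutralise(v):
    """Remove a vector's component along the gender direction."""
    return v - np.dot(v, g) * g

print(neutralise(doctor))   # [0. 0. 1.]: the gender skew is gone
```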

 ---

  1. https://www.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
  2. https://newatlas.com/ai-2017-beating-humans-games/52741/
  3. http://arimaa.com/arimaa/
  4. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
  5. https://www.techrepublic.com/article/why-microsofts-tay-ai-bot-went-wrong/
  6. https://nickbostrom.com/ethics/artificial-intelligence.pdf
  7. https://www.kcl.ac.uk/news/artificial-intelligence-is-demonstrating-gender-bias-and-its-our-fault

This blog post was written by Rennan Tilbury, a contributor at InspiredMinds, currently reading philosophy at Durham University with a particular focus on Philosophy of Consciousness and Critical Theory. His work on A.I. focuses both on the ethics of human interactions surrounding A.I. and on the ethical treatment of A.I. themselves.

----

Join over 6,000 attendees from 150+ countries at the world's leading AI summit, World Summit AI this October in Amsterdam. WSAI provides two days packed with phenomenal content and the ideal meeting place for international leaders to set the global AI agenda.

View the full programme and speaker line-up, and book tickets, by visiting worldsummit.ai


World Summit AI 2019
October 9th-10th
Amsterdam, Netherlands
worldsummit.ai

Topics: AI, AI Platforms, Data, Machine Learning