Ethics of AI: A Data Scientist's Perspective

Posted by QuantumBlack on Apr 13, 2020 12:17:34 PM

From the news we consume to our suggested Netflix content, algorithms are becoming increasingly influential in our day-to-day lives. But as more sectors recognise the economic potential of applying machine learning at scale, the impact of this technology is becoming far greater.

Today, machine learning models are frequently applied to guide decisions in areas such as assessing applicant risk for loans. Suddenly, a computer’s decisions amount to far more than simply which films we’re most likely to enjoy. However, we often find that increasingly complex machine learning models lack ‘explainability’ — the professionals depending on them can’t interpret them.


When these decisions could potentially have a big impact, “The black box said to do it” is simply not enough justification. This gap in understanding increases the risk that these models may suffer from algorithmic bias, as there is no human oversight to counteract unfairness in the process. This can result in the machine making an unjust or even discriminatory decision based on ‘protected’ features such as race, religion or sex.

As the adoption and influence of this technology grows, the role of a data scientist has never been more crucial. Maintaining fairness in technology-driven decisions is key — so how do we build and maintain models that are aware of bias and actively mitigate it?

How can machines discriminate?

First, it’s important to define the three different ways that algorithmic bias can manifest in models. Take loan underwriting, for example:

Direct discrimination can occur when an algorithm penalises a protected group based on sensitive attributes, such as race or sex, or based on a specific feature commonly possessed by a protected group. For example, the loan underwriting algorithm may judge ethnic minority or female applicants as a higher loan risk than a Caucasian male applicant, based either directly on their ethnicity or on their zipcode (the part of the city where they reside).

Indirect discrimination occurs when an algorithm consistently produces a disparate outcome for a protected group even though it does not overtly make use of obvious group features. For example, the underwriting model may disproportionately grant loans to men because it relies on income, which is higher on average for male applicants.

Individual discrimination occurs when an algorithm treats an individual unequally to another despite both possessing similar features — e.g. an ethnic minority candidate and a Caucasian candidate are matched in age, profession and income, yet the model judges the ethnic minority candidate to be the higher risk. (A minimal check for each of these patterns is sketched below.)
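
To make these patterns concrete, here is a minimal sketch of how each one might be surfaced on a toy loan dataset. The data, column names and feature list are invented for illustration; this is not QuantumBlack’s tooling.

```python
import pandas as pd

# Toy loan data: values and column names are hypothetical illustrations.
applications = pd.DataFrame({
    "sex":      ["F", "M", "F", "M", "F", "M"],
    "income":   [48_000, 52_000, 61_000, 60_000, 39_000, 41_000],
    "approved": [0, 1, 0, 1, 1, 1],
})

# Direct discrimination risk: is a protected attribute (or a known proxy such
# as zipcode) available to the model as an input at all?
model_features = ["sex", "income"]            # hypothetical feature list
print("Protected feature used directly:", "sex" in model_features)

# Indirect discrimination risk: do outcomes differ systematically across groups?
print(applications.groupby("sex")["approved"].mean())

# Individual discrimination risk: do two otherwise similar applicants who differ
# only in the protected attribute receive different outcomes?
matched_pair = applications.iloc[[2, 3]]      # similar income, different sex
print(matched_pair)
```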

So what risk mitigation techniques and controls can data scientists implement when building models to ensure potential bias is managed? At QuantumBlack, we deploy a three-part framework to detect, diagnose and handle this.

Detect

To determine whether our models have the potential for discrimination, we first work out which metrics we are measuring. We ask ourselves questions like: what are the potential sources of bias? Is there a risk in the dataset we’re using? Is there a risk in the features we’re measuring? With loan underwriting, the model may not deliberately be tasked with considering gender, but if income is being measured then there is a risk of different outcomes for men and women.

Data scientists should apply robust detection techniques and predetermined thresholds to determine whether their models are at risk of producing unequal treatment or outcomes based on these measurables.

There is a range of criteria to measure against, but fundamentally data scientists must ask whether their model explicitly penalises those who possess sensitive features, such as membership of specific racial groups. Does it provide equal odds and opportunity for each subgroup, and does it apply the same decisions to individuals with similar features? Your model may be very accurate, but erroneous results do occur — are these errors falling disproportionately on one particular group? Techniques such as situation testing and calculating the mean difference between outcomes can be useful in determining whether the dataset being used is inherently biased with respect to a protected group.
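
As a concrete illustration of two of these checks, the short sketch below computes the mean difference in outcomes between groups and an equal-opportunity style gap in true positive rates. The predictions and column names are assumptions invented for the example, not output from any particular project.

```python
import numpy as np
import pandas as pd

# Hypothetical predictions from a loan model; 'group' is the protected attribute.
df = pd.DataFrame({
    "group":  ["A", "A", "B", "B", "A", "B", "A", "B"],
    "y_true": [1, 0, 1, 0, 1, 1, 0, 0],      # actually repaid the loan
    "y_pred": [1, 0, 0, 0, 1, 1, 1, 0],      # model's approval decision
})

# Mean difference in outcomes (statistical parity gap): difference in approval rates.
approval_rates = df.groupby("group")["y_pred"].mean()
print("Mean difference:", approval_rates["A"] - approval_rates["B"])

# Equal opportunity gap: difference in true positive rates between groups,
# i.e. are errors falling disproportionately on one group?
def true_positive_rate(sub):
    positives = sub[sub["y_true"] == 1]
    return positives["y_pred"].mean() if len(positives) else np.nan

gap = true_positive_rate(df[df["group"] == "A"]) - true_positive_rate(df[df["group"] == "B"])
print("True positive rate gap:", gap)
```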

Diagnose

To address any identified unequal treatment or outcomes, it’s crucial to understand why this has happened — is your model overly dependent on the input of protected attributes? In other words, are inputs such as race or religion playing an important role in the computer’s decision?

When dealing with complex models, the first step in understanding the root cause of any bias is to understand how the algorithm is functioning (i.e. making the model ‘explainable’). A range of frameworks is available here. For example, SHAP and LIME measure and explain the impact of particular features on the model’s outcome. Both can be applied to explain the outcome of any kind of model, making them popular options — however, they are also computationally expensive and so require a significant amount of time and processing power to execute.
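
As a minimal sketch of what this looks like in practice, the example below trains a model on synthetic data and uses SHAP’s TreeExplainer to see how much weight the model places on the protected attribute. The feature names and data are invented for illustration, and the snippet assumes the shap package is installed; it is not a description of any particular production pipeline.

```python
import numpy as np
import pandas as pd
import shap                                    # assumes the shap package is installed
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic loan-style data; column names are illustrative assumptions.
rng = np.random.default_rng(0)
n = 500
X = pd.DataFrame({
    "sex":    rng.integers(0, 2, n),           # protected attribute, encoded 0/1
    "income": rng.normal(50, 12, n),           # income in £000s
    "tenure": rng.integers(0, 30, n),          # years in current job
})
# Label deliberately leaks some dependence on the protected attribute.
y = ((X["income"] + 8 * X["sex"] + rng.normal(0, 10, n)) > 55).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each prediction to the input features, so we can see
# how much of the model's behaviour is driven by 'sex' or its proxies.
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = pd.DataFrame(np.abs(shap_values), columns=X.columns).mean()
print(importance.sort_values(ascending=False))
```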

As well as explainability techniques, data scientists can also deploy counterfactual analysis — put simply, this tests whether outcomes change when a particular feature is modified.
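
A toy version of such a counterfactual test might look like the following. The model, features and values are invented for illustration, and a real counterfactual analysis would typically perturb many records and features rather than a single row.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Tiny invented training set: income is in £000s, 'sex' is encoded 0/1.
train = pd.DataFrame({
    "sex":    [0, 1, 0, 1, 0, 1, 0, 1],
    "income": [42, 45, 61, 58, 39, 72, 55, 50],
    "repaid": [0, 1, 1, 1, 0, 1, 1, 0],
})
model = LogisticRegression().fit(train[["sex", "income"]], train["repaid"])

# Take one applicant, flip only the protected attribute, and compare scores.
applicant = pd.DataFrame({"sex": [0], "income": [48]})
counterfactual = applicant.assign(sex=1 - applicant["sex"])

p_original = model.predict_proba(applicant)[0, 1]
p_flipped = model.predict_proba(counterfactual)[0, 1]

# A material shift when only the protected attribute changes is a red flag
# for direct dependence on that attribute.
print(f"score: {p_original:.3f} -> {p_flipped:.3f}")
```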

Handle

Once the risk of unequal treatment or outcomes has been identified in the current model, how do we mitigate it?

· Pre-processing — we can address bias in our datasets before model estimation even begins, using fair representation techniques such as the Fair Variational Auto-Encoder (FVAE) to remove or minimise sensitive information while preserving the non-sensitive information in the data (a simplified sketch of this idea follows this list).

· In-processing — we can also incorporate techniques to address bias as the model is being trained, instructing the model to penalise underlying parameters and results which are not fair. For example, fair decision tree learning weighs the importance of each feature by whether it contributes more to fairness or to accuracy, prioritising a fair outcome. Fair SVM adds fairness constraints to the model alongside its accuracy objective, so the model strives to produce results that are both fair to all protected groups and accurate (a toy illustration of the general idea follows this list).

· Post-processing — finally, we can address bias after the model has produced its output by refining the results themselves. For example, we seek well-calibrated model scores by processing the model’s raw outputs to ensure calibration for all protected groups. Post-processing approaches can also adjust decision thresholds to optimise a number of fairness measures, including statistical and classification parities (a minimal example follows this list).
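
On pre-processing: the FVAE itself is a neural model and beyond the scope of a short example, so the sketch below illustrates the same intent with a much simpler, assumed approach on invented data, stripping out the part of each non-sensitive feature that is linearly predictable from the protected attribute.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Invented data where income (in £000s) is correlated with the protected attribute.
rng = np.random.default_rng(1)
n = 1_000
sex = rng.integers(0, 2, n)
X = pd.DataFrame({
    "income": 45 + 8 * sex + rng.normal(0, 6, n),
    "tenure": rng.integers(0, 30, n),
})

# Remove the linearly predictable 'leak' of the protected attribute from each feature.
debiased = X.copy()
for col in X.columns:
    leak = LinearRegression().fit(sex.reshape(-1, 1), X[col])
    debiased[col] = X[col] - leak.predict(sex.reshape(-1, 1)) + X[col].mean()

# Correlation with the protected attribute drops towards zero, while the features
# remain usable for downstream modelling.
print("before:", np.corrcoef(sex, X["income"])[0, 1])
print("after: ", np.corrcoef(sex, debiased["income"])[0, 1])
```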
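
On in-processing: as a toy illustration of the general idea (a fairness penalty added to the training objective, not the specific fair decision tree or Fair SVM methods named above), the sketch below trains a logistic regression by gradient descent with an extra penalty on the covariance between the protected attribute and the model’s scores, again on invented data.

```python
import numpy as np

# Invented data: scores should predict repayment but not track the protected attribute.
rng = np.random.default_rng(2)
n = 2_000
sex = rng.integers(0, 2, n)
income = 45 + 8 * sex + rng.normal(0, 6, n)
X = np.column_stack([np.ones(n), (income - income.mean()) / income.std()])
y = (income + rng.normal(0, 6, n) > 50).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(2)
lr, lam = 0.1, 2.0                      # lam sets the fairness/accuracy trade-off
s_centred = sex - sex.mean()

for _ in range(2_000):
    scores = X @ w
    p = sigmoid(scores)
    grad_loss = X.T @ (p - y) / n                          # logistic loss gradient
    cov = s_centred @ scores / n                           # unfairness proxy
    grad_fair = 2 * cov * (X.T @ s_centred / n)            # gradient of cov**2 penalty
    w -= lr * (grad_loss + lam * grad_fair)

p = sigmoid(X @ w)
print("Approval-rate gap between groups:", p[sex == 1].mean() - p[sex == 0].mean())
```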
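
On post-processing: the sketch below shows one simple, assumed adjustment in this spirit, choosing a separate decision threshold per protected group so that approval rates come out roughly equal, using invented scores. Real post-processing approaches also target calibration and error-rate parities, as noted above.

```python
import numpy as np

# Invented raw model scores that are systematically higher for one group.
rng = np.random.default_rng(3)
n = 1_000
group = rng.integers(0, 2, n)                              # protected attribute
scores = np.clip(rng.normal(0.5 + 0.08 * group, 0.15), 0, 1)

target_rate = 0.4                                          # desired approval rate
# Pick each group's threshold as the score quantile that yields the target rate,
# so both groups end up approved at (roughly) the same rate.
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}
approved = scores >= np.where(group == 1, thresholds[1], thresholds[0])

print("Thresholds:", thresholds)
print("Approval rates:", approved[group == 0].mean(), approved[group == 1].mean())
```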

Trade-off: Accuracy versus fairness

Of course, tweaking a model — by tinkering with the data, the processing phase or the outputs — will impact accuracy. Models must strike a balance between equal opportunities for all and productive, accurate results. This unavoidable trade-off is the subject of ongoing debate in data science, and there are no easy answers. Fundamentally, though, we simply cannot build systems that make decisions based on, or even influenced by, attributes such as race or gender. Not only is it illegal, it violates human rights and directly contradicts the progress that technology has enabled in all aspects of day-to-day life since its widespread adoption.

Holistic organisational approach

The ethics of AI bias will continue to be debated for years to come, and not just in data science. A business cannot successfully manage bias without a multidisciplinary approach. A wide range of departments, from design and operations to legal, must be consulted to ensure models and outputs are producing accurate, useful and fair results. Data scientists may form part of the frontline battling algorithmic bias — but they will fail unless the wider business world plays an active role in mitigating unfair AI decisions.


GLOBAL AI EVENTS CALENDAR


Here is your Global AI Events Calendar where you can meet the Inspired Minds community of business leaders, heads of government, policy makers, startups, investors, academics and media.

 

NEW! WORLD SUMMIT AI WEBINARS

worldsummit.ai/webinars

 

NEW! INTELLIGENT HEALTH AI WEBINARS

intelligenthealth.ai/webinars

 

NEW! INTELLIGENT HEALTH INSPIRED!

25-27 May 2020

Online

london.intelligenthealth.ai/inspired

 

INTELLIGENT HEALTH 

09-10 September 2020

Basel, Switzerland

intelligenthealth.ai

 

WORLD SUMMIT AI

13-14 October 2020

Amsterdam, Netherlands

worldsummit.ai

 

WORLD AI WEEK

12-16 October 2020

Amsterdam, Netherlands

worldaiweek.ai

 

INTELLIGENT HEALTH UK

2-3 February 2021

London, UK

london.intelligenthealth.ai

 

WORLD SUMMIT AI AMERICAS

20-21 April 2021

Montreal, Canada

americas.worldsummit.ai

 

Topics: AI, AI ethics, Data Science