Conversations about AI are never complete without one word: ‘ethics.’ From exploratory discussions surrounding the famous Trolley Problem to more practical plans around data diversity, there are many considerations required to build and use AI responsibly in business and broader society.
We spoke to Laetitia Cailleteau, Global Managing Director, Conversational AI at Accenture, to find out how one of the most influential consultancy firms thinks about responsible AI, both internally and in discussions with their many global clients.
“One of the key challenges we see is that companies want to do the right thing, but they don’t have a clear reference framework for what ethical AI means in their use case or context, or guidance on how they can implement it. This holds them back from implementing AI solutions,” Cailleteau explains. They worry about potential unintended consequences and the impact they may have on their brand and workforce. “The real-world application of AI has presented new or accelerated challenges to the ethical use of technology that we didn’t see 10 years ago. We are starting to see the emergence of solutions to tackle these challenges, including our own Responsible AI offering and tools. Everyone is learning as they are doing it and in different contexts.”
So what are organisations to do without this reference framework for ethical practice?
For Cailleteau, ensuring that an AI algorithm and its underlying data are as unbiased and representative as possible is essential. “If you use AI for credit scoring based upon historical data then you must ensure that this data is not biased. As we know, historically, more men have applied for mortgages than women, so you must ensure this historical data does not result in bias against women. If you train your algorithm based upon data from an affluent city, you would need to ensure that you are not biased against people from less wealthy areas. There are a number of things you must look at to ensure that you have a transparent and fair outcome, but it also depends on the context. For example, checking for unwanted bias against women is valid in a credit score context, but it is less relevant in marketing for clothes, where you would want the system to know that you are a woman to ensure the products fit you.”
To ensure that companies, governments and other organisations working with datasets and AI algorithms can limit bias, Cailleteau believes there are questions that can be asked and techniques that can be employed. “I think the easy example is the male-female one, as typically people have data around your sex. In terms of the credit scoring example, did you take this data in your training set? Did you try to balance it to train your algorithm? Say, historically, there’s an 80-20 male to female split in taking out mortgages. Do I ignore this data, or try to balance it? You need to find techniques to make sure that there is no bias.”
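The rebalancing idea Cailleteau describes can be illustrated with a naive oversampling routine. This is a hypothetical sketch, not Accenture's actual tooling: `balance_by_group` and its parameters are invented for this example, and real projects would typically use more sophisticated techniques such as reweighting or synthetic sampling.

```python
import random

def balance_by_group(records, group_key, seed=0):
    """Oversample under-represented groups so every group appears
    equally often in the training set. `records` is a list of dicts;
    `group_key` names the attribute to balance on (e.g. "sex")."""
    rng = random.Random(seed)
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Randomly duplicate records from smaller groups up to the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced

# The 80-20 mortgage split from the example above.
data = ([{"sex": "M", "income": 40 + i} for i in range(80)] +
        [{"sex": "F", "income": 40 + i} for i in range(20)])
balanced = balance_by_group(data, "sex")  # now 80 of each
```

Whether to balance this way at all is itself a judgement call, as the interview notes: duplicating minority records can amplify noise in a small sample, which is why the question "do I ignore this data, or try to balance it?" has no one-size-fits-all answer.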
Unconscious bias, however, is a more difficult problem to tackle. With the norms of society having embedded bias into every one of us, it is becoming a common belief that we must be vigilant in not transferring those inherent human biases into AI systems. But that’s easier said than done. “There are millions of biases: sex, socioeconomic, age. The history of bias is found in our brain due to the fight or flight instinct, as it simplifies information. As we grow up, many factors such as education and upbringing change us, but the brain simplifies this. If you have a bad experience with a certain type of people, you may be biased against them in the future. You must be aware of your bias. There are many exercises we can do to find out our biases, to detect our brain’s biases from these simplifications. Once you are aware of these, you have to try and counter them.”
For Cailleteau, building responsible AI is about building with transparency, in good conscience and care: “There is a quote that says ‘the best way to shape your future is to create it,’ and I think if you aren't creating your future, you will miss out on the tremendous potential of this technology. We still need to do it with a lot of due diligence. It is important for companies to be clear on their ethical values and principles for their business, and create the right governance to manage day-to-day AI projects to make sure they are aligned with those values in an auditable and traceable way.”
And it isn’t just about building AI with the company values in mind, but building for a consumer base that expects ethical behaviour across the board. As Cailleteau puts it, “I think we must be transparent about our internal kitchen when it comes to AI, as the end consumer will probably ask companies, or won’t buy from brands unless they are transparent going forward… So I think there is no choice but to be part of it, and it will even become a competitive advantage going forward.”