Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence, said in an interview with Topbots: “For every company that I go to, I ask them, ‘First and foremost, what are your company’s core values? What is your mission? And is your artificial intelligence in line with that mission?’” These are questions that every organization that has adopted or will adopt AI should be asking itself.
Artificial Intelligence is gaining ground quickly. Many organizations are far beyond the experimental phase, and tangible results have already been achieved in numerous industries. Positive outcomes range from more accurate sales forecasts and increased productivity to more efficient customer acquisition.
Organizations are implementing as many best practices as possible, though the limits of what AI can do will not be reached any time soon. However, for you as a CIO or CTO, it remains a challenge to cash in on AI in a responsible and ethical manner. But how can you govern, monitor and 'raise' responsible AI?
Companies that are currently implementing successful AI projects do not only have their eyes on the benefits to their business. The ethical side of AI also receives much-needed dedication. Adopters of AI are rightfully paying attention to the effects on people within their own organization, customers and society as a whole. This is one of the main conclusions of the ‘AI Momentum, Maturity & Models for Success’ survey by Accenture, Intel, and SAS in collaboration with Forbes Insight, conducted among leaders of global businesses.
Responsible AI in practice
As there is a clear bearing on people’s lives, the ethical impact is a crucial aspect of developing new AI solutions. Of the surveyed organizations, 72 percent already use AI in one or more business domains. Most of these organizations (70 percent) offer ethics training to their technology specialists; the remaining 30 percent either do not offer this kind of training, are unsure whether they do, or are only considering it. Moreover, only 63 percent of surveyed organizations utilizing AI have an ethics committee that reviews the use of AI.
These numbers show that although many companies are aware of the ethical impact of AI, others have yet to jump on the bandwagon. Taking such steps towards deploying responsible AI is increasingly important for you and your business, as incorrect output can have a multitude of undesired consequences for your stakeholders.
"Make sure that your technologies are accurately, responsibly and ethically deployed."
For example, AI can produce biased decisions that discriminate against certain populations. Word embedding, a widely used technique for processing masses of natural-language data, has been shown to categorize European American names as pleasant while characterizing African American ones as unpleasant. In September, an AI-powered customer service bot that assists customers with flight bookings for WestJet, a Canadian airline, misdirected a customer to a suicide prevention line in response to a happy review. And a month later, Reuters reported that Amazon had terminated its AI-based recruitment tool because it ruled out women as suitable candidates for tech positions.
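The name-association bias in word embeddings can be illustrated with a small, self-contained sketch. The three-dimensional vectors and names below are invented for illustration only; real audits (such as the WEAT test) use embeddings trained on large text corpora, which is where the historical bias comes from.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy, hand-made 3-d "embeddings". These values are invented for
# illustration; real bias audits use vectors learned from large corpora,
# where name/sentiment associations emerge from the training text.
emb = {
    "emily":      (0.9, 0.1, 0.0),
    "aisha":      (0.1, 0.9, 0.0),
    "pleasant":   (1.0, 0.0, 0.1),
    "unpleasant": (0.0, 1.0, 0.1),
}

def association(word):
    """Positive: the word sits closer to 'pleasant' than to 'unpleasant'."""
    return cosine(emb[word], emb["pleasant"]) - cosine(emb[word], emb["unpleasant"])

for name in ("emily", "aisha"):
    print(name, round(association(name), 3))
```

With real pretrained embeddings, scores like these are computed over sets of names and attribute words; a consistent sign difference between name groups is the statistical signature of the bias described above.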
These examples show that bias has been such an inherent part of our society for so long that when artificial intelligence analyzes past decisions, it inherits the bias of our historic decision-making. In the case of Amazon, the AI concluded from historical hiring data that résumés with male-associated wording were preferable when hiring tech employees. Besides gender bias, racial bias is also a common downside found in AI solutions. John Giannandrea, former Google AI chief and current Chief of Machine Learning and AI Strategy at Apple, hit the nail on the head last year when he said about AI and machine-learning algorithms: “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.” However, now “we can actually use AI to force ourselves to do what we previously chose to ignore,” Chowdhury said about the way forward.
As the AI revolution continues to accelerate, you will have to make sure your technologies are accurately, responsibly and ethically deployed.
Although there is significant awareness of the potential risks, applying ethical guidelines alone is insufficient. To prevent undesirable outcomes, you will need to implement specific technical guidelines to make sure that your AI systems are safe, transparent and verifiable, protecting your employees, clients, civilians, and organization.
Oversight is necessary
Implementing processes for reviewing the outputs of AI systems is vital to ensure ethical conduct. Artificial intelligence cannot function independently of human intervention. AI leaders understand this and nearly three-quarters of them reported careful oversight, evaluating the results at least once a week. On the other hand, only 33 percent of less successful AI adopters believe that oversight is necessary.
Additionally, almost half of AI frontrunners stated that their organization has a process for extending or overriding results that seemed controversial during review. Only 28 percent of less successful adopters of AI report having such processes in place.
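The review-and-override process the survey describes can be sketched in a few lines. The class and field names below are hypothetical, not drawn from any surveyed organization; the idea is simply that low-confidence model outputs are queued for a human reviewer, who can override them while keeping an audit trail.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One model output, kept for periodic human review."""
    input_id: str
    prediction: str
    confidence: float
    overridden: bool = False

class ReviewQueue:
    """Collect model decisions; flag low-confidence ones for human review."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold   # below this, a human must look
        self.decisions = []

    def record(self, decision):
        self.decisions.append(decision)

    def flagged(self):
        # Decisions under the confidence threshold go to a reviewer.
        return [d for d in self.decisions if d.confidence < self.threshold]

    def override(self, input_id, new_prediction):
        # A reviewer replaces the model's output; the audit flag is kept.
        for d in self.decisions:
            if d.input_id == input_id:
                d.prediction = new_prediction
                d.overridden = True

# Example: one confident decision passes, one is flagged and overridden.
queue = ReviewQueue(threshold=0.8)
queue.record(Decision("app-1", "approve", 0.95))
queue.record(Decision("app-2", "reject", 0.55))
for d in queue.flagged():
    queue.override(d.input_id, "approve")
```

In practice the threshold, the review cadence (the survey's leaders review at least weekly), and who may override are governance decisions, not code; the sketch only shows where those decisions plug in.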
These percentages indicate that the oversight processes of all AI adopters still need a considerable amount of work. “Although we are still in the very early phases of AI, the technology is already well ahead of the marketplace when it comes to the processes and procedures organizations have in place to provide oversight,” says Oliver Schabenberger, COO and CTO of SAS. He believes that “the technical capabilities [of AI] are ahead of our ability to cope with the technology.”
Employers vs. employees
While employees are apprehensive about the stability of their jobs amid the growth of AI, employers don’t seem to feel the same way. AI leaders believe that instead of replacing humans, artificial intelligence extends and enhances human activities. Nearly 60 percent of respondents strongly or completely agreed with the statement “we do not anticipate any impact on jobs due to AI’s implementation.”
This is in line with the fact that 64 percent strongly or completely agree that they are already seeing job roles elevated, not replaced, as a result of AI.
Although many organizations believe that AI will benefit the workforce, 57 percent are still concerned about its influence on employee and client relations. They worry that employees may feel threatened or strained. This concern can be alleviated through more education and awareness about the real impact of AI on the workplace.
AI will transform the relationship between people and technology.
“We believe AI will transform the relationship between people and technology,” said Athina Kanioura, Chief Data Scientist for Accenture Applied Intelligence. “The real excitement lies with new jobs on the horizon, such as ‘explainers’ who will be responsible for making AI explainable, or ‘trainers’ who will have responsibility for directing the development of AI systems so that they perform at a higher level.”
AI will indeed transform the relationship between people and technology. But contrary to what many believe, this disruption has and will continue to create numerous benefits to society as a whole. This is not to say, however, that no obstacles will have to be cleared along the way.
Setting up your responsible AI framework
Companies should no longer be asking themselves “Shall we deploy AI?” but instead: “How fast can we do it?” The same can be said about dealing with the risks and ethical aspects of AI. It is already a challenge to maintain or accelerate the pace of development, let alone establish trust with all the parties involved.
“Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” summarized Accenture's Rumman Chowdhury. “[…] However, organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’. They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society. Data scientists are hungry for these guidelines.”
Reliable information provision, transparency, and ethical frameworks are essential to overcoming these challenges. Now that nothing can stop the rise of artificial intelligence, the time has come to set up and implement your framework so that everybody can benefit from AI responsibly.