Robust and secure artificial intelligence: A primer for the C-suite

Posted by PwC on Sep 2, 2019 9:45:00 AM

Key takeaways

  • Using artificial intelligence in business has many benefits; however, there are issues relating to security and capability that executives should be aware of.
  • AI systems must be capable of working under a variety of circumstances and still produce replicable, reliable results.
  • The security of AI — and the IP of the business — should address the system’s resilience to both manipulation and misappropriation.

So you’ve invested in artificial intelligence (AI). The first questions your board asks will likely relate to what it can do: how will it improve business processes, save money or provide better experiences for customers? However, to be responsible, there are two questions that should be asked first:

Will it perform as intended at all times?

Is it safe?

When we speak of robustness in AI, we’re talking about its ability to perform. Can the system perform consistently under dynamic circumstances? Security, on the other hand, is about the safety of the technology itself: whether it is protected from abuse or appropriation.

These aren’t just technical challenges for data scientists; they’re business problems. As such, any business strategy around the use of AI needs to include the testing of its robustness and security — before deployment. Doing so will allow potential weaknesses to be identified and decisions around any required mitigation or remediation to be taken to executive leadership. Solutions may demand compromise, perhaps even of a desired attribute of the system itself.

For example, a complex AI model might do a great job of predicting the probability of approval of a credit limit increase for a large proportion of customers. This is a useful tool for targeting offers and business development opportunities. But if the algorithm does less well for some groups of customers, it may lead to poor business outcomes. The C-suite needs to have oversight of the robustness of the AI to ensure that it is not only performing to the financial benefit of the organisation, but upholding company values as well.

While data scientists can, and should, identify and flag these trade-offs, it is the C-suite and related stakeholders who are ultimately accountable to customers and regulators, and who should balance AI decisions in light of broader business strategy.

So what are the key considerations business leaders should keep in mind?

Is the AI capability too sensitive?

Artificial intelligence should support and enhance human decision-making — using data. However, its performance can depend on the information available to it and on the sensitivity of its interpretation. Where a model is overfitted, for example, it may overreact to even small data changes, and this disproportionate reaction to minute input alterations can cause big problems.

Take, for example, an underwriting algorithm that assesses risk for customers. If it takes two similar people and fails to assign them similar credit scores (perhaps the applicants’ reported incomes differ by only a few thousand dollars per annum, yet they are assigned wildly different scores), it would be considered oversensitive. Where the algorithm’s outcomes are not reliably predictable and vary widely for near-identical inputs, the business should be concerned.

There are a number of ways to manage model sensitivities, but companies should first identify where and when a model is oversensitive and determine what the ethical and business implications might be.

Tools exist that can test for these kinds of sensitivities. For example, PwC’s recently developed Responsible AI toolkit focuses on three key dimensions when it tests AI sensitivity:

  • Can the AI model return identical predictions when presented with identical data? This is a basic level of model robustness: can it be relied on to do what it is expected to do?
  • Does it produce the expected outcomes based on data with similar features?
  • What are the model’s sensitivities as determined by ‘what-if’ analyses?
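
By way of illustration only (this is a generic sketch, not PwC’s toolkit), the first and third of these checks might look something like the following in Python. The model, features and tolerance threshold are all hypothetical:

```python
# Illustrative only: a minimal robustness check in the spirit of the
# questions above. Assumes a scikit-learn-style classifier; the model,
# feature columns and thresholds are all hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(seed=42)

# Stand-in training data: columns are [income, existing_debt, years_at_employer]
X = rng.normal(loc=[60_000, 15_000, 5], scale=[20_000, 8_000, 3], size=(1_000, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(0, 10_000, size=1_000) > 40_000).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

applicant = np.array([[55_000.0, 12_000.0, 4.0]])

# Check 1: identical inputs should always yield identical predictions.
p1 = model.predict_proba(applicant)[0, 1]
p2 = model.predict_proba(applicant)[0, 1]
assert p1 == p2, "Non-deterministic predictions: investigate before deployment."

# Check 3: a 'what-if' analysis. Nudge income by a few thousand dollars and
# flag the model as oversensitive if the approval probability swings sharply.
TOLERANCE = 0.20  # how much swing is acceptable is a business judgement
for delta in (-3_000, 3_000):
    what_if = applicant.copy()
    what_if[0, 0] += delta
    swing = abs(model.predict_proba(what_if)[0, 1] - p1)
    if swing > TOLERANCE:
        print(f"Oversensitive: income change of {delta:+,} moved the score by {swing:.2f}")
```

Note that the tolerance is deliberately a named constant: deciding how much score movement is acceptable for a small income change is a business decision, not a purely technical one.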

Once the organisation has an overview of where any sensitivities lie and what form they may take, their impact, severity and costs can be determined and a remediation plan designed before the AI system goes into production.

Can the algorithms be manipulated?

Scope for AI manipulation is just as concerning as oversensitivity. Where the AI system manages access to a resource, such as financial credit, it may be susceptible to being ‘gamed’ (manipulated in unintended ways for nefarious or unfair benefits). For instance, opportunists or criminals could exploit susceptibilities in the system by working out the rules of play, entering the criteria the AI program is looking for, and gaining access to funds.

Increasingly, organisations are using AI to detect financial crimes, such as insider trading.1 But criminals know this, and there is a risk that the business’ detection algorithms could be manipulated so that ongoing illegal activities occur under the detection threshold and thus remain effectively invisible.

One way to reveal an AI’s susceptibility to manipulation is to test combinations of data inputs using purpose-built algorithms that can fool the system into producing an unwanted result: for example, strategically attempting to obtain credit by exploiting underlying parameters and rules to identify weak points.
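
As a rough illustration of what such purpose-built probing can involve, the sketch below uses a crude random search (standing in for more sophisticated adversarial algorithms) to look for small input changes that flip a decision. It reuses the hypothetical credit model from the earlier sketch:

```python
# Illustrative sketch of probing a model for 'gameable' inputs: a rejected
# applicant searches for the smallest tweak that flips the decision.
import numpy as np

def find_gaming_tweak(model, applicant, n_trials=5_000, max_tweak=0.05, seed=0):
    """Randomly perturb each feature by up to +/- max_tweak (as a fraction)
    and return the first small change that flips the model's decision."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(applicant)[0]
    for _ in range(n_trials):
        noise = rng.uniform(1 - max_tweak, 1 + max_tweak, size=applicant.shape)
        candidate = applicant * noise
        if model.predict(candidate)[0] != baseline:
            return candidate - applicant  # the tweak that gamed the model
    return None  # nothing found within this (crude) search budget

# 'model' is the hypothetical credit model from the previous sketch.
rejected = np.array([[38_000.0, 20_000.0, 1.0]])
tweak = find_gaming_tweak(model, rejected)
if tweak is not None:
    print("Decision flipped by small input changes:", np.round(tweak, 0))
```

A flip found within a five per cent input change points to a decision boundary that a motivated applicant could discover through trial and error.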

The resulting information will enable the business to work with data scientists to determine the likely impacts, the mitigation measures to be implemented, and the cost of doing so.

Is the AI system secure?

An AI system is a critical part of the intellectual property of an organisation, but in many cases its exposure is built in by design, by way of customer interaction. Products and algorithms that vastly increase the efficiency and quality of service are also exposed via their use.2 So how can organisations secure their IP when the automated model must be shared?

Not all artificial intelligence IP is easily stolen, but determining whether a given model is easy or difficult to appropriate can be as simple as running purpose-built security testing tools against it.
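
One way to gauge this, sketched below under the same hypothetical setup as the earlier examples, is a model-extraction test: query the model the way any outsider could, train a surrogate on its answers, and measure how closely the surrogate replicates its behaviour. High agreement means the IP can effectively be cloned through the public interface:

```python
# Illustrative model-extraction test: how much of the model's behaviour can
# an outsider clone purely by querying it? Reuses the hypothetical credit
# model and feature ranges from the earlier sketches.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(seed=1)

# 1. Simulate an outsider's queries: plausible inputs labelled by the deployed
#    model's own answers. No access to the original training data is needed.
queries = rng.normal(loc=[60_000, 15_000, 5], scale=[20_000, 8_000, 3], size=(2_000, 3))
stolen_labels = model.predict(queries)  # 'model' is the earlier hypothetical model

# 2. Train a cheap surrogate on the stolen question/answer pairs.
surrogate = DecisionTreeClassifier(max_depth=5).fit(queries, stolen_labels)

# 3. Measure agreement on fresh inputs: high agreement means the model's
#    decision logic leaks out through its public interface.
fresh = rng.normal(loc=[60_000, 15_000, 5], scale=[20_000, 8_000, 3], size=(500, 3))
agreement = (surrogate.predict(fresh) == model.predict(fresh)).mean()
print(f"Surrogate matches the deployed model on {agreement:.0%} of fresh queries")
```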

Your business, your decision

Robustness and security in AI involve a multifaceted trade-off with other aspects of responsible AI: interpretability and explainability; bias and fairness; ethics and regulation; and governance. While there are a number of technical solutions available to identify potential robustness or security issues, the final implementation should ultimately be a business-led decision, not just a technical one.

---

This article is part of PwC’s Responsible AI initiative. Visit the Responsible AI website for further information on PwC’s comprehensive AI frameworks and toolkits – including tools for assessing AI sensitivity, algorithm manipulability and system security. You can also access the free PwC Responsible AI Diagnostic Survey.


Join PwC and the entire AI community at this year's World Summit AI in Amsterdam. View the full programme and speaker line-up, and book tickets, by visiting worldsummit.ai


World Summit AI 2019
October 9th-10th
Amsterdam, Netherlands
worldsummit.ai

 

Topics: AI, Strategy, Leadership