The responsible AI framework


Strategy

1. Aligning with your strategic goals

It’s vital to align AI innovation with core strategic objectives and performance indicators, rather than allowing a scattered series of initiatives to operate in isolation. In our experience, a lot of organisations have set various pilots in train. What most aren’t doing is taking a fundamental look at how AI could disrupt their particular business and then determining the threats and opportunities this presents.

2. Don’t expect magic

AI may be intelligent, but it’s still a machine. A common problem is believing the AI will magically learn without human intervention. In reality, you have to put a lot of effort into acquiring and cleansing data, labelling and training both machines and employees[3].

3. Being clear about your partners

Everywhere you look, there are start-ups offering solutions to this and opportunities for that. Partnership with these vendors accelerates innovation, agility and speed to market. But it’s clearly important to pick your spot. This includes being clear about the strategic and operational priorities you’re looking to address through the choice of partner. It’s also important to bear in mind that while vendors may be good at selling the possibilities, they’re not always as clear about how to deliver them – the way they look at development risks is certainly very different from what you’re used to.

In a high-risk and fast-moving vendor landscape, the first consideration is the financial viability of the potential partners – will they still be there when you need them? It’s also important to determine how to acquire the necessary data, how to develop the knowledge needed to deploy your new capabilities and how to integrate new platforms into existing infrastructure. When buying commercially available off-the-shelf software, a proof-of-concept development phase is often necessary.

4. Opening up to scrutiny

Before you adopt AI, you clearly need to know what it’s doing and how. This includes ensuring the software can communicate its decision-making process in a way that can be understood and scrutinised by business teams. In relation to machine learning in particular, it’s important to think about how to ensure the software will deliver the anticipated results. Boards want this assurance before they proceed. Regulators are also likely to expect it. Algorithmic transparency is part of the solution, though this may require a trade-off between decision-making transparency, system performance and functional capabilities.

5. Demonstrating regulatory compliance

Regulators are having to move quickly to keep pace with emerging technologies, and we may see regulatory constraints that prevent adoption in heavily regulated industries such as healthcare and financial services. Developments such as the EU’s General Data Protection Regulation (GDPR) are heightening the challenges. Staying compliant with relevant regulatory requirements is essential to build trust in your AI platform.

6. Organisational structure

The changes in your business models as part of your overall AI strategy will also need to be reflected in your organisational structure. Your organisation needs a dedicated AI governance structure; this could include a nominated member of the C-suite and a central hub of technical expertise. Embedding data scientists throughout your business, whether through training or hiring, is essential to achieving AI organisational maturity.

Design

1. Opening up the black box

AI applications can communicate with customers and make important business decisions. But a lot of this is carried out within a black box, with the lack of transparency creating inherent reputational and financial risks. It’s important to ensure that the software is designed in a way that is as transparent and auditable as possible.

Proper governance and protection include the ability to monitor component systems, and to quickly detect, correct and, where correction isn’t possible, shut down ‘rogue’ components without having to take down whole platforms. Related priorities include identifying dependencies and being able to make modifications with minimal disruption if regulations or some other aspect of the operating environment changes.
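
As a rough illustration of this kind of component-level oversight, the sketch below (in Python, with made-up component names and thresholds) shows how health signals for an individual component might be triaged so a ‘rogue’ component can be corrected or isolated without taking the whole platform down.

```python
from dataclasses import dataclass
from enum import Enum


class ComponentState(Enum):
    HEALTHY = "healthy"
    DEGRADED = "degraded"   # flag for correction or retraining
    ISOLATED = "isolated"   # taken out of service; the rest of the platform keeps running


@dataclass
class ComponentHealth:
    name: str
    error_rate: float   # share of recent requests that failed
    drift_score: float  # deviation from the expected output distribution


def triage(health: ComponentHealth,
           error_threshold: float = 0.05,
           drift_threshold: float = 0.2) -> ComponentState:
    """Classify one component so it can be corrected or isolated on its own."""
    if health.error_rate > error_threshold or health.drift_score > drift_threshold:
        return ComponentState.ISOLATED
    if health.error_rate > error_threshold / 2:
        return ComponentState.DEGRADED
    return ComponentState.HEALTHY


print(triage(ComponentHealth("pricing-model", error_rate=0.08, drift_score=0.1)))
```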

2. Creating a compelling user experience

Many AI applications are judged on highly subjective user experience measures akin to IQ, personality and predictability. Even though the bulk of development may focus on the analytics, the success of the product will be determined by an emotional response. This subjectivity means that frequent feedback is required between product owners and developers to make sure evolving expectations and functionality are properly managed. Often it makes sense to bring in specialist user interface vendors or use your in-house digital team alongside the core analytics team.

AI may excel and often surpass humans at particular tasks or in certain subject domains, but is generally incapable of extending these skills or knowledge to other problems. This is not obvious to people who have to interact with AI, especially for the first time, and can cause frustration and confusion.

Branding and persona development (‘functionality framing’) are therefore key design considerations. Get it right and very basic software can appear human. Get it wrong and users will give up.

Some of the analysis performed by AI will inevitably be probabilistic, based on incomplete information. It’s therefore important that you recognise the limitations and explain them to customers. Examples might include how you present recommendations on investments from robo-advisors.

3. Embedding the control framework

The most effective controls are built during the design and implementation phase, enabling you to catch issues before they become a problem and also identify opportunities for improvement.

An important question is: who designs and monitors the controls? Both the breadth of application and the need to monitor outcomes require engagement from across the organisation. Control design requires significant input from business domain experts. Specialist safety engineering input is likely to be required for physical applications.

A key part of implementation is breaking the controls down into layers (‘hierarchical approach’).

At a minimum, there would be a hard control layer setting out ‘red lines’ and what to do if they’re breached. Examples might include a maximum transaction value for a financial market trading algorithm. In more complex applications such as conversational agents, you could introduce a ‘behaviour inhibitor’ that overrides the core algorithm when there is a risk of errors such as regulatory violation or inappropriate language.
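
A minimal sketch of what such a hard control layer could look like, assuming an illustrative £1m red line and a hypothetical escalation hook (the names and values below are not from any specific system):

```python
def alert_risk_team(order_value: float) -> None:
    # Hypothetical escalation hook; in practice this would page a human reviewer.
    print(f"Red line breached: order of {order_value:,.0f} requires human sign-off")


MAX_TRANSACTION_VALUE = 1_000_000  # illustrative red line, in GBP


def execute_trade(order_value: float, model_decision: str) -> str:
    """Hard control layer: check the red line before acting on the model's decision."""
    if order_value > MAX_TRANSACTION_VALUE:
        alert_risk_team(order_value)
        return "blocked"
    return model_decision  # e.g. "buy" or "sell" proposed by the trading algorithm


print(execute_trade(2_500_000, "buy"))  # breaches the red line -> blocked
print(execute_trade(250_000, "sell"))   # within limits -> passes through
```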

These core controls can be augmented by ‘challenger models’, which are used as a baseline to monitor the fitness and accuracy of the AI techniques or look for unwanted bias or deviations as the models learn from new data. Moreover, this approach can be integrated with continuous development to improve existing models or identify superior models for system upgrades.
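
As a simple sketch of the challenger-model idea, the example below (using scikit-learn and synthetic data as stand-ins for the production model and live feed) compares the ‘champion’ against a naive challenger baseline and flags the model for review if the performance gap narrows.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the live data feed.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

champion = LogisticRegression(max_iter=1000).fit(X_train, y_train)
challenger = DummyClassifier(strategy="prior").fit(X_train, y_train)  # naive baseline

champion_auc = roc_auc_score(y_test, champion.predict_proba(X_test)[:, 1])
challenger_auc = roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1])

# If the champion no longer clearly beats the challenger, something may have drifted.
if champion_auc - challenger_auc < 0.05:
    print("Champion adds little over the challenger - investigate drift or retrain")
else:
    print(f"Champion AUC {champion_auc:.2f} vs challenger {challenger_auc:.2f}")
```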


Development

1. Rethinking programme management

Applying conventional planning, design and build approaches to data-dependent AI developments is destined to fail. Innovating and proving the concept through iterative development is needed to handle the complexity of the problems encountered, and requires a high level of engagement from the product owners.

2. Managing data dependency

AI functionality is heavily data-dependent for machine learning model training and is likely to need a store of information known as a ‘knowledge base’. This often means initial design specifications and expectations are set beyond the limits of what can be supported by the data, no matter how ‘intelligent’ the software. A key requirement of data dependent projects is a discovery phase to outline data quantity, quality and the limits this places on the resulting models and functionality. This is one reason why AI software implementations require significant design iteration during the development phase.

3. Taking the time to test and train

For machine learning in particular, it’s important for the development team to apply best-practice tuning and cross-validation methods to avoid overfitting and other common problems.
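
A minimal sketch of this discipline, assuming scikit-learn and synthetic data in place of the real training set: hyperparameters are tuned against cross-validated scores, and generalisation is then checked on an untouched hold-out set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic data standing in for the project's training set.
X, y = make_classification(n_samples=1000, n_features=25, random_state=42)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Tune against cross-validated scores rather than a single split,
# which reduces the risk of overfitting the model selection itself.
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"max_depth": [3, 5, 10], "n_estimators": [100, 300]},
    cv=5,
    scoring="roc_auc",
)
search.fit(X_train, y_train)

# The untouched hold-out set gives a more honest view of generalisation.
print("Best cross-validated AUC:", round(search.best_score_, 3))
print("Hold-out AUC:", round(search.score(X_holdout, y_holdout), 3))
```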

To get a clear picture of the use case and user experience, programmes should bring in input from beyond the software design team, which may inevitably be too close to the project to look at it objectively. The monitoring should include testing to correct for functional blind spots.

One way to augment testing and cut down on the risks is to pilot new AI-based applications on a small scale first and encourage a thorough review by analysts and non-technical users in a ‘business-as-usual’ context. Expert judgement and additional contextual information allow further validation, impact assessment and tuning before launching AI initiatives on a larger scale.

In many ways, AI software development can be more akin to video game development than automation or web development, especially when it interacts directly with humans. Testing should reflect this through intense user experience evaluation and phased ‘beta testing’ on unseen audiences prior to release.

Finally, AI development may require a number of attempts to get it right. It’s therefore important to ensure you have implemented the right level of programme assurance and quality controls, which can provide early indication of when data, technology or model training methods are not sufficient to support the business case.

4. Setting confidence thresholds

A balance between automation and human validation/verification is crucial. Defining the right thresholds and confidence levels at which to trigger human intervention can be challenging, however. Too cautious and the AI provides limited value. Too relaxed and the AI assumes more risk than can be contained. Continual monitoring of business performance is essential to confirm the technology is operating within the expected parameters.
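
A minimal sketch of such a threshold, with purely illustrative values: predictions above the confidence threshold are automated, and anything below it is referred to a human reviewer.

```python
def route_decision(prediction: str, confidence: float,
                   threshold: float = 0.8) -> str:
    """Automate confident decisions; refer the rest to a human reviewer.

    Set the threshold too high and little is automated; set it too low and
    the system takes on more risk than can be contained.
    """
    if confidence >= threshold:
        return f"automated: {prediction}"
    return f"referred to human reviewer ({prediction} @ {confidence:.0%})"


# Illustrative decisions with varying model confidence.
for pred, conf in [("approve", 0.95), ("approve", 0.82), ("decline", 0.55)]:
    print(route_decision(pred, conf))
```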

Conversational AI agents engage in subjective communication with humans, so it’s important to ensure the confidence thresholds are set up to conform to social norms and user expectations.

Operating AI

1. Curbing unintentional bias

As more information becomes available and your model matures, it’s important to guard against unintended bias against particular groups. Transparency is vital to be able to spot these biases. For systems that learn through customer interactions, periodic functional monitoring, perhaps based on a set of standardised interactions, is recommended to catch any adverse ‘training drift’.
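
As a simple illustration of the kind of periodic check this implies, the sketch below (using pandas and a made-up decision log) compares outcome rates across groups and raises a prompt for investigation when the gap exceeds an illustrative tolerance.

```python
import pandas as pd

# Made-up decision log: model outcomes broken down by a protected attribute.
log = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   1,   0],
})

# Approval rate per group; a large gap is a prompt for investigation, not proof of bias.
rates = log.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
if gap > 0.2:  # illustrative tolerance
    print(f"Approval-rate gap of {gap:.0%} between groups - review for unintended bias")
```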

2. Guarding against attacks

Machine learning (especially deep learning) models can be duped by malicious inputs known as ‘adversarial attacks’. It is possible to find input data combinations that can trigger perverse outputs from machine learning models, in effect ‘hacking’ them. This can be mitigated by simulating adversarial attacks on your own models and retraining models to recognise such attacks. Specialist software can be developed to ‘immunise’ your models against such attacks. This should be considered in the design phase.
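
A minimal sketch of the idea, assuming a simple scikit-learn logistic model stands in for the production classifier: inputs are perturbed in the direction that increases the loss (a fast-gradient-sign-style attack), and the perturbed examples are then folded back into training as a crude form of adversarial retraining.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A simple model standing in for the production classifier.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]


def perturb(x: np.ndarray, label: int, epsilon: float = 0.5) -> np.ndarray:
    """Nudge the input in the direction that increases the model's loss."""
    p = 1 / (1 + np.exp(-(w @ x + b)))  # predicted probability of class 1
    grad_x = (p - label) * w            # gradient of cross-entropy w.r.t. the input
    return x + epsilon * np.sign(grad_x)


x_adv = perturb(X[0], y[0])
print("original:", model.predict([X[0]])[0], "adversarial:", model.predict([x_adv])[0])

# Crude mitigation: fold adversarial examples back into training and refit.
X_aug = np.vstack([X, np.array([perturb(xi, yi) for xi, yi in zip(X, y)])])
y_aug = np.concatenate([y, y])
hardened = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```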

3. Recognising the role of data as your key intellectual property

AI is only as effective as the data it learns from. Maintaining high-quality data and continuously evaluating the effectiveness of the model will be key to a successful AI platform. As data and technology applications move to the cloud, commercial advantage will be driven by the magnitude and scale of the ‘IP’ you hold.

Partnership with a vendor will often involve data exchange – i.e. intentionally or unintentionally passing on valuable IP. It’s therefore important to understand the value of the data you’re sharing and to closely monitor and manage its supply and use.

4. Looking out for systemic risks

The flash crash that hit financial markets in 2010 demonstrates what can happen when multiple AIs interact in unintended ways and this isn’t sufficiently monitored. Safeguards should include scenario planning, an understanding of your own vulnerabilities and the ability to respond quickly.

[1] PwC’s Dr Anand S Rao explores the ‘Five myths and facts about artificial intelligence’ in Predictive Analytics and Futurism, Society of Actuaries, December 2016.

Contact us

Chris Oxborough
Lead for Responsible AI, PwC United Kingdom
Tel: +44 (0)7711 473199

Euan Cameron
UK Artificial Intelligence and Drones Leader, PwC United Kingdom
Tel: +44 (0)7802 438423

Rob McCargow
Director, PwC United Kingdom
Tel: +44 (0)7841 567264