In control of your AI

1. Ensure clarity over AI strategy

While there is a constant need to evaluate and adapt, it’s still vital to be clear about your direction of travel.

  • Are you ready for the disruption from AI?
  • Have you considered the societal and ethical implications?
  • What business outcomes are you looking to achieve (e.g. product customisation or back office efficiency)?

2. Transparency by design

Adoption of AI will be an emotive subject within your organisation, among your customers and across wider society, so it’s important to consider how you will build stakeholder trust in the solution.

It’s important to build the controls framework into the solution up-front, rather than designing and applying it only once systems are developed and in operation. This includes a mechanism to monitor outcomes and compliance.

AI should be managed with the same discipline as any other technology-enabled transformation.

3. Build your AI organisation in advance

There are many possible models for developing an organisation-wide AI capability ranging from a centre of excellence and a dedicated board member to a ‘develop-and-steer’ strategy. Whatever approach you adopt, it’s important to ensure cross-organisational communication, collaboration and centralised co-ordination of AI initiatives.

4. Build data management into AI

If data is the new IP, it’s important to put in place mechanisms to source, cleanse and control key data inputs and ensure data and AI management are integrated.

If information is power, then AI is its zenith. Yet like all power it needs to be applied with insight, sensitivity and responsibility.

5. Integrate assurance into your AI operating model

Assurance over AI isn’t a one-off. You should assess the risks and opportunities as your AI platforms evolve.

Assurance over AI also involves more than just embedding new technology into operational processes. It requires business-wide evaluation to gauge outcomes, identify emerging risks and look out for opportunities.

Drawing on our wide-ranging research and work with clients, our responsible AI framework (Figure) is designed to provide transparency over the viability of an AI implementation and confidence that controls are in place to ensure business outcomes meet expectations.

The PwC Responsible AI Framework

Contact us

Chris Oxborough
Partner, Technology Risk (Emerging and Disruptive Technology)
Tel: +44 (0)207 212 4195

Laurence Egglesfield
Director – Technology Risk (Emerging and Disruptive Technology)

Euan Cameron
UK Artificial Intelligence Leader
Tel: +44 (0)20 7804 3554

Rob McCargow
Programme Leader – Artificial Intelligence, Technology & Investment
Tel: +44 (0)207 213 3273