PwC's Responsible AI

AI you can trust

The potential offered by AI is exciting, but with it comes risk. If you’re implementing an AI solution, you need to be able to trust its outputs.

Your stakeholders, including board members, customers, and regulators, will have many questions about your organisation's use of AI and data, from how it’s developed to how it’s governed. Not only do you need to be ready to provide the answers; you must also demonstrate ongoing governance and regulatory compliance.

PwC’s Responsible AI Toolkit

Our Responsible AI Toolkit is a suite of customisable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner, from strategy through execution. With the Responsible AI Toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.

Our Responsible AI Toolkit addresses the five dimensions of responsible AI

Governance

Who is accountable for your AI system?

The foundation for responsible AI is an end-to-end enterprise governance framework. This focuses on the risks and controls along your organisation’s AI journey, from top to bottom.


Interpretability & Explainability

How was that decision made?

An AI system that human users cannot understand creates a “black box” effect, leaving organisations limited in their ability to explain and defend business-critical decisions. Our Responsible AI approach can help. We provide services to help you explain both overall decision-making and individual choices and predictions, tailored to the perspectives of different stakeholders.
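
To make the “overall decision-making” side concrete, the minimal sketch below (a hypothetical model and public dataset, not PwC’s own tooling) uses permutation importance to rank features by how much shuffling each one degrades a trained model’s held-out performance.

```python
# Minimal sketch: a global explanation via permutation importance.
# The model and dataset here are hypothetical placeholders, not PwC's toolkit.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Which features most affect held-out accuracy when their values are shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Instance-level explanations (why one particular prediction was made) would require additional techniques beyond this global view.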

Bias & Fairness

Is your AI unbiased? Is it fair?

An AI system that is exposed to the inherent biases of a particular data source risks making decisions that lead to unfair outcomes for a particular individual or group. Fairness is a social construct with many different, and at times conflicting, definitions. Responsible AI helps your organisation become more aware of bias and take corrective action to help systems improve their decision-making.
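
As an illustration of one widely used fairness definition, demographic parity, the minimal sketch below (using hypothetical predictions and group labels, not PwC’s toolkit) compares the rate of positive outcomes across two groups.

```python
# Minimal sketch: demographic parity difference between two groups.
# Predictions and group membership are hypothetical example data.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions (1 = approve)
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()

# A gap near zero suggests similar approval rates; a large gap may signal unfair outcomes.
print(f"P(approve | A) = {rate_a:.2f}, P(approve | B) = {rate_b:.2f}, gap = {abs(rate_a - rate_b):.2f}")
```

Other definitions (for example, equal error rates across groups) can conflict with demographic parity, which is why fairness criteria need to be chosen per use case.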

Robustness & Security

Will your AI behave as intended?

An AI system that is not stable and does not consistently meet performance requirements is at increased risk of producing errors and making the wrong decisions. To help make your systems more robust, Responsible AI includes services to help you identify weaknesses in models, assess system safety and monitor long-term performance.
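
One simple robustness probe, shown in the minimal sketch below (a hypothetical model and public dataset, not the toolkit’s own tests), compares a model’s accuracy on clean inputs against the same inputs with small random perturbations.

```python
# Minimal sketch: checking whether small input perturbations degrade accuracy.
# Model and data are hypothetical placeholders.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(scale=0.1, size=X_test.shape)  # small Gaussian noise

clean_acc = model.score(X_test, y_test)
noisy_acc = model.score(X_noisy, y_test)

# A large drop between the two scores indicates fragility worth investigating.
print(f"clean accuracy: {clean_acc:.3f}, perturbed accuracy: {noisy_acc:.3f}")
```

In practice the same comparison would be rerun over time against live data to monitor long-term performance drift.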

Ethics & Regulation

Is your AI legal and ethical?

Our Ethical AI Framework provides guidance and a practical approach to help your organisation with the development and governance of AI solutions that are ethical and moral. 

As part of this dimension, our framework includes a unique approach to contextualising ethical considerations for each bespoke AI solution, identifying and addressing ethical risks and applying ethical principles.

Euan Cameron

UK Artificial Intelligence Leader, PwC UK

Tel: +44 (0)20 7804 3554

Chris Oxborough

Global Emerging Tech Risk Assurance Leader/Responsible AI co-lead, PwC UK

Tel: +44 (0)207 212 4195


Contact us

Euan Cameron

UK Artificial Intelligence and Drones Leader, PwC United Kingdom

Tel: +44 (0)7802 438423

Rob McCargow

Director, PwC United Kingdom

Tel: +44 (0)7841 567264
