Responsible AI: How PwC and Centrica are turning governance into an AI accelerator


To drive innovation and adoption of responsible AI, one of the UK's largest utility companies turned to governance as an accelerator.


Industry

Energy, Utilities & Resources

Our role

Governance and assurance

Featuring

GenAI upskilling

“AI governance shouldn’t be misunderstood as putting the brakes on innovation. It’s an accelerator,” says Leigh Bates, Partner, Global AI Trust Leader, PwC UK.

“By embedding effective and proportionate AI governance, organisations can adopt responsibly, earn stakeholder confidence, and unlock AI’s transformative value much more quickly and sustainably.”

That’s why the team at Centrica came to PwC. With over 200 artificial intelligence (AI) use cases identified across its subsidiaries, from improving demand forecasting to streamlining customer care, the energy services company needed to evolve its AI governance. The aim was to benefit safely from the capabilities of generative and agentic AI without slowing the pace of innovation.

As a technology-driven company, Centrica has a strong focus on innovation. With responsibility to customers front of mind, they created a Responsible AI Framework with clear guardrails to guide how they design, build and deploy AI. The framework prioritises safety, fairness and transparency, ensuring a human is always in the loop and that sustainability is built into their products and services.

“Our clients are often worried about the risk exposure of deploying generative AI into a production environment at scale,” says Bates. That sentiment is echoed by PwC research, which shows a quarter of business leaders think there’s a high level of risk attached to technology investments.

Identifying blind spots

Centrica’s focus was on proactively striking the right balance between capturing commercial value from AI and appropriately managing risks to data, reputation and compliance. This is crucial as the EU AI Act carries potential penalties for non-compliance of up to €35m, or seven per cent of global annual turnover. Centrica developed a Responsible AI Framework aligned with the company’s values, code and sustainability goals, engaging PwC to provide independent assurance, stress testing and guidance on how to implement the new framework internally.

Teams from across the business, covering governance and compliance, data management, cyber security, and technology and product development, were brought together through structured workshops, co-run by Ronnie Chung, former Group Head of Responsible AI, and Nadia Manan, Principal Group Data Governance Manager at Centrica, to provide feedback and put the framework to the test.

The process identified blind spots. “Risk hotspots, such as the responsible use of open-source AI models, were important to uncover,” says Trish Ani, AI and Modelling Senior Manager, PwC UK. “It meant the framework could address these risks, but in a proportionate way.”

Accelerating innovation

“Through our proven AI ‘trust by design’ approach, we built the foundations to make sure AI is ethical, sustainable, secure and explainable from the start,” says Bates. “We stress-tested systems, supported upskilling teams and created monitoring protocols to ensure ongoing performance and compliance.”

Leigh Bates
Partner, Global AI Trust Leader, PwC UK

The framework is designed to ensure AI governance is applied based on the risks involved for each project. “If you can be clear on what the lower-risk categories of AI use cases are, you don’t create unnecessary blockers,” he explains. “For example, using AI for internal operational processes typically poses a much lower risk than developing a customer-facing chatbot. It means you can categorise where it’s safer to go faster, rather than treating all AI projects in the same way.”

Because the framework has different layers of risk built in, employees are empowered to innovate. As Ani explains, “scaling responsible AI relies heavily on the workforce”. Chung and Manan kickstarted AI skills workshops, targeted innovation sprints and executive education schemes alongside the company’s existing digital literacy programme. Once the framework was in place, it was tested with live use cases, ranging from predictive maintenance for equipment repair to conversational AI support for contact centre agents.

Stress-testing

The Centrica team found that the Responsible AI Framework accelerated their adoption of AI by giving teams ownership and accountability. A clear example is how it enabled them to launch their LLM-based customer agent. As a free-text virtual agent, rather than one limited to pre-defined prompts and answers, it carries a higher risk, so stress-testing was crucial.

“When deploying a chatbot into the real world, we have a responsibility to go beyond technical readiness. True due diligence means rigorously addressing issues like hallucination and bias, not as afterthoughts, but as critical safeguards to ensure trust, reliability, and fairness at scale,” says Ani.

As part of the testing process, the team conducted a “red teaming” exercise, creating personas and scenarios that could lead to unreliable responses, data leakage and inaccuracies. “You have to think about the different permutations of questions the chatbot could be asked, including the potential vulnerabilities it could be exposed to by someone trying to hack the system. We then worked alongside the Centrica team to create the right controls and standards to ensure data is kept safe,” she explains.

The results have been impressive. Responses to customer queries have been highly accurate, with the agent resolving a high number of queries without the need for escalation. This has boosted efficiency, and customer satisfaction is strong, with a CSAT score of 4.7 out of 5.

Centrica’s rapid progress has been facilitated by their clear vision from the outset.

“Responsible AI reflects our commitment to doing AI with intention, purpose and impact. By grounding our solutions in good governance as an enabler, we ensure they deliver trusted and meaningful outcomes for the business, our customers and colleagues.”

Nadia Manan
Principal Group Data Governance Manager

Keeping pace

Looking ahead, the business has multiple projects lined up to continue innovating in this space, and safeguarding customers will remain a central principle throughout. Centrica plans to evolve its approach as the technology advances, using the ‘trust by design’ model to keep pace with new AI developments.

Bates agrees: “The right AI governance and controls must evolve in lockstep with technology innovation. With AI advancing at extraordinary speed (just think about AI agents and beyond), being prepared for change is essential to securing long-term sustainable value from AI investments.”

Our contributors:

Ronnie Chung

Former Group Head of Responsible AI, Centrica

Nadia Manan

Principal Group Data Governance Manager, Centrica

Leigh Bates

Partner, Global AI Trust Leader, PwC UK

Trish Ani

AI and Modelling Senior Manager, PwC UK



Contact us

Leigh Bates


Partner, Global AI Trust Leader, PwC United Kingdom

Tel: +44 (0)7711 562381

Trish Ani


AI and Modelling Senior Manager, PwC United Kingdom

Tel: +44 (0)7818 636970
