The Government’s AI Opportunities Action Plan and the newly published AI assurance roadmap (the roadmap) set out an ambition for the UK to become a global leader in AI and its assurance, positioning assurance as the key lever for trust, safety, and responsible deployment. Alongside this, the FCA has launched its AI Lab, including a new AI-powered sandbox and AI Live Testing, to support adoption and the trialling of innovative use cases.
As AI becomes more embedded in core financial services activities - from fraud detection and credit underwriting to claims processing - firms will increasingly be expected to test, validate and monitor their models more rigorously. In this article, we explore the UK’s emerging approach to AI assurance, the lessons from Singapore’s AI Verify initiative, and the implications for financial services firms.
The roadmap emphasises professionalising AI assurance through a new consortium that will lay the foundations of a future profession. This will include a voluntary code of ethics, a skills and competencies framework, and, in time, certification or registration schemes.
Alongside this, the roadmap introduces measures to strengthen sector capabilities: an £11m AI Assurance Innovation Fund to develop tools and testing methods; initiatives to map information-sharing between firms and assurance providers; and exploration of process certification and accreditation pathways. These efforts aim to ensure assurance services keep pace with fast-developing AI technologies and can independently verify their trustworthiness.
Challenges remain. A shortage of skilled professionals, the absence of widely accepted standards, and limited access to information all risk slowing progress. The roadmap also cites Singapore’s AI Verify Global Pilot as an example of international efforts to develop norms for testing generative AI applications, signalling the UK’s intent to align with and learn from global approaches in this area.
While not sector-specific, the roadmap’s emphasis on assurance as a driver of responsible adoption will affect the financial services sector, where assurance has become an area of focus for financial authorities internationally and, increasingly, for firms themselves.
As noted in our article on regulators’ approach to AI published in June 2025, the FCA and BoE have confirmed they will rely on existing regulatory frameworks, rather than introduce AI-specific rules. To support this, the FCA has launched the AI Lab, providing a platform for industry experimentation through initiatives such as the Supercharged Sandbox and ‘AI Live Testing’. In particular, AI Live Testing enables firms to engage in structured dialogue with the FCA while deploying AI products and solutions.
In its feedback statement on the AI Live Testing engagement paper, published on 9 September 2025, the FCA noted that respondents broadly welcomed the initiative as a constructive step toward providing regulatory confidence and helping firms move beyond pilots into live deployment.
While the FCA and BoE AI survey from 2024 found that around 75% of respondents reported using AI, the vast majority of use cases were rated as low materiality, with only a limited number being semi-autonomous. Respondents highlighted a series of challenges that have constrained wider deployment, which the FCA has acknowledged and plans to explore with firms through the AI Lab. These include the complexity of explainability and testing, risks in high-impact consumer decisions, data quality, third-party risk management, and uncertainty regarding regulatory expectations.
The FCA also received recommendations from respondents to the engagement paper, including developing standardised performance benchmarks, advancing a more mature assurance framework, and considering closer alignment with international standards such as the National Institute of Standards and Technology’s AI Risk Management Framework and Singapore’s AI Verify.
While the FCA has not yet indicated whether it will adopt these recommendations, the regulators' initiatives and responses from industry highlight the importance of AI assurance to support responsible AI adoption.
“The UK’s AI assurance roadmap and the FCA’s AI Lab reflect the strong steps being taken to build trust in AI. Together with international initiatives like Singapore’s AI Verify, these developments show assurance is becoming central to responsible adoption worldwide - and firms that embed assurance early will strengthen trust and unlock greater value from their AI investments.”
Leigh Bates
Partner, PwC UK and Global Risk AI Leader
Singapore’s AI Verify initiative has served as an early international reference point for AI governance testing, providing a framework and toolkit for companies to assess systems against stated principles and claims. In February 2025, the AI Verify Foundation and Singapore’s Infocomm Media Development Authority launched the Global AI Assurance Pilot to codify emerging norms and best practices for the technical testing of real-world generative AI applications.
PwC was directly involved, acting as the independent tester of a generative AI solution developed by Standard Chartered Bank (SCB) for its relationship managers, with findings published in a report. SCB’s tool produces personalised draft client emails by combining client data, investment preferences, and market outlooks into a coherent message. PwC’s testing assessed risks such as hallucinations, robustness, completeness, and internal compliance - using a mix of synthetic test data, human subject matter expert review, and a large language model (LLM) ‘as a judge’. This last approach proved particularly effective in scaling evaluations across many outputs but also highlighted the practical challenges of developing representative test samples and translating compliance requirements into measurable criteria.
The pilot showed that effective assurance depends on focusing on the risks that matter in context, developing realistic and adversarial test sets with domain expertise and synthetic data, and ensuring human subject matter experts shape criteria and calibrate automated evaluations. It also highlighted the potential of using LLMs as judges to scale testing.
As Leigh Bates, Partner, PwC UK and Global Risk AI Leader, noted in his contribution to the Global AI Assurance Pilot report published in May 2025, the use of LLMs as judges can approximate human judgment better than statistical methods and support ongoing monitoring. However, their effectiveness relies on careful prompt design and calibration, and they should supplement - not replace - human judgment in material use cases.
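To make the LLM-as-a-judge approach concrete, the sketch below shows one way such an evaluation could be structured in Python. It is a minimal illustration under stated assumptions rather than the pilot’s actual methodology: the call_llm stub, the rubric wording and the draft-email test cases are all hypothetical, and a production harness would need representative test data, carefully designed prompts and ongoing calibration against human subject matter experts, as the report emphasises.

```python
"""Minimal, illustrative LLM-as-a-judge harness (hypothetical sketch).

Each output of the system under test (e.g. a draft client email) is scored
against a simple rubric by a judge model, and the judge's verdicts are
compared with human subject matter expert (SME) labels to estimate how
closely the automated evaluation tracks expert judgment.
"""

from dataclasses import dataclass


@dataclass
class TestCase:
    case_id: str
    model_output: str   # output of the system under test
    human_label: str    # SME verdict: "pass" or "fail"


def call_llm(prompt: str) -> str:
    """Stand-in for a real judge-model call so the sketch runs end to end.

    In practice this would invoke the chosen LLM endpoint with the rubric
    prompt; the simple keyword check below is purely for illustration.
    """
    return "FAIL" if "guarantee" in prompt.lower() else "PASS"


RUBRIC = (
    "You are reviewing a draft client email for factual grounding and "
    "compliance with internal communication guidelines. "
    "Answer with exactly one word: PASS or FAIL.\n\nDraft:\n{output}"
)


def judge(output: str) -> str:
    """Ask the judge model for a pass/fail verdict on a single output."""
    verdict = call_llm(RUBRIC.format(output=output)).strip().upper()
    return "pass" if verdict.startswith("PASS") else "fail"


def agreement_rate(cases: list[TestCase]) -> float:
    """Share of cases where the judge agrees with the human SME label."""
    matches = sum(judge(c.model_output) == c.human_label for c in cases)
    return matches / len(cases)


if __name__ == "__main__":
    # Hypothetical calibration set: outputs already reviewed by human SMEs.
    sample = [
        TestCase("001", "Dear client, in line with your stated risk appetite ...", "pass"),
        TestCase("002", "Dear client, this fund guarantees a 20% annual return ...", "fail"),
    ]
    print(f"Judge/SME agreement: {agreement_rate(sample):.0%}")
```

Tracking judge/SME agreement on a labelled calibration set of this kind is one simple way to evidence that an automated evaluation remains aligned with expert judgment as prompts, models or test data evolve.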
The UK’s AI assurance roadmap, combined with the FCA’s AI Lab and other international initiatives such as Singapore’s AI Verify, signals a clear direction: firms will increasingly be expected to demonstrate that their AI systems are explainable, testable, and trustworthy. Ultimately, assurance is set to become a defining capability for responsible AI adoption - and those who invest early will be best placed to lead the market and maximise the AI opportunity.
Map your AI use cases – identify where assurance is most critical, particularly in high-stakes or consumer-facing decisions.
Build cross-functional teams and establish robust governance – integrate compliance, technology, risk, and business expertise to design meaningful assurance processes.
Invest in explainability and testing capabilities – combine in-house expertise with third-party assurance where appropriate.
Engage proactively with regulators and pilots – participate in initiatives like the FCA’s AI Lab or AI Verify to shape standards and frameworks.
Hugo Rousseau