Video transcript: AI assurance and live testing


Transcript

Hugo Rousseau: The Department for Science, Innovation and Technology published its Trusted Third-Party AI Assurance Roadmap on 3 September. The roadmap sits within the Government's AI Opportunities Action Plan, released earlier this year, which highlighted the role of the AI assurance ecosystem in increasing trust in and adoption of AI.

The Government’s next steps include convening a UK consortium to develop the building blocks of an AI assurance profession, starting with a voluntary professional code of ethics and a skills and competencies framework. It will also work on mapping the information access requirements of AI assurance providers.

The Government is also establishing the AI Assurance Innovation Fund, which will provide £11 million of funding to support the development of innovative assurance mechanisms.

This is aligned with the Government’s approach of not introducing major new AI legislation at this stage and focusing instead on providing ways to measure, evaluate and communicate the trustworthiness of AI systems, with the aim of increasing confidence, supporting adoption and driving economic growth. While nothing in this approach is specific to financial services, it remains highly relevant for the sector, where assurance over AI systems is already a key area of interest for regulators both in the UK and globally.

The FCA published a Feedback Statement on 9 September summarising responses to its Engagement Paper on AI Live Testing. AI Live Testing is a new initiative that will provide selected firms with regulatory and technical dialogue on market-ready AI products and applications. Respondents broadly welcomed the initiative, highlighting the value of real-world insights in moving beyond pilots and increasing ‘regulatory certainty’.

Respondents also highlighted several challenges slowing innovation, including the complexity of explainability, data quality, third-party risk management, and testing and validation methods. These themes closely align with the issues we are seeing firms grapple with in practice, particularly around ensuring robust governance and demonstrating AI outcomes in a transparent and reliable way.

Respondents also made recommendations to the FCA. These include:

  • Developing standardised AI performance benchmarks
  • Clarifying regulatory expectations for bias mitigation and fairness
  • Integrating AI Live Testing with international efforts such as the US NIST AI Risk Management Framework or Singapore’s AI Verify to support global interoperability. PwC has been directly involved in AI Verify, including in testing a genAI application developed by a global bank, as part of international efforts to build robust governance and assurance approaches for AI systems in financial services.

The FCA has not yet decided whether it will take forward any of these recommendations. The regulator will focus first on pressing ahead with AI Live Testing, with the first cohort due to launch in October. Two cohorts are planned, each comprising around five to ten participating firms.

Beyond considering participation in AI Live Testing and the FCA’s AI Lab, firms should also ensure their AI development and deployment align with existing regulatory expectations. Examples include the Senior Managers Regime and the Consumer Duty, which the FCA highlighted in its article published on 9 September.
