
FCA sets out growth-focused strategy
The FCA has set out its five-year strategy for 2025-30, outlining a vision to deepen trust, rebalance risk, support growth and improve lives.
Historically, both the FCA and the Bank of England (BoE) have applied a technology-agnostic approach to regulation. However, with the Government’s emphasis on growth and innovation, the regulators are adapting their approach in line with their objectives, including promoting international competitiveness and growth.
As AI becomes one of the key technologies firms are looking to scale, it is crucial that they understand and engage with this evolving approach. So what does the regulators’ pivot towards a tech-positive approach mean in practice for firms looking to leverage AI?
The UK financial services regulators have made clear they do not intend to introduce new rules specifically for AI, emphasising that existing regulatory frameworks already enable innovation while managing associated risks. This position was reiterated by FCA Chief Executive Nikhil Rathi in a letter to the Prime Minister, Chancellor, and Secretary of State in January 2025 and in the FCA’s 2025/26 annual work programme.
The FCA’s decision to leverage existing regulation is based on the broad coverage of current rules, industry feedback, and the nature of AI use in the sector:
Existing technology-agnostic rules apply to AI: AI already falls under multiple existing regulatory frameworks, including the Senior Managers Regime, Consumer Duty, and expectations related to model and third party risk management and operational resilience.
Feedback from industry: The industry - including through feedback to DP5/22 - has generally supported the FCA’s approach, favouring clarification on how current rules apply specifically to AI, rather than introducing new regulations.
AI use in the industry: An FCA and BoE survey published on 21 November 2024 found that 75% of firms already use AI, up from 58% in 2022. However, usage varies significantly in scale and impact, with 62% rating their AI use cases as low materiality, and 56% having ten or fewer AI applications.
Although the FCA intends to rely on existing frameworks, it continues to consider regulatory changes in light of the rapid growth and expanding use of AI, which may introduce new or heightened risks. Jessica Rusu, Chief Data, Information and Intelligence Officer, noted in her foreword to the FCA’s April 2025 AI update that advances in technology ‘may require modified approaches to firm risk management and governance’ and that ‘regulation will have to adapt as well’.
The BoE is aligned with the FCA’s regulatory approach and closely monitors AI developments to ensure that regulation remains effective in managing associated risks. In a letter to the Government in April 2024, Deputy Governors Sam Woods and Sarah Breeden noted that the BoE ‘may issue guidance or use other policy tools to clarify how the existing rules and relevant regulatory expectations apply’ if needed, to support firms’ understanding of the regulatory framework.
In the medium term, regulators may therefore consider making changes to regulation or issuing AI-specific guidance if required. Areas targeted for clarification could include the Senior Managers Regime (currently under review, with next steps expected to be announced soon), the Consumer Duty and data regulation, as well as expectations related to the testing, validation and explainability of AI models.
In the meantime, as part of its increasingly tech-positive approach, the FCA has launched new initiatives to support firms as they experiment with, develop, and deploy AI solutions.
“The FCA’s strategic shift toward a tech-positive approach is a timely and welcome development. As AI progresses from experimentation to scaled deployment, the regulator’s active engagement with industry will offer greater confidence to firms seeking to innovate responsibly.”
Leigh Bates
Partner, PwC United Kingdom
One of the clearest signals of the FCA’s tech-positive approach is the launch of targeted initiatives to help firms test and scale AI responsibly. These efforts sit under the FCA’s AI Lab, announced in October 2024:
AI Spotlight: A repository of practical AI applications, to which PwC UK has contributed.
AI Input Zone: An online platform for industry insights on AI applications and adoption barriers.
Supercharged Sandbox - accelerating early-stage AI innovation: The FCA is enhancing the existing Digital Sandbox infrastructure, providing firms with access to Nvidia’s AI Enterprise software suite.
AI Live Testing - supporting deployment of market-ready applications: This initiative will provide firms with regulatory support and technical dialogue with the FCA for market-ready applications.
AI Live Testing, in particular, is a distinctive new service that captures the FCA’s approach to AI: engaging with firms to provide greater regulatory certainty when they deploy AI-powered products and solutions. It will enable structured dialogue on both regulatory and technical issues - particularly valuable for firms looking to scale AI or to operate in higher-risk areas, such as consumer-facing applications.
Discussions may include output-driven validation methods, model robustness, and the mitigation of bias or unintended outcomes. Firms participating in the initiative may gain greater confidence in their models’ performance and ability to comply with existing regulations.
Internationally, other regulators such as the Monetary Authority of Singapore (MAS) have also taken proactive steps in this space. MAS’s Veritas initiative, for example, focuses on establishing principles and methodologies to promote fairness, ethics, accountability, and transparency in the use of AI and data analytics.
The FCA’s evolving approach to AI offers a tangible example of its tech-positive regulatory stance - prioritising proactive engagement and experimentation within existing, technology-agnostic frameworks. This approach is likely to influence how the FCA addresses other emerging technologies and related areas, setting a precedent for regulatory expectations.
Firms should consider how this direction might influence future regulatory engagement and developments, especially where innovation is encouraged within current rules. For example, the outcomes of the Advice Guidance Boundary Review and the Mortgage Rule Review could open up opportunities to apply AI to new use cases.
Continued developments in AI could also prompt the regulators to intervene more directly to address emerging risks or protect market integrity. At the same time, what ‘tech-positive’ means may vary across different areas: in domains such as digital identity or Open Finance, regulators may move more swiftly to introduce guidance or regulatory change. Given the pace of technological advancement and an evolving regulatory landscape, firms must remain vigilant and agile to harness AI responsibly and adapt to change.
In response, firms should consider the following actions:
Assess readiness and engage with regulators: Evaluate suitability for participation in the Supercharged Sandbox or AI Live Testing. Provide regulatory feedback to help shape future policies.
Ensure AI deployment aligns with existing regulations: Align AI initiatives with existing frameworks, including the Consumer Duty, the Senior Managers Regime and model risk requirements.
Strengthen AI governance, testing and validation: Enhance governance structures for AI deployment, review internal policies, and establish robust testing and model validation processes.
Adopt iterative, agile AI deployment: Embrace agile principles by starting small, rapidly iterating, testing and continuously incorporating feedback.