Scaling customer-facing AI: unlocking better outcomes and Consumer Duty compliance

  • Insight
  • 18 minute read
  • December 2025

Financial services firms are increasing their investment in customer-facing artificial intelligence (AI) to streamline journeys, boost engagement, and guide customers towards better financial decisions. As these models move closer to the customer, firms must manage risks and demonstrate they’re meeting regulatory obligations around customer protection and outcomes. This becomes even more challenging as firms progress the deployment of agentic AI, which heightens existing risks and creates new ones around autonomous decision-making, explainability and control, and further tests firms’ operational resilience and security.

UK regulators have been clear they do not intend to introduce AI-specific rules, emphasising that existing regulatory frameworks already provide flexibility to innovate while managing risk, as we discussed in our recent article on AI assurance. The FCA’s communications, including its April 2025 AI Update, make clear that the Consumer Duty will be a primary lens for assessing customer-facing AI.

The interplay between AI and Consumer Duty expectations creates both opportunity and challenge. AI has the potential to improve customer support and outcomes by enabling faster, more tailored and quality-assured interactions. At the same time, the lack of prescriptive guidance on how the Consumer Duty applies to AI-enabled journeys leaves firms navigating areas of real uncertainty. Questions around fairness, explainability and outcomes monitoring become more complex and pressing as AI (including generative AI and agentic AI) is deployed at greater scale, and as interactions become more dynamic and personalised.

This article sets out the key considerations and practical steps for firms to deploy AI in a way that aligns with Consumer Duty expectations, manages risk and unlocks the technology’s potential to improve customer outcomes.

“We believe our existing frameworks like the SM&CR and Consumer Duty give us enough regulatory bite that we don’t need to write new rules for AI.” 

Jessica Rusu
FCA Chief Data, Information and Intelligence Officer, AI for growth speech

The rise of customer-facing AI

Firms across banking, insurance, asset and wealth management are looking to scale customer-facing AI. We’re seeing increased deployment across onboarding, customer service chatbots and AI-assisted agents, personalised communications and nudges (to help consumers make better decisions), complaints handling, and affordability and eligibility assessments. 

Customer-facing AI is becoming central to firms looking to take advantage of in-train regulatory initiatives, including targeted support, Open Finance-driven personalisation, and digital/non-advised mortgage channels. Participants in the FCA’s AI Live Testing programme are already exploring these use cases, with a second cohort opening soon (as the FCA announced on 3 December 2025).

As AI moves closer to the customer, firms must strengthen controls and put the Consumer Duty at the centre of AI design and oversight. 

This coincides with heightened FCA scrutiny of Consumer Duty, across areas including product design and consumer understanding (as highlighted in the FCA’s recently published Consumer Duty focus areas). Firms should therefore expect closer examination of how AI-enabled journeys deliver good customer outcomes.

Against this backdrop, firms need a clear framework for assessing how AI applications meet Consumer Duty expectations, and for using technology to support compliance. The next section sets out the key areas and practical steps to focus on.

Examples of customer-facing AI use cases currently being tested with the FCA

  • Streamlining complaints handling
  • Helping consumers make smarter spending and saving decisions
  • Supporting financial advice
  • Supporting debt resolution

Source: FCA AI Live Testing update, December 2025

Key considerations for deploying customer-facing AI

1. Designing services that meet customer needs

The Consumer Duty requires firms to ensure products and services meet the needs, characteristics and objectives of their target market. Firms should: 

  • Assess whether personalisation could unintentionally exclude or disadvantage particular groups, including vulnerable customers.

  • Require Design and Risk teams to evidence how AI-enabled journeys meet the needs of different customer groups before deployment.

  • Embed explicit triggers in AI journeys (e.g. low confidence, emotional stress, indicators of vulnerability) that automatically route customers to a human where AI-powered digital channels are unlikely to meet their needs.
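The routing triggers described above can be sketched as a simple rule set. This is a minimal, illustrative example: the `Interaction` fields, threshold values and the `should_route_to_human` helper are all hypothetical, not drawn from any regulatory standard — real deployments would calibrate these signals against their own customer base.

```python
# Hypothetical escalation rules for routing a customer from an AI channel
# to a human agent. Thresholds are illustrative, not FCA-mandated values.
from dataclasses import dataclass

@dataclass
class Interaction:
    model_confidence: float   # 0.0-1.0 confidence score from the AI model
    distress_signals: int     # e.g. sentiment/keyword hits suggesting emotional stress
    vulnerability_flag: bool  # customer flagged as potentially vulnerable

def should_route_to_human(ix: Interaction,
                          min_confidence: float = 0.75,
                          max_distress: int = 0) -> bool:
    """Return True when the AI journey should hand off to a human agent."""
    if ix.vulnerability_flag:                  # vulnerability always escalates
        return True
    if ix.model_confidence < min_confidence:   # low-confidence answers escalate
        return True
    if ix.distress_signals > max_distress:     # signs of distress escalate
        return True
    return False
```

Keeping the triggers explicit and declarative (rather than buried in the model itself) makes them easy to evidence to Design and Risk teams before deployment.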

2. Fairness and bias: managing discrimination risks

The FCA is clear that AI must not lead to discriminatory or unfair outcomes. Under the Duty, firms must act in good faith and avoid foreseeable harm - expectations that apply directly to AI model design, data choices and outputs. To meet these expectations, firms should:

  • Assess input and training data for bias and representativeness with clear remediation where gaps or distortions are identified (to address the risk of ‘overfitting’ recently highlighted by the PRA in CRO roundtable sessions).

  • Conduct fairness testing on model outputs and outcomes across groups, ensuring any differences are subject to challenge and objectively justified.
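One common starting point for the outcome testing described above is to compare approval (or other decision) rates across customer groups and flag the largest gap for challenge. The sketch below assumes a simple `(group, approved)` record format; the function names and the demographic-parity-style metric are illustrative choices, not a prescribed FCA methodology.

```python
# Illustrative fairness check: compare decision rates across groups and
# surface the widest gap so it can be challenged and, where appropriate,
# objectively justified.
from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved: bool). Returns rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(records)
    return max(rates.values()) - min(rates.values())
```

A gap above an agreed tolerance would trigger investigation - the point is not that any difference is unlawful, but that every difference is detected, challenged and documented.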

3. Ensuring quality, accuracy and reliability of outputs

Inaccurate, misleading or inconsistent AI outputs can undermine consumer understanding and cause harm. To mitigate this, firms should: 

  • Establish quality thresholds and safety controls (such as hallucination detection) for AI-generated responses, and build guardrails - for example, grounding answers in verified information and maintaining blocklists and regulatory keyword maps - to prevent incorrect, inappropriate or non-compliant outputs.

  • Review AI outputs in live environments and introduce operational controls - such as live monitoring, automatic retries, back-up options and limits on concurrent requests - so the service continues to perform reliably, even during peaks in demand.
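The guardrail-plus-retry pattern in the bullets above can be sketched as follows. Everything here is an assumption for illustration: the `BLOCKLIST` phrases, the crude substring-based grounding check, and the `respond` retry wrapper stand in for whatever generation model, verified knowledge base and fallback response a firm actually uses.

```python
# Minimal sketch of output guardrails: block non-compliant phrasing, require
# grounding in verified information, and retry with a safe fallback.
BLOCKLIST = {"guaranteed returns", "risk-free"}   # hypothetical non-compliant phrases

def grounded_in(answer, verified_facts):
    """Crude check that the answer contains at least one verified statement."""
    return any(fact.lower() in answer.lower() for fact in verified_facts)

def passes_guardrails(answer, verified_facts):
    text = answer.lower()
    if any(phrase in text for phrase in BLOCKLIST):
        return False
    return grounded_in(answer, verified_facts)

def respond(generate, verified_facts, fallback, max_attempts=3):
    """Retry generation up to max_attempts, then fall back to a safe response."""
    for _ in range(max_attempts):
        answer = generate()
        if passes_guardrails(answer, verified_facts):
            return answer
    return fallback
```

In production the grounding check would typically be a retrieval or entailment step rather than substring matching, but the control structure - validate, retry, fall back - is the same.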

4. Explainability, transparency and consumer understanding

The FCA emphasises that AI systems should be appropriately transparent and explainable. Consumer Duty rules also require firms to equip customers to make effective, timely and properly informed decisions, and to ensure communications are likely to be understood. Meeting these expectations becomes more complex when journeys or decisions are dynamically generated by AI. Firms should:

  • Be transparent about how AI is used, clearly explaining why customers are seeing certain content or decisions, what the AI tool can and cannot do, and when human support is available.
  • Test and monitor AI-driven communications to ensure they: support consumer understanding, avoid exploiting behavioural biases, and identify where customers may be confused.
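Automated checks can support the testing described above by flagging AI-drafted communications that are unlikely to be understood. The sketch below is a deliberately simple heuristic - the `JARGON` terms and the 25-word sentence threshold are hypothetical examples, not FCA guidance - but it shows how understanding checks can run on every generated message before it reaches a customer.

```python
# Illustrative consumer-understanding check: flag long sentences and
# technical jargon in AI-generated customer communications.
JARGON = {"annuitisation", "decumulation"}   # hypothetical terms to avoid

def readability_flags(message, max_sentence_words=25):
    """Return a list of features that may hinder consumer understanding."""
    flags = []
    # Naive sentence split - real systems would use a proper tokenizer.
    sentences = [s.strip()
                 for s in message.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    if any(len(s.split()) > max_sentence_words for s in sentences):
        flags.append("long_sentence")
    if any(term in message.lower() for term in JARGON):
        flags.append("jargon")
    return flags
```

Messages that raise flags can be rewritten automatically or routed for human review, creating an audit trail that communications were tested for understanding.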

5. Outcomes monitoring

Monitoring customer outcomes is one of the most challenging aspects of Consumer Duty - and AI intensifies both the opportunity and the complexity. AI can enhance MI capabilities by analysing large datasets, and identifying emerging patterns of harm or vulnerabilities. But because AI drives highly individualised and dynamic journeys, firms need more sophisticated monitoring. Firms should:

  • Leverage AI to enhance customer outcomes MI, detecting emerging harms and patterns across high-volume interactions.

  • Develop a responsible AI (RAI) dashboard that monitors post-interaction analytics (e.g. drift, hallucination and complaint metrics) and KPIs across accuracy and reliability, risk and compliance, and Consumer Duty outcomes by customer cohort.
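The cohort-level KPIs an RAI dashboard might surface can be sketched as a simple aggregation. The record schema (`cohort`, `accurate`, `hallucination`, `complaint`) and the three rates computed are assumptions for illustration - a real dashboard would draw on far richer post-interaction analytics, including drift metrics.

```python
# Illustrative aggregation of per-interaction quality signals into
# cohort-level KPIs for a responsible AI (RAI) dashboard.
from statistics import mean

def cohort_kpis(interactions):
    """interactions: dicts with 'cohort' plus 0/1 'accurate',
    'hallucination' and 'complaint' signals. Returns KPIs per cohort."""
    kpis = {}
    for cohort in {ix["cohort"] for ix in interactions}:
        rows = [ix for ix in interactions if ix["cohort"] == cohort]
        kpis[cohort] = {
            "accuracy_rate": mean(ix["accurate"] for ix in rows),
            "hallucination_rate": mean(ix["hallucination"] for ix in rows),
            "complaint_rate": mean(ix["complaint"] for ix in rows),
        }
    return kpis
```

Slicing every metric by cohort is what turns generic model monitoring into Consumer Duty outcomes monitoring: a model can look healthy in aggregate while underperforming for a specific group, such as vulnerable customers.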

6. Governance and accountability

The FCA’s AI Update underscores the need for enhanced governance, robust testing, clear accountability and strong model validation - expectations that align with the Duty’s requirement to embed customer outcomes into firms’ culture and oversight. Firms should: 

  • Strengthen AI governance, testing and validation, ensuring models are well-controlled, documented and regularly reviewed.

  • Define clear SM&CR responsibilities for AI systems and decision-making processes, with senior leaders accountable for risks, controls and customer outcomes.

  • Use outcomes MI to inform governance, reviewing data regularly at senior forums and acting promptly when harm or risks emerge.

7. Technology as a catalyst for compliance

AI can materially strengthen firms’ ability to meet Consumer Duty expectations by enhancing monitoring, controls and customer understanding. Firms should consider opportunities to:

  • Test and validate journeys: Use simulation and synthetic personas - including those representing vulnerable customers - to assess whether journeys are clear, fair and navigable.

  • Use AI to translate monitoring data into prioritised, forward-looking insights for governance forums – triaging emerging issues, simulating the impact of interventions, and supporting timely, evidence-based decisions on remediation and product/journey changes.

  • Leverage the full control stack (human-in-the-loop protocols, safety and ethical controls, operational monitoring and RAI dashboard) to embed ‘compliance-by-design’ across multiple customer-facing AI use cases.
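The synthetic-persona testing mentioned above can be sketched as a simulation loop: walk each persona through the steps of a journey and record where they would struggle. The `Persona` model, the step-complexity scores and the 0.7 adjustment for vulnerable customers are all hypothetical devices for illustration - real journey simulation would be considerably richer.

```python
# Illustrative journey simulation with synthetic personas, including
# vulnerable customers, to find steps that may not be navigable.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    vulnerable: bool
    digital_confidence: float   # 0.0-1.0, how comfortable the persona is online

def simulate_journey(persona, journey_steps):
    """journey_steps: list of (step_name, complexity 0.0-1.0).
    Returns the steps where the persona is likely to struggle."""
    struggles = []
    for step, complexity in journey_steps:
        # Hypothetical adjustment: vulnerable personas tolerate less complexity.
        threshold = persona.digital_confidence * (0.7 if persona.vulnerable else 1.0)
        if complexity > threshold:
            struggles.append(step)
    return struggles
```

Running a panel of such personas through each new or changed journey, before release, gives firms pre-deployment evidence that journeys are clear, fair and navigable for their target market.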

AI offers a significant opportunity to improve the quality, speed and relevance of customer support, while strengthening Consumer Duty compliance through better monitoring, richer MI and more consistent oversight. But the Duty sets a clear standard: firms must evidence that AI-enabled journeys deliver good outcomes in practice. This requires governance, fairness testing, transparency and outcomes monitoring to be built in from the start. 

As firms scale AI closer to the customer, now is the moment to embed Duty expectations into AI design and oversight, and to engage early with the FCA as it expands its AI Live Testing programme. Firms that do so can harness AI with confidence, delivering better experiences for customers and building greater regulatory trust.

Contact us

Leigh Bates

Partner, Global AI Trust Leader, London, PwC United Kingdom

+44 (0)7711 562381


Sajedah Karim

Partner, PwC United Kingdom

+44 (0)7483 413622


Tessa Norman

Senior Manager, PwC United Kingdom

+44 (0)7483 132856


Hugo Rousseau

Senior Manager, PwC United Kingdom

+44 (0)7484 059376

