Series 3 Episode 7: Reaching a tipping point on AI regulation

In this episode, host Andrew Strange discusses the evolving regulatory approach to artificial intelligence (AI), alongside PwC guests Fabrice Ciais, Director of AI, and Belinda Baber, a Manager in the FS Regulatory Insights team. With the AI Public Private Forum (AIPPF) recently publishing its final report, the regulatory direction on AI is becoming clearer. Our expert guests discuss how FS firms are using AI and how that’s likely to evolve; the actions firms should take following the AIPPF report; the challenges firms face on topics including governance and explainability, and how they can overcome them.


Transcript

Andrew Strange:

Hi everyone, and welcome back to the Risk and Regulation Rundown podcast. I'm Andrew Strange, your regular host, and in today's episode we're talking about the evolving regulatory approach to artificial intelligence. Joining me for today's discussion are Fabrice Ciais, a director of AI here at PwC, and Belinda Baber, a manager in my regulatory insights team. Now Belinda is actually joining us remotely, so apologies if the sound quality isn't perfect, but I'm sure her contributions will more than make up for it. No pressure, Belinda. Now AI has been on regulators' radars for a good few years now in the UK and globally, but we still don't really have a regulatory framework for the use of AI. Firms have so far been operating with limited clarity as they develop AI models and use cases, but some greater clarity is now emerging. The Artificial Intelligence Public-Private Forum (AIPPF), convened by the Bank of England and the FCA and made up of industry and other stakeholders, published its final report in February. This gives the industry something it has been waiting on for some time. While it's not regulatory guidance, it does give us a really clear sense of regulatory direction. Before we get into the detail of what the report covers and what it means for firms, Fabrice, why don't you start by telling us what you're seeing in the market at the moment in terms of how firms in financial services are actually using AI?

Fabrice Ciais:

Hi Andrew, nice to be here today. We actually see a lot of activity around AI, which, in a broad sense, is the use of advanced statistical and modelling techniques with large computational and data needs. We see AI used in a range of use cases across the front, middle and back office. For example, natural language processing (NLP) techniques are used to process large volumes of text and read through contracts to check their compliance, and computer vision is used to check the conformity of signatures or handle the reconciliation of invoices. We see investment banks using machine learning at scale to check the quality of the data in the reports they send to regulators every day. We see AI in the front office as well, helping hedge fund managers draw new insights from large amounts of open data and analyst reports to inform their investments and trades. We also see AI being used to know your customer better, to identify fraud, or, in the world of insurance, to expedite claims. There is a range of applications across the board.
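
To make the contract-checking example concrete, here's a minimal sketch, in Python, of the kind of compliance check Fabrice describes. The clause list, patterns and sample contract are all invented for illustration; a production system would use a trained language model rather than simple pattern matching.

```python
import re

# Hypothetical compliance checklist: each required clause is matched by a
# simple pattern. A real system would use a trained NLP model instead.
REQUIRED_CLAUSES = {
    "governing_law": r"governed by the laws of",
    "termination": r"may terminate this agreement",
    "data_protection": r"(data protection|GDPR)",
}

def check_contract(text: str) -> dict:
    """Return which required clauses appear to be present in the contract."""
    return {
        clause: bool(re.search(pattern, text, re.IGNORECASE))
        for clause, pattern in REQUIRED_CLAUSES.items()
    }

contract = ("This agreement is governed by the laws of England and Wales. "
            "Either party may terminate this agreement with 30 days' notice.")
for clause, present in check_contract(contract).items():
    print(f"{clause}: {'found' if present else 'MISSING'}")
```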

Andrew:

It feels like pretty much every firm across FS, in that case, must be using it in some way.

Fabrice:

Yeah, it's a lot. The Bank of England showed us in a survey before the pandemic that 60% of banks had at least one AI use case in production at scale. Our own research showed an acceleration of investment and of firms' AI footprints during COVID. Yes, this is really here to stay and to scale up.

Andrew:

Great, thank you. Given that industry context, Belinda, let's turn to you. Where are the UK regulators in terms of developing their approach, and what were the key regulatory messages that came out of the AIPPF report? That's a terrible acronym as well, I really need to work on that.

Belinda Baber:

That's a real tongue twister. Hi, Andrew and Fabrice. The AIPPF was formed back in October 2020, made up of regulators, technology specialists and experts from the financial sector. The aim was to share information, deepen their collective understanding of the technology and explore how regulators can support the safe adoption of AI in financial services. The very nature of an AI system, whose purpose is to process large volumes of unstructured and alternative datasets, as Fabrice was mentioning, poses unique risks and presents new challenges to banks, their customers and the financial system as a whole, so it's really important to have that multifaceted forum. The final report was the result of multiple workshops and discussions and was released back in February. As Andrew mentioned earlier, this is something industry had been waiting on for some time, and while it's not regulatory guidance, it does give a much clearer sense of regulatory direction. The regulators have taken a constructive approach. They have been very collaborative, and it's clear they are keen to keep the industry involved in the journey and do not want to hamper innovation. We have been speaking to the Bank of England's FinTech Hub about this, and they're very keen to move the agenda forward, with the expectation of publishing a discussion paper in early summer, where they hope to continue their collaborative efforts beyond industry experts. There's been a lot of discussion over the years, but we're now reaching a bit of a tipping point, with the regulators getting more specific. The report explores some of the challenges, risks and opportunities associated with the development of AI and makes a number of recommendations to firms and the regulators. The group's key findings, and the next steps on the regulatory agenda for AI, focus on themes that won't come as a surprise to those of you working in AI, or indeed in the financial sector as a whole. Firstly, they're thinking about data, which is where any AI model starts; then moving on to model risk; and then overlaying that with a fit-for-purpose governance framework. The report also comments on bias, fairness, ethics and transparency, very much in line with the global regulatory trends we're seeing. Although the report is not official guidance, it is a big step forward, and it's material firms can use in reviews of their AI systems.

Andrew:

I can certainly see from an innovation and competitiveness perspective why the UK regulator would really want to be engaged in this, I get that. Clearly, we don't operate in a bubble. What's the regulatory approach we're seeing outside the UK? What's happening in other jurisdictions, Fabrice, have you got any views?

Fabrice:

Yeah, absolutely, this is a network, and all regulators have been very busy over the last few years on this agenda. I will perhaps mention Singapore and the Monetary Authority of Singapore (MAS). They have not only been very active in the world of AI, but they have created a real consortium between the regulator, banks, the FinTech ecosystem and the wider technology space, to think not only about guidance but also about very practical ways of working: case studies around how you should govern AI in the context of credit risk or in the context of marketing, how you can encourage the development of new tools to identify bias, or really address the challenging dimensions around transparency and explainability. They are taking a very practical approach to bringing solutions to bear across the industry. We see a lot of activity in Hong Kong and Japan as well, and a big RFI, a request for information, running across all the financial services regulators in the US. This is a very active field, and it is a network as well: regulators talk and exchange with each other, and standard setters like IOSCO have started to bring some consistency of message across the board.

Belinda:

Yes, and we're also seeing an interesting step in a different direction in the EU and Chinese approaches. The EU is interesting, as it seems to be a more rules-based approach, reflecting the need to drive harmonisation across member states on a cross-sectoral basis. In April 2021, the European Commission published its proposed AI regulation, which mandates a risk-based approach and will bring a much greater focus on AI processes themselves, thinking about ex-ante certification for high-risk use cases. We are also seeing comparably prescriptive approaches in China, with the issuance of draft AI regulation by the People's Bank of China. The AI regulation there aligns China with the EU in its rules-based approach and makes it an interesting landscape to watch closely for divergence from the global trend. The Chinese regulator has also issued specific ethical standards focusing on improving human wellbeing, promoting fairness and justice, protecting privacy and security, ensuring controllability and credibility, strengthening responsibility and improving ethical literacy. The practical development of both jurisdictions' proposed legislation needs to be watched closely. Regulatory divergence is something global firms need to have on their radar, with a strong horizon-scanning capability to keep on top of the evolving environment.

Andrew:

It sounds like that regulatory approach is evolving, and actually I'm drawn back to, well, I'm going to say crises, but I'm not saying that AI is a crisis; if you think back to 2008, 2009, 2010 with the financial crisis, we saw huge amounts of action by standard setters like IOSCO, as you say, then filtering down through different jurisdictional regulators, and we all know just how much of an impact that had on regulation in financial services over the last decade. It feels like we're at a similar point, but in the AI cycle. Obviously, there's some degree of consistency across regulators, which is great. Clearly, there's going to be some divergence too, and there's always a risk that certain regulators view this as a competitive advantage and it becomes a bit of a race to the bottom. I suppose it's a very fluid situation. In terms of how firms can deal with that, easy question: how do firms deal with a fluid situation? We've got the AIPPF report, so what should firms actually be thinking about doing now? Belinda, I'll chuck that easy question to you.

Belinda:

The AIPPF report provides really good structure around data management, model risk management and overall AI governance, setting out use cases that can aid firms in their thinking. It's a really good starting point, and it might not be as daunting as you think. If you have formed a multidisciplinary team to look at all aspects of AI across the lifecycle, from the very technical through to the broader ethical considerations, as well as having an AI council in place, then you're probably in a really good place. Some of the examples of best practice the report sets out include agreeing a clear definition of AI, easier said than done; building an inventory of AI models and tools across your organisation, including third-party AI models, so you have a really clear view; understanding the key risks arising from the use of AI in your organisation and how to identify and mitigate them; devising a clear responsible AI policy consistent with the regulatory developments and guidance available; and defining owners and how ownership rolls up to a senior level. Then there's also thinking about the three lines of defence when it comes to the review and implementation of AI use cases. The really key one, I think, is the ability to stay agile as regulators become more specific and your firm increases its use of AI; that's how banks will be able to respond effectively to this challenge.
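
As an illustration of the inventory recommendation Belinda mentions, here is a minimal sketch of what a firm-wide AI model register might look like. The record fields and example entries are hypothetical, chosen to reflect the points above: ownership, risk tiering and third-party models.

```python
from dataclasses import dataclass, field

@dataclass
class AIModelRecord:
    """One entry in a firm-wide inventory of AI models and tools."""
    name: str
    business_use: str          # e.g. credit scoring, AML screening
    owner: str                 # accountable senior owner
    risk_tier: str             # e.g. "high", "medium", "low"
    third_party: bool = False  # flags externally procured models
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AIModelRecord("doc-nlp-v2", "contract compliance checks", "COO", "medium"),
    AIModelRecord("vendor-fraud-score", "fraud detection", "CRO", "high",
                  third_party=True, mitigations=["quarterly bias review"]),
]

# A simple view an AI council might ask for: all high-risk third-party models.
for record in inventory:
    if record.third_party and record.risk_tier == "high":
        print(f"{record.name}: owned by {record.owner}, "
              f"mitigations: {record.mitigations}")
```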

Andrew:

Thank you, Belinda. Yeah, so not as daunting as I thought, okay, it sounds really easy. It's almost on a par with operational resilience or financial resilience, a similar kind of AI resilience in terms of how you monitor and identify it. The list of AI activities that Fabrice started with implies it's probably quite a long list of things firms have got to get their heads around, but it's really interesting. Not daunting, Belinda says. Fabrice, what are some of the biggest challenges for firms, and the areas where they've got questions for regulators and are still struggling with some of that uncertainty around regulatory expectations?

Fabrice:

Yeah, not daunting, but there are still some grey areas and still a degree of challenge. The first question is really: what is AI? Do you have a clear definition of AI across your organisation, enough clarity on how you capture AI, and a full and complete view of what your AI activities are and the level of risk involved? We see a few questions around the governance model, and to some degree it's a little bit challenging where regulators don't give you a recipe but give you high-level principles, leaving it up to the firm to set the right governance structure in order to act within the right risk tolerance and respect the laws and regulations in force. There is some discretion around how AI governance links to the Senior Managers Regime, who should own AI across an organisation, and the degree of ownership there.

Andrew:

It is not actually an SMF prescribed responsibility yet.

Fabrice:

Yes, but there will probably be some clarity going forward from engaging with the Bank of England and other regulators; the next discussion paper will probably cover this type of subject in a little more detail. There's also the question of third parties. You could procure an AI system from a third party, or you could have AI-backed services provided by a third party where you won't have access to the model and won't know much about it, it's very black box. What is the degree of ownership between the user of the AI techniques and the producer or owner of those techniques? Making sure there is a clearer view on roles, responsibilities and ownership is probably where banks want a little more clarity. The third one is around explainability and bias management, which we've talked about a lot: how much explainability should you convey, and what's the right degree of explainability for certain use cases? A little more practical clarity from regulators on what the minimum acceptable level is could lead to easier uptake throughout the banking industry.
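
To give a flavour of the explainability techniques under discussion, here is a minimal sketch using the open-source shap library on a toy credit model. The features and data are synthetic, and feature attribution is just one of several explainability approaches; how much of this output should be conveyed, and to whom, is exactly the open question raised above.

```python
import numpy as np
import shap  # pip install shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy credit dataset: three synthetic features standing in for, say,
# income, debt ratio and years at current address.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# one common way of answering "why was this applicant declined?".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print("Feature contributions for first applicant:", shap_values)
```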

Belinda:

Yeah, to add to that, firms really need to start thinking about how they will set up their responsible AI council, if one is not already in place: getting the right people around the table so they properly understand the use cases, as well as getting the right accountability for the SMR roles as that becomes clearer across the industry. On the SMR role, firms have to start thinking about the approach, whether it's multiple figureheads or one person. Often in a global banking environment multiple figureheads might be more appropriate, because it's very hard to get the right skill set covering all those areas. But if you already have the AI council, you're in a good place; you may just need to flesh out your overall model. The UK's emerging regime is very outcomes-focused, which is good for innovation, but firms need to become comfortable with their risk appetite.

Andrew:

We're hearing that across a whole range of regulatory issues at the moment. The outcomes piece is great from a competitiveness or an innovation perspective, but actually it is really hard to deal with, because are firms building in stuff where, in five years' time, they're going to go back and say, 'oops, this wasn't right, the outcomes weren't right, but we've got five years' worth of legacy issues to deal with'? Really interesting. We touched on it briefly, but there must be some questions around the governance and accountability of that third-party risk piece. What do firms need to think about there?

Fabrice:

Yes, it's very important that you link the use of AI to your current framework for third-party procurement, and for roles, responsibilities and risk with third parties. You need to really think about the due diligence to perform before selecting your third party, working alongside them and your regulator: understand the bias that may be introduced by the use of their AI, understand the explainability techniques the third party can provide when you use their AI, and understand how you will monitor the third party's AI in production, how it will behave in terms of performance and in terms of new bias. From the get-go, you need to get all of these criteria bottomed out, measured and agreed with your third party. It is difficult, because third parties may not want to share their IP, their secret recipe, or the data they used to train their AI, but there is a real due diligence process to follow. We see financial services firms really increasing the AI knowledge within their procurement teams and, very importantly, involving their specialists, their data scientists and their engineers throughout the due diligence process, to make sure that when you have selected your third party and are ready to go, you have covered all the main areas for your governance going forward.
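
As one example of the production monitoring Fabrice describes, here is a minimal sketch of a population stability index (PSI) check, a common way to detect drift in a model's score distribution even when a vendor's model is a black box. The baseline and live scores are simulated, and the 0.2 threshold is a conventional rule of thumb, not a regulatory figure.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and live scores.

    PSI = sum((actual% - expected%) * ln(actual% / expected%)) over bins.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, size=10_000)   # score distribution agreed at onboarding
live = rng.beta(2.5, 4, size=10_000)     # scores observed in production

psi = population_stability_index(baseline, live)
print(f"PSI = {psi:.3f}",
      "-> investigate with the vendor" if psi > 0.2 else "-> stable")
```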

Andrew:

I like the idea of the secret recipe, that's slightly worrying. Thinking about the asset management space, where I have some experience with my clients, and things like the product rules, where firms who manufacture products need to gather data on where those products end up in the market: there are rules telling people they need to do it, there's no secret recipe, and a number of firms still seem to struggle with the concept. Adding in a bit more baking, I don't know where they'll get to. It's interesting. It's nice to finish on something slightly more positive. We've talked a lot about the challenges, daunting as they are or not. I'm presuming that AI can actually bring some benefits to firms and to customers too. What benefits can firms expect from AI, and how do we see the use of AI in FS evolving in the future? There's an open-ended question for you, Fabrice.

Fabrice:

Yeah, it's a great question. We already see a lot of benefits, with AI being used to make processes more efficient. I mentioned natural language processing, and I mentioned financial crime, for instance. There is a lot going on that brings more efficiency to processes and makes the experience slicker. Going forward, you mentioned consumers, and that's very important. With the use of AI, you can think about a much better experience. For example, AI can be used in contact centres. When you call with a complaint, calls can be recorded and early warnings identified, where you're explaining something that is wrong and needs intervention straightaway, and that can really help triage and expedite requests from all the consumer complaints in a very dynamic and programmatic way. You deal with the complaint here and now rather than waiting days or weeks to address it. In insurance, computer vision can really help expedite claims straightaway; the experience is much slicker and the resolution much quicker for customers, and you reduce fraud as well. AI can help introduce new products, for impaired lives for instance: the world of insurance can really tailor new products to people who wouldn't otherwise have access to some types of insurance. It's a real value proposition for the market, and it's very important in the world of vulnerable customers, where you can really tailor new products to vulnerable or new customers. There is a wealth of new products, new activity and better experiences that can be introduced with the use of AI.
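
To illustrate the contact-centre triage Fabrice describes, here is a minimal sketch that scores transcribed complaint calls for urgency. The cues and weights are invented for illustration; a real system would combine speech-to-text with a trained classifier rather than keyword matching.

```python
# Hypothetical urgency cues and weights; a production system would use a
# trained classifier on transcribed audio rather than keyword matching.
URGENCY_CUES = {
    "fraud": 5, "unauthorised": 5, "locked out": 4,
    "complaint": 2, "fee": 1, "waiting": 1,
}

def triage(transcript: str) -> str:
    """Return a routing decision for a transcribed complaint call."""
    text = transcript.lower()
    score = sum(weight for cue, weight in URGENCY_CUES.items() if cue in text)
    if score >= 5:
        return "escalate now"
    if score >= 2:
        return "same-day follow-up"
    return "standard queue"

calls = [
    "There's an unauthorised payment on my account, I think it's fraud.",
    "I've been waiting two weeks for a reply about a fee.",
]
for call in calls:
    print(triage(call), "-", call)
```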

Andrew:

Belinda, what do you think? Interestingly, it sounds like AI there is partly a solution to some of the existing regulatory challenges, but do you have any further views?

Belinda:

Yeah, I agree, Andrew. It's a really exciting space, and I'm really keen to see it evolve, particularly in the vulnerable customer area, providing that access to financial services people often need. With AI models' ability to automate certain tasks, they can bring great benefits for households, firms and the economy as a whole. For example, consumers can potentially access lower-cost and more tailored financial products and services as a result. This innovation can change the trade-off of risks and returns, and it's something firms need to get the right balance on. To touch on what you were saying before, we're not necessarily always going to get it right first time, but I think it's about working on a best-efforts basis and ensuring your thinking around it keeps evolving.

Andrew:

No, I agree. Thank you, that's great. Well, thank you both for joining us. There's a lot to reflect on in that discussion. I'm taken by the breadth of AI use that already exists across FS firms, and I'm intrigued by its use as a solution to other regulatory issues. We didn't touch on the consumer duty, but that point around access to products is equally valid there as well. The regulatory landscape seems fascinating, and I'm a regulatory geek, so I'm allowed to say that: you've got the UK doing stuff, we've got the EU and China doing stuff, we have MAS doing stuff in Singapore, and so on. It feels like a really interesting, evolving landscape that will develop over the next few years, probably at a pace which will mean firms have a challenge to keep on top of it, but at the same time it makes it really interesting and fun for us to be involved in. Thank you both for the discussion. I'm pretty sure we'll come back to this in a podcast in the next 12 months or so, maybe in an entirely different world, but thank you. To our listeners, I hope you've enjoyed this conversation. Please do subscribe to future episodes, and rate and review this series so your colleagues can find it as well, and we'll be back next month, probably focused on the FCA business plan. Thank you.
