The Fraud Cast: AI Transformation and Investigations

Video · 05/05/26 · 26:09

Transcript

Fran Marwood: So hello and thank you all for joining us again for the 11th episode of our Fraud Cast series.

My name's Fran Marwood, I'm a partner in PwC's Cyber and Forensics practice, and I'm delighted to be joined on our Fraud Cast today by a panel of experts. Together we'll be discussing AI.

It's a much discussed topic, and in particular we'll look at how AI is transforming both compliance and investigations, but also why AI is creating a new type of fraud and conduct risk that none of us have seen before.

So that's the topic, part of the AI Trust agenda, that we're going to be unpacking in a little more detail today.

And to help us do that, I'm delighted to introduce our panel members today.

I'm joined by Felicity Copeland and Neil Houston, and I'll let the panel introduce themselves.

Felicity, I'll come to you first.

Felicity Copeland: Hi. Thanks, Fran. So by way of introduction, I'm a director in our Risk Advisory practice. I primarily focus on trust in AI, but also responsible adoption.

A bit of background about me, I actually started off as an auditor, so I have a very hefty governance side to me, but I've also done a lot of tech development as well.

Fran: Great. Thanks Felicity, and Neil?

Neil Houston: Hi, Fran, delighted to be here. So I lead on AI resilience at PwC, as part of our Forensic practice. My background is all around technology-enabled investigations, from large-scale fraud and corruption to, more recently, looking at how AI transforms that fraud risk, as you said, and the challenges around investigating it.

Fran: So thanks Neil, and thanks both for those introductions. Let's start with the basics.

So to give some context, maybe I'll turn to you first, Neil: what do you mean by AI in the context of fraud prevention?

Neil: I guess a key point here is that we're talking about AI in its broader sense; it's not just about chatbots or generative AI.

You know, AI has been around for a very long time in pattern recognition, machine learning, et cetera, and has been used in the banking sector especially around fraud prevention and the screening and monitoring of high-risk transactions.

And I think one of the things we're talking about here is actually the value that organisations get from the use of AI, and how it can enable humans to perform more tasks quicker, and perhaps more accurately, than they would otherwise have been able to do.

Fran: No, thanks, Neil. And Felicity, any builds on that from your perspective?

Felicity: What I would just add to Neil's point: as he said, it's been around for years. And what we're seeing with this technology now is that it's not just about the tech itself; it's about how you embed it and how you augment it within your organisation to really drive value.

Fran: No, thank you. And I wondered if you could maybe build on that a little and spend a couple of minutes teeing up what we mean by the trust agenda in AI.

Felicity: Absolutely. So trust in AI is a concept that's becoming more and more popular as people realise how badly things can actually go wrong. What I mean by trust in AI is basically using the AI in your organisation confidently and allowing it to help you scale the value driven by AI. When we think about trust in AI, we break it down into six main areas. I won't go into all of them, but from an AI governance perspective, it's all around having the right level of governance around your AI. And what I mean by that is governance frameworks, life cycle frameworks, transparency, explainability, oversight, the right level of testing, and clear ownership of AI.

Fran: Thanks for that, Felicity. And Neil, maybe we move on now to AI-enabled fraud, and perhaps we can start with the positive side of the topic. So where are we seeing AI transforming investigations currently?

Neil: Thanks, Fran. So I think one of the challenges that investigations have traditionally had is just the large volume of complex data that exists across them, often transactional data in which you're trying to discover patterns. So what AI is enabling people to do is prioritise what they're seeing, what they're surfacing. But at the same time, we're also seeing generative AI technology really transforming how traditional e-discovery matters get performed.

So traditionally, the first-level review of documents for relevancy would be conducted by a human. What we're increasingly seeing is that first level actually being conducted through the use of AI, including at PwC, where we use Relativity aiR, which really transforms how human judgement can be deployed at scale. And I think the big change, though, Fran, is the pace at which investigations are able to go now. That adds lots and lots of value, but there are also challenges that come with it.
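To make that concrete, here is a minimal sketch of what AI-assisted first-level relevance review can look like. It is emphatically not Relativity aiR or any real product's API: `call_llm` is a hypothetical stand-in for an approved model endpoint, the prompt is illustrative, and a defensible workflow would layer human quality-control sampling on top.

```python
# Hypothetical sketch of AI-assisted first-level document review.
# Not Relativity aiR; `call_llm` is a placeholder, not a real API.
PROMPT = """You are assisting a fraud investigation.
Issue under investigation: {issue}
Document text: {doc}
Answer RELEVANT or NOT_RELEVANT, then one sentence of rationale."""

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a call to your organisation's approved model.
    return "NOT_RELEVANT - placeholder response"

def first_level_review(docs: list[str], issue: str) -> list[dict]:
    results = []
    for doc in docs:
        answer = call_llm(PROMPT.format(issue=issue, doc=doc))
        results.append({
            "doc_excerpt": doc[:80],
            "relevant": answer.strip().upper().startswith("RELEVANT"),
            "rationale": answer,  # retained so a human reviewer can audit it
        })
    return results

# Human reviewers would then sample both piles to validate recall
# before relying on the machine's cut.
review = first_level_review(["Email re: Q3 invoice approvals..."], "invoice fraud")
```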

Fran: Yeah.

Felicity: And what I would say, just linking in the trust in AI piece, is that it's all about how you scale at speed without having a major incident that hinders your progress on your AI journey, whether reputationally, financially, et cetera.

Fran: No, thanks for that. And that's a bit about the positive side. Maybe staying with you, Neil: as AI becomes more embedded in day-to-day business processes, I wonder how you're seeing that changing fraud risk.

Neil: Yeah, thanks. So I think there are three broad risks that I would really think of. The first, I think, is one that everyone will probably be able to grasp, which is the use of AI to make fraud feel more realistic, through the use of synthetic media to generate things that look real to the human eye.

And I guess I've got two examples on that. At one end, you've got consumer businesses who are seeing customers present images of problems with products, or of an issue, where actually the image itself has been generated through generative AI to distort the true perception of reality. And this is quite easy for consumers to do. Technology for manipulating images has been around for a very long time, but achieving that realistic quality used to be really, really hard.

At the other end of the scale, you're also able to use AI to generate synthetic media to impersonate people. On one hand, that can be fun and humorous, but at the same time, you can impersonate a CEO through audio or video deepfakes. That technology is becoming ever quicker for organisations to deploy, but one of the challenges they've got is that bad actors are using it to defraud organisations and extract money from them. And we are seeing clients impacted by this, and the sums of money are quite large. So that's one sort of risk that everyone should be able to understand.

I think the second, in my mind, is actually how organisations themselves are using AI as part of that prevention or detection side. But that in itself can bring risk, because the AI is choosing what it's going to surface, what it's going to prioritise for you, and at the same time, perhaps, what it's not going to show. So actually there's this dual nature: the AI that becomes the enabler, because it allows you to surface and explore the transactions, at the same time becomes a risk, because it may decide not to surface something. So, like Flick said earlier, that bit around testing becomes really important. Do you really know what these systems are doing? Are you able to explain them? Are you able to observe what they're doing? You can't just put them in and hope for the best.

And I think the third risk, which again might be a bit harder for people to fully understand, is where AI itself can do things in the organisation that might give it a benefit. So if you think of the ECCTA (the Economic Crime and Corporate Transparency Act) side of things, around Failure to Prevent Fraud: what happens if the business has made a particular decision through AI, and it's had some kind of benefit for the organisation, but perhaps the organisation can't understand why it has done something? Then you're not able to respond if, under scrutiny, that AI-enabled outcome gets challenged by a customer, a regulator or a court. And I think that's also a really important point.

So for me, it's threefold. It's how fraud is being committed, it's how we're detecting it, and it's also how AI outcomes get judged under scrutiny and how organisations can defend against that. And it's important to say that in our work, we've seen examples of all three of those.

Fran: No, that's great, Neil, thank you for that. And all three of those areas are ones that we're seeing in day-to-day practice, in the investigations work that we do. And I just wondered whether you could give some insights on which of those three areas you think businesses are least prepared for.

Neil: I think, in my mind, it's the third area: where AI has enabled an outcome and the organisation has to respond or explain why something has happened. On the surface that sounds really simple, but as AI becomes more embedded in the enterprise, it may become quite hard to unpick why something has actually happened, and that explainability, that ownership of the decision, who's ultimately accountable for it. And I think the big risk for me is that as AI becomes embedded in the organisation, in time people will perhaps not feel that they're interacting with AI. They're just interacting with systems that have AI in them behind the scenes, and those may not be as easy to unpick and explain as systems were in the past.

Fran: Yeah, thanks. Thanks for that.

Felicity: Yeah, no, I completely agree. As I said, I was an auditor when I started off in my career, so I love to be able to take a bash at working through the evidence, and explainability is really key. And what we're seeing with Gen AI in particular is that explainability can be quite difficult to actually achieve, if not almost impossible, due to the non-deterministic nature of the models, et cetera. So that's something that in my team, the Trust in AI team, we're constantly looking at: how we can assure these tools or give assurance over the frameworks that govern them. The ownership piece is really, really important, and I think that's something that's being debated even in the courts at the moment, for example around who's responsible when AI goes wrong.

Fran: I think that's a nice segue, really. So let's unpick the use of AI a little bit more, staying with you, Felicity. Lots of organisations are now relying on AI as a key part of their fraud and compliance response, and much more broadly, obviously, as you've touched on there. What do you think needs to change around the governance of those AI-based controls?

Felicity: Yeah, I mean, absolutely. So I obviously see AI governance as absolutely critical to any AI journey, as I call it. The organisations across sectors that I talk to are at varying stages of their governance journey, so to speak. But what we try to talk to them about is this concept of trust by design, trust in AI.

You need to build those governance frameworks from the very inception, or wherever you are in your AI journey, be it giving your workforce access to a Gen AI tool such as Copilot, et cetera, or organising your latest agentic workflow. You need to have the right level of governance sitting around that. And as I said, organisations are approaching this differently; they're at different points.

What we tend to see is that upfront, organisations are doing great work around the proof-of-concept stage: what does the build look like, the beginning of what I call the life cycle. Then it moves into production, and that's when I see quite a clear difference between organisations that really embrace governance and those that don't necessarily, those that see it more as a compliance piece. And it talks to exactly what you're saying. Once it's gone into production, you really need to continue to monitor it, continue to test it, continue to check that it's working as you would expect and making the decisions you would expect it to make. You can't just focus on the outcomes; you need to understand, as I think I mentioned in the introduction, what the serious consequences are of that AI failing once it's gone into production. And you can't just leave it to run on its own, which I think is quite a big shift in mentality for tech development generally. Once upon a time, you built it, tested it for that release, and it was good to go. But actually, model drift, for example, can creep in, and you can start getting answers that are not in line with what you'd expect or what you want.
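As a concrete illustration of that in-production monitoring, here is a minimal sketch of one common drift check, the population stability index, comparing recent model scores against a release-time baseline. The thresholds are rule-of-thumb conventions and the score data is synthetic; a real deployment would sit inside a proper monitoring platform with alerting.

```python
# Minimal sketch of post-production drift monitoring, as Felicity describes.
# Names here are illustrative, not any specific vendor's API.
import numpy as np

def population_stability_index(baseline: np.ndarray, recent: np.ndarray,
                               bins: int = 10) -> float:
    """Compare the distribution of recent model scores to a baseline.

    Rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is worth investigating,
    > 0.25 suggests significant drift.
    """
    # Bin edges come from the baseline so both samples share one scale.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, edges)[0] / len(recent)
    # Small floor avoids division by zero in empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# e.g. scores captured at release time vs. scores from the last week
baseline_scores = np.random.default_rng(0).beta(2, 5, 10_000)
recent_scores = np.random.default_rng(1).beta(3, 4, 2_000)
psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:
    print(f"PSI={psi:.3f}: significant drift, escalate for review")
```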

Fran: Well, thank you for that. And let's stick with that governance theme. Neil, from an investigations perspective, what does it mean when something goes wrong from a governance perspective and AI has been part of the problem?

Neil: Yeah, thanks, that's a really, really big question, right. At the heart of it, on most investigations, or classical investigations as I might call them, you might investigate people, you might look at their emails, you might look at who they've met, who they've been chatting to, et cetera, building up a picture of all the evidence that you've got around a particular matter. So the challenge with AI is that you've got another data source that you have to think of.

But as Felicity said earlier, it's quite complex, especially in the world of agentic AI, with the non-deterministic nature as mentioned. So for some systems it can be really hard to actually reproduce or understand why a particular thing has happened. But it doesn't mean that you can't still perform an investigation; it's just that you have to look at it from a different perspective. So in the same way that we might want to understand who, what, when and where, we can still understand: OK, how was the system able to be in the enterprise? What were some of the inputs to it? What were some of the outputs? In some of those black-box systems, we may not entirely know why something's happened inside the system, but we're able to observe what has happened from the interactions with it and the outputs of it.
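That input/output observability can be engineered in ahead of time. Below is a minimal, hypothetical sketch, not any specific product's API: a wrapper that records every interaction with a black-box model to an append-only log, so an investigator can later reconstruct what the system was asked and what it returned, even when its internals cannot be explained.

```python
# Sketch of input/output logging around a black-box AI system.
# `predict` is a hypothetical stand-in for whatever model you call.
import hashlib, json, time
from typing import Any, Callable

def audited(model_id: str, predict: Callable[[Any], Any],
            log_path: str = "ai_audit.jsonl") -> Callable[[Any], Any]:
    def wrapper(inputs: Any) -> Any:
        outputs = predict(inputs)
        record = {
            "ts": time.time(),
            "model_id": model_id,
            "inputs": inputs,
            "outputs": outputs,
        }
        # A digest over the record content makes later tampering
        # easier to detect during an investigation.
        record["digest"] = hashlib.sha256(
            json.dumps(record, sort_keys=True, default=str).encode()
        ).hexdigest()
        with open(log_path, "a") as f:
            f.write(json.dumps(record, default=str) + "\n")
        return outputs
    return wrapper

# Usage: scored = audited("txn-screener-v3", model.predict)(transaction)
```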

So for me it's another set of data sources, but that in itself is not new. We've been performing forensic investigations for over 20 years; the world keeps changing the data sources we need to look at, and we always have to catch up to stay relevant to the new ones out there. I think what becomes really challenging, though, is what it means for the type of issue that we might have to investigate.

So, you know, I've seen recently an organisation use AI to help generate financial journal postings that got into the general ledger. When challenged, they weren't able to understand exactly the substance of a journal, and that in itself became challenging. So they had to go back to first principles to really understand: OK, what was the journal, what made it up, what was the support? Because once the journal was in the system, they didn't have any of that traceability, that explainability, that provenance to really stand behind it.
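One way to avoid that position is to capture provenance at the moment an AI-generated entry is created. The sketch below is illustrative only; the field names and posting flow are assumptions for the example, not any particular ERP's schema.

```python
# Illustrative provenance metadata attached to an AI-generated journal
# entry at creation time, so it can be explained later under scrutiny.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class JournalProvenance:
    model_id: str                 # which AI system proposed the entry
    model_version: str
    prompt_or_inputs: str         # what the model was asked / fed
    source_documents: list[str]   # invoices, contracts, etc. backing it
    human_approver: str | None    # who signed off, if anyone
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

@dataclass
class JournalEntry:
    account: str
    amount: float
    description: str
    provenance: JournalProvenance  # kept with the entry, not bolted on later

entry = JournalEntry(
    account="4000-REV",
    amount=12_500.00,
    description="Q3 accrual proposed by AI assistant",
    provenance=JournalProvenance(
        model_id="journal-assistant", model_version="2025.09",
        prompt_or_inputs="Accrue unbilled Q3 services for client X",
        source_documents=["contract-8841.pdf", "timesheets-q3.csv"],
        human_approver="j.smith",
    ),
)
```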

So for me, really, it's more around how organisations can explain what's happened, to what degree they can explain it, and with what level of confidence. And I think that becomes a bit challenging. Sometimes people get fascinated by understanding why a particular thing happened inside the box, but actually, in the world of a governance failure, you need to look at the whole system and not just the box itself.

Fran: Now, thanks for that, Neil. That's really helpful. And I've certainly seen that in recent cases that I've been involved in, where we've seen revenue recognition containing errors that have been driven by AI. So, as you say, it becomes: can we explain how it worked and where exactly it went wrong? And I think that's going to be the real challenge that's upon us all at the moment, because they seem to be really simple questions. But as ever, we know in the world of investigations that sometimes a really simple question becomes really hard to answer. And that ability to understand and provide the evidence back, especially if you have to respond to a regulator or a court, being able to do that with certainty, is going to become really, really challenging for us all.

Neil, you often say AI failures are all about when they happen rather than if they happen. What, in your view, should organisations have in place before that happens?

Neil: I think, for me, at the heart of it is the language around AI resilience, and taking the lessons that we've already got from how organisations deal with challenges. So first off, let's think around 'prepare'. Does the organisation actually know where AI is in use in the organisation? Do they have an AI inventory? Do they have clear ownership of who owns those systems? Are they aware of the shadow AI out there?

And then, thinking about all those different systems, they all have different risk inherent in them depending on how they're being used. Are they used in core processes? Are they just side tools for the organisation? Are they exposed to end customers? So at the heart of it is really understanding what you have, and then starting to build in the language of being able to prepare for when something does go wrong.
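For illustration, here is a minimal sketch of what such an AI inventory might capture. The fields and the tiering rule are assumptions made for the example, not a standard schema; in practice this would live in a governance or asset-management tool rather than a script.

```python
# Sketch of an AI inventory with simple risk tiering, as Neil describes.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "assistive, human-in-the-loop"
    MEDIUM = "embedded in a core process"
    HIGH = "autonomous or customer-facing"

@dataclass
class AISystem:
    name: str
    owner: str                # clear, named accountability
    core_process: bool        # embedded in a core business process?
    customer_facing: bool
    autonomous: bool          # acts without human sign-off?
    sanctioned: bool = True   # False = shadow AI discovered in the estate

    @property
    def tier(self) -> RiskTier:
        # Illustrative tiering rule only; real frameworks weigh more factors.
        if self.autonomous or self.customer_facing:
            return RiskTier.HIGH
        return RiskTier.MEDIUM if self.core_process else RiskTier.LOW

inventory = [
    AISystem("copilot-rollout", owner="CIO office",
             core_process=False, customer_facing=False, autonomous=False),
    AISystem("txn-fraud-screener", owner="Head of Financial Crime",
             core_process=True, customer_facing=True, autonomous=True),
]
for system in inventory:
    print(f"{system.name}: {system.tier.name} ({system.tier.value})")
```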

Fran: And Felicity, that's very much in your world, isn't it? How does that look for compliance and risk teams?

Felicity: Yeah, I mean, you're absolutely spot on, Neil. It seems almost obvious, but having that AI tooling inventory is absolutely critical. And then what we encourage, or almost specify, is risk tiering. Because when you're designing your AI framework that sits around all of this, you need to understand what's low risk. So again, I've talked about those assistive generative AI tools which have human-in-the-loop built into them, et cetera; that's relatively low risk. And then you've got your much higher-risk, almost autonomous decision-making tooling. So assigning risk is really, really important. And then having a proportionate approach to your governance is really key, because if you under-govern, you're at risk of having one of those incidents, with the reputational, financial and regulatory scrutiny. But if you over-govern, that can be extremely laborious and can also mean that people lose trust in your governance process, so to speak.

So I think getting the balance right, the proportionality of your risk inventory, your risk tiering, et cetera, is absolutely key. I also use the term responsible adoption, because I fundamentally believe that we've got the tech and we need to put the governance in place, but having people actually understand what AI is, why we need to use it responsibly, and having that trust in it is so key. Because when they're using it, they'll make sure they're using it in the right way, or when they design a system, or when they have an idea about a process they can automate. And I think that also brings it back to the shadow AI point. You can have a great inventory, but if you don't know what shadow AI is being used in the background, it can be almost meaningless, because one of your biggest risks is the unknown.
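A small sketch of what that proportionality might look like in practice: each risk tier maps to a control set, so low-risk assistive tools are not smothered while high-risk autonomous ones get the full treatment. The control lists are illustrative assumptions only, not a prescribed framework.

```python
# Illustrative mapping of risk tier to a proportionate control set.
CONTROLS_BY_TIER = {
    "LOW": ["acceptable-use policy", "annual attestation"],
    "MEDIUM": ["named owner", "pre-release testing", "output monitoring"],
    "HIGH": ["named owner", "pre-release testing", "output monitoring",
             "human sign-off on decisions", "drift and anomaly alerts",
             "independent review", "incident tabletop exercises"],
}

def required_controls(tier: str) -> list[str]:
    """Look up the controls a system must evidence before and after go-live."""
    return CONTROLS_BY_TIER[tier]

print(required_controls("HIGH"))
```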

Fran: Yeah, let's move on. When we were chatting beforehand, we were talking about the fact that this is a really hot topic for clients at the moment, and there are lots of conversations that we're all having on that AI trust agenda. But we were saying there are a couple of questions that have come up more and more, and we thought it would be helpful to cover those. So I'll just pose those to you now as part of the session today.

Really, many organisations are moving quickly from the pilot stage of AI into the live decision-making stages. Where are you both seeing the biggest gaps around readiness? Maybe I'll go to you first, Felicity.

Felicity: Yeah, absolutely. And I think it covers off a lot of the things I've already said. It's making sure that you have that end-to-end life cycle governance framework in place, that you're not just focusing on getting something through proof of concept to production, but you're actually looking at the monitoring, testing and observability of that tool or process once it's gone live, so to speak. You almost want to be ahead of the curve; you want to move proportionately in line with your AI journey. A lot of people seem to think that AI governance stifles innovation, but actually it harnesses it and allows you to innovate confidently. The analogy I normally use is an F1 race car: it goes faster around the track with its brakes than without, and that's what I liken AI governance to. So my advice to people would be to have that full life cycle suite of governance, rather than stopping once it's gone into production, with clear accountability and ownership.

Fran: And I guess that governance brings knowledge as well, doesn't it? So if you've got silos within organisations who are doing some really great stuff, then having that governance process, if it's done right, can really increase knowledge around the business of what's going on. And that's a really good point, because AI doesn't just sit in one particular function. It's not just an IT problem. This is something the entire organisation needs knowledge and education on, in terms of how it works, and the risks are cross-functional as well.

And then, Neil, there's another question that's been coming up quite a lot around the response side of things. When an AI-driven outcome needs to be challenged, what, in your view, does good look like?

Neil: All right, thanks, Fran. So the big thing in my mind is that when anything gets challenged in an organisation, does it have the muscle memory for how it's going to respond to that moment, that crisis event? It links back to being prepared for that failure. So OK, something does get challenged: what do you do? Do you deem it to be a full-scale crisis event, or is it actually a small internal investigation? You need to establish the facts to be able to decide that. You need to know where you're going to look, who you're going to speak to, who's going to run it. So it's building that muscle memory. And I think one of the big tips I would give any organisation is to take the lessons from the journey they've already been on with cyber. Cyber issues a long time ago felt like a very technical matter, but they're now known to be a board-level issue, and people have that awareness.

At the same time, one of the things organisations did was run simulations for cyber failure. And what we're seeing mature organisations do at the moment is start to conduct AI-based failure exercises, tabletop simulations of just how the organisation would respond. And again, that comes back to that responsible use of AI and thinking about the governance, because if there is that inevitability, they just need to be prepared to respond.

Fran: Thanks for that, Neil. I mean, that's been absolutely fascinating; I've really enjoyed that conversation. We've certainly covered a lot of ground. Just before we bring things to a close, as usual with these sorts of discussions, I'd like to get your closing remarks. So I wonder, Felicity, maybe turning to you first: what are the two or three key things for our viewers to take away?

Felicity: Sure, absolutely. I think the key thing for me is to recognise that AI is moving at pace, and in order to harness the value that that pace is bringing, you need to make sure you have the right governance framework set around it in your organisation. You don't want to be reactive, or to do just bare-minimum compliance; you want to get the most out of your AI journey confidently, rather than facing regulatory scrutiny when something's gone wrong, et cetera. And what we're seeing is that the organisations that get this right, and it's not an exact science, no one knows the answer to everything at the moment, are really wrapping their arms around it. It's part of their innovation journey, and they're able to scale faster.

Fran: Thank you for that. And Neil, some closing remarks from you.

Neil: I think, quite simply, AI is changing some of the fraud risks and bringing in new fraud typologies that perhaps didn't exist before. But just because it's a threat, it doesn't mean it can't also be an enabler, helping the organisation respond to some of the forces coming against it.

Fran: So thanks for that, Neil. Let's bring things to a close. So firstly, a huge thank you to Felicity and Neil for sharing those excellent insights today and to you all for joining us. I think we can all agree that AI is going to continue to transform investigations and compliance and that fraud and error is increasingly going to involve systems and not just people. And ultimately trust in AI depends on how well your business can withstand scrutiny over the use of AI.

And a good final question to leave you with perhaps is, you know, do you know where all the AI in your business sits? And would you know how to respond to an AI driven fraud or error? They're big questions.

And please do visit our Fraud Cast web hub where you can find recordings from our previous episodes and the answers to some other big fraud and compliance questions. Please do keep an eye out for our upcoming Fraud Cast sessions and please let us know if there are any topics you'd particularly like us to cover. Thank you and we look forward to seeing you all again soon.
