Monika Gorska: In today's session we will be covering the EU AI Act. Over to you, Stephanie and Michelle.
Stephanie Baker: Thanks, Monika, and good morning, everyone. It's a real pleasure to have you with us here today. The sun is shining in London—I hope it's shining wherever you are as well—and hopefully it's the start of a good day and a good session. As Monika said, today we're looking at the EU AI Act, but firstly, just a small introduction to our team. So today you've got myself and my colleague Michelle presenting to you, and we're both part of PwC's digital and data team.
Our team advises clients on digital laws and regulations, including those on personal data, and we've got a strong track record of working with governments, with regulators, and with businesses to try to accelerate their thinking around responsible use of data and of technology.
We've got quite a diverse client base, lots of whom operate in multiple jurisdictions worldwide, and we're experts across numerous fields and sectors, including the UK and EU GDPR, direct and digital marketing, commercial contracts, data subject rights handling, and also the focus of today's session: AI.
So, what we're going to cover today: firstly, we'll start off with a brief introduction to the EU AI Act. We'll then go into some of the key concepts under the Act, so you've got a better idea of some of the terms that we're using. We'll then look at prohibited AI and AI literacy, and then we'll try to bring some of the topics that we've discussed to life a little bit more through a practical scenario. And then finally, we'll finish up with a few key takeaways.
But firstly, kicking off with an introduction to the EU AI Act, starting off with the purpose and goals there on the top left. The EU AI Act is designed with several key objectives in mind. So firstly, it aims to ensure that AI systems that are placed on the EU market are safe and respect fundamental rights, and that's crucial because AI systems can have a really profound impact on individuals and on society, as we've already seen. So, ensuring that they're used safely and ethically is going to be paramount.
Secondly, the Act seeks to provide some legal certainty, help facilitate investment and innovation in AI by establishing clear regulations. What the EU is trying to do is create a stable environment where businesses can confidently invest in AI technology without fearing that there's going to be sudden regulatory changes.
Next, the Act aims to facilitate the development of a single market for lawful, safe, and trustworthy AI systems. This is intended to harmonize regulations across the EU and again make it a little bit easier for AI systems to be developed, to be deployed, and also used across different EU member states without worrying about a patchwork of different national regulations. And then finally, the AI Act incorporates a risk-based approach, which speaks again to that broader aim of trying to encourage innovation without stifling it.
Moving on to scope—who and what is affected. The Act regulates AI systems and general-purpose AI models. There is a difference between the two, which we'll explore in a little more detail later on. It's also what we call a horizontal regulation, which means it applies across all industry sectors, ensuring that no sector is left unregulated, which is really important given the pervasive nature of AI technology.
Importantly, the Act applies to all AI systems operating on the EU market or that have an impact within the EU, and that's the case even if the system itself is based abroad. That extraterritorial reach ensures that any AI system affecting EU citizens or markets must comply with the Act. And then finally, the Act also covers what we call the entire AI value chain, and that's essentially all the parties that are involved in an AI system. So those obligations predominantly fall on providers, or the creators of AI systems, and also the deployers, or the users of AI systems, but we'll also have a look at some of the other roles that organizations can play a little later on. And that approach helps ensure that both the development and the application of AI systems are regulated.
Next, moving on to governance and enforcement on the bottom left of the slide. The governance and enforcement of the Act will occur at both a European and a national level. At the European level, you've got the European AI Office, which oversees implementation and also enforcement of the Act, and it's also responsible for supervising the most powerful AI models, which are those general-purpose AI models I mentioned earlier. Additionally, there's an independent panel of scientific experts who also provide guidance on those models.
And then at the national level, market surveillance authorities will be responsible for monitoring compliance within their own countries. So, each member state of the EU has to designate and empower a market surveillance authority by the 2nd of August this year, and those authorities will have powers to investigate and to enforce compliance with the Act, and that includes in relation to prohibited and high-risk AI systems, which again we'll come on to in a little more detail shortly.
I'll now pass over to Michelle.
Michelle Lee: Thanks, Stephanie. So now you have a general idea of what the purpose of the AI Act is and who the authorities are, so why should you care about it? Well, we all know that penalties are very strong deterrents for organizations, so now I'm going to dive into what the penalties are for non-compliance here. The structure is quite similar to the GDPR, if you're familiar with it, in the sense that it's tiered. So, fines can go up to 35 million euros or 7% of global annual turnover—whichever is higher—if you use banned AI applications. You then have up to 3% for violations of other obligations under the Act, and, interestingly, up to 1.5% if you supply incorrect information to the authorities.
But if you are an SME or a startup, this applies a little bit differently for you. The fines for the above are subject to the same maximum percentage or amount, but whichever is lower in this case. One thing to also note is that while enforcement of the Act sits with the national authorities—so, as Stephanie mentioned, the market surveillance authorities—individuals who are affected by an AI system also have a direct right of action. So, for example, they can bring a claim against you if they have suffered a privacy violation or feel they have been discriminated against.
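To put rough numbers on that tiered structure, here's a minimal, purely illustrative sketch in Python—not legal advice—of how the cap on a fine for using a banned AI application could be worked out from the figures just mentioned: 35 million euros or 7% of global annual turnover, taking the higher of the two for large organizations and the lower for SMEs and startups.

```python
def fine_cap_eur(global_turnover_eur: float, is_sme: bool,
                 fixed_cap_eur: float = 35_000_000, pct_cap: float = 0.07) -> float:
    """Illustrative cap on a fine for using a banned AI application.

    Large undertakings: the higher of the fixed amount and the percentage of
    worldwide annual turnover; SMEs and startups: the lower of the two.
    Figures are as quoted in the session; this is not legal advice.
    """
    turnover_based = global_turnover_eur * pct_cap
    return min(fixed_cap_eur, turnover_based) if is_sme else max(fixed_cap_eur, turnover_based)

# A large group with EUR 2bn turnover: 7% (~EUR 140m) exceeds EUR 35m, so the cap is ~EUR 140m.
print(fine_cap_eur(2_000_000_000, is_sme=False))
# A startup with EUR 10m turnover: 7% (~EUR 0.7m) is below EUR 35m, so the cap is ~EUR 0.7m.
print(fine_cap_eur(10_000_000, is_sme=True))
```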
And now we can move on to the next slide. Thanks, Stephanie. So, we're taking a look at the implementation timeline here, so you know what to look out for and when your obligations kick in. Some of you may already be aware that the provisions on prohibited AI and AI literacy have applied since the 2nd of February this year, so you should already be in compliance with these requirements. We'll dive deeper into these later in the session.
And then moving on to August this year, you have the obligations on general-purpose AI models, and at that point you also have the enforcement provisions and the establishment of the authorities. Some clients have asked us, does this mean there's an enforcement gap—so technically, right now, is there anyone enforcing the prohibitions on unacceptable AI and the AI literacy requirements? Well, the answer is that you should already be complying, because, as I mentioned, the obligations can be privately enforced. So, remember, individuals have rights of action too.
And then we move on to February 2026. This is an interesting time because this is when the EU Commission is going to give you guidelines on high-risk AI systems and also on post-market monitoring. This will be helpful because they'll give you examples of what is considered a high-risk AI system and what isn't, and that's good for identification, because in August 2026—if we move a little to the right—your obligations on high-risk AI systems will kick in as well. These are the systems currently listed in Annex 3, which we'll also go through a bit later. Also in August 2026, your transparency obligations will kick in. So, let's say you're using something that generates AI content, or you have an AI system that interacts directly with individuals—you need to let them know.
And in 2027, which seems like a long time but not really, in August we have rules for high-risk products and safety components of products. So, I think the focus here is on safety as a priority. So, imagine if you're using a self-driving car, it's going to use an AI system—you want to make sure that it's safe before consumers are willing to step into the car.
And lastly, by the end of 2030, that's when the rules kick in for AI systems that are components of large-scale IT systems.
All right, so now we have a little warm up. Now that you have a little initial understanding of the EU AI Act, you will see a little Slido pop up on your screen in a bit.
So here we have a true or false question. The statement here is: the AI Act will apply to a US-based company offering an AI system to EU customers. So, you can vote true or false, and we'll give everyone a little bit of time.
Awesome, great. A majority of you—in fact, all of you—answered true, which is right. The Act itself is market-based legislation, which means that it will apply to any company that's offering an AI system to customers in the EU. So equally, let's say you're an EU company and you're placing your AI system only on the market in the US or Asia—then the Act wouldn't apply to you.
So now I'm going to turn to the key concepts of the AI Act—what the AI Act is and also the different roles organizations can play. So, passing it back to Stephanie.
Stephanie: Thanks Michelle. So, we've already said quite a few times now that we'll come back to certain concepts and certain terms and tell you a little bit more about what they mean, and that's what we're going to do in this section. So, starting off with the different roles that organizations can play. Firstly, the AI Act will apply to both public and private actors both inside and outside the EU as long as that AI system is placed on the EU market or affects people in the EU. It'll concern both providers, for example a developer of a CV screening tool, and also users of high-risk AI systems. In that context it could be a bank potentially using that CV screening tool as part of their recruitment processes. Importantly, it does not apply to private non-professional users. In general, the AI Act will distinguish between the following roles.
Firstly, providers. That's any person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark. Couple of things to draw out there. Placing on the market refers only to the EU market and that means being the first to make it available on that market. Putting into service also only refers to the EU, so that means the initial supply of an AI system for first use directly to a deployer or for own use by the provider. In both cases, the activities presumably must have been directed towards the EU, but if there's a kind of non-intentional spillover into the EU market, that won't necessarily be sufficient. It also doesn't matter whether the AI system is offered for a fee or free of charge.
What still is not super clear is the term development and what actions are specifically covered by that term, but at the moment what we're seeing is generally quite a broad applicability to that term.
Next is deployer—deployer is any person, public authority, agency or other body using an AI system under its authority except for where that AI system is used in the course of a personal non-professional activity. That means that there needs to be a certain degree of control for that criterion to be met.
Next one is importers. That's any person established in the EU that places on the market or puts into service an AI system that bears the name or the trademark of a natural or legal person established outside the union. And then finally, distributors. Any person in the supply chain other than the provider or the importer that makes an AI system available on the union market without affecting its properties.
What's key to note here is that this role concept isn't necessarily going to be fixed, and there might be situations where a distributor, for example, is suddenly considered a provider and would then need to comply with all of the provider obligations. So, it's really important that you're clear on those distinctions and clear on which role your organization may play.
A term that you have also heard us using a lot in this session already is AI system, and it's really important to note that the AI Act doesn't apply to all systems but only to those systems that fulfil the definition in Article 3(1) of the Act. The definition's just up at the top of the slide; it's quite wordy, so we can break it down a little bit.
Firstly, we've pulled out varying levels of autonomy and may exhibit adaptiveness. What does that mean? We're looking for some degree of independence of action from human involvement, and a capability of the system to operate without human intervention. We might also see the AI having self-learning capabilities, so it can change while it's in use. On the right-hand side here, we've actually got a real-life example—we've used a spam filter to try to bring some of these points to life a little bit more. So, for that first criterion, as for a spam filter, you can see that it operates without human involvement, and it also refines itself through feedback.
Next one is explicit or implicit objectives. What we're looking for there are objectives that are explicitly stated by humans or that are implicit in the tasks and the data. The objectives may also be different from the intended purpose of the system. Back to our spam filter example, obviously the objective there is for the filter to identify spam email and other similar messages.
Next one is infers how to generate outputs. What we're looking for there are AI systems that can create outputs and derive models or algorithms from input or data through techniques like machine learning or logic and knowledge-based approaches. Importantly, that inference needs to go beyond basic data processing, so it should enable the learning or reasoning or modelling of the system. Again, for our spam filter, the filter itself is taught to recognize messages by seeing lots of examples of spam emails, and hopefully it's learning patterns in terms of how to distinguish them.
And then the last one we've got there; we're looking for an influence on the physical or virtual environment. That's going to be the context in which the system operates. And then again with our spam filter example, hopefully the influence that we're seeing is fewer spam emails in our inbox.
Essentially, the definition of an AI system could encompass quite a wide range of systems. Determining whether a software system is going to be an AI system really needs to be done on a case-by-case basis and also based on the specific architecture and also the functionality of that system.
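To make the difference between an AI system and a simple rule-based tool a little more tangible, here's a minimal, purely illustrative Python sketch of the spam-filter example (assuming scikit-learn is available; the messages and labels are made-up placeholders). The point is that the filter infers its own decision rule from labelled examples rather than following hand-written if-then logic, which is exactly the "infers how to generate outputs" criterion described above.

```python
# Minimal sketch: the spam filter infers its rule (spam / not spam) by learning
# patterns from labelled examples, rather than following hand-written if-then rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy, made-up training data standing in for "lots of examples of spam emails"
messages = [
    "WIN a FREE prize now, click here",       # spam
    "Limited offer: claim your reward",       # spam
    "Agenda for tomorrow's project meeting",  # not spam
    "Please review the attached report",      # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)   # learn a vocabulary from the examples
model = MultinomialNB().fit(features, labels)   # the "learning" step

# The output influences the (virtual) environment: fewer spam emails reach the inbox.
new_message = ["Claim your FREE prize now"]
print(model.predict(vectorizer.transform(new_message)))  # expected: [1], i.e. spam
```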
Then importantly, there are differences between AI systems and general-purpose AI models, which are another key concept under the Act. So how do they differ from AI systems? They're sometimes referred to as foundation models, and they're characterized generally by their use as a pre-trained model for AI systems, with the AI systems being a bit more specialized. For example, you could have a single general-purpose AI model for language processing, which could then be used as a foundation model for a number of other AI systems. It could be used for chatbots, for ad generation, or for translation. Essentially, general-purpose AI models form the basis of a range of downstream AI systems. So, what that means is that the Act really needs to ensure that those general-purpose AI models are safe and trustworthy if they're then informing all of those downstream systems.
For our formal definition, again we're looking at Article 3. Again, it's quite detailed, so we'll break it down into some key components. Firstly, we've got a model that's trained with a large amount of data, so we're looking for a vast data set from various sources that will include a diverse array of information. We're looking for data that will help the AI learn patterns, relationships, and knowledge that can then be applied to different situations. Again, we've got a bit of a practical example on the right-hand side here. We've used OpenAI's GPT-4, so for that first criterion we know that it has been trained on a diverse and extensive data set that includes books, websites, and other written materials.
Next one that we've got there is significant generality. What we're looking for is AI that can perform well across a broad range of tasks and domains. We're also looking for an ability to adapt to different contexts and hopefully provide useful outputs in those different scenarios. Again, with GPT-4 as an example, we know it can answer questions, it can write essays, it can generate creative content, and it can also translate languages. So, there we see it performing across a broad range of tasks.
And then the third one we have is integrated into a variety of downstream systems. We're looking for a model that can be embedded into different software and different applications to enhance their functionality, and essentially it can be used as a component of other products and of other services. Again, our GPT-4 example, we know it's used in many other applications, it's used in other chat bots, it's used for virtual assistants, it can also be used in content creation tools and also educational platforms.
One thing to note is that these general-purpose AI models are regulated slightly differently under the AI Act. While AI systems are regulated in that tiered approach based on the risk, which Michelle is going to come to very shortly, general purpose AI models have a different compliance regime with even more obligations applying to general purpose AI models which are considered to have systemic risk. That, for example, could be models that are very capable, or which could have a really significant impact on the market. For example, providers of general-purpose AI models need to document technical information about their models, and they also need to make that available to their downstream providers. They also need to have a policy in place to comply with copyright law in the EU and they also need to maintain a summary about the content that's been used to train the model. In addition, if you're a provider of a general-purpose AI model that poses a systemic risk, then those models need to be notified to the European Commission, and providers also need to assess and mitigate any risks stemming from those models, report serious incidents and also ensure adequate cyber security of those models.
Right, we'll come to prohibited AI now and I'll pass back over to Michelle.
Michelle: Thanks, Stephanie. So, you just mentioned things like prohibited AI, high-risk AI, so we're going to go through the different tier levels. The AI Act follows a risk-based approach. What this means is that the more significant the risk is, the more obligations you have. What we're seeing right now is most of our clients are already carrying out risk assessments to know which bucket their AI system falls under, so they know what to prepare for.
First, let's take a look at prohibited AI systems. This means that these AI systems are banned outright because they involve harmful use of AI that would contravene EU values. For example, they violate fundamental rights like your right to privacy, freedom of expression, etc. After that, we're going to go into further details of what this could look like.
On the right side, we have high-risk AI systems. One thing to note is that the use cases are listed under Annex 3, which we'll talk about as well, but that doesn't mean that just because an AI system is listed under there it's automatically high risk. For it to be high risk, you need to conduct a test to see whether it poses a significant risk of harm to people's health, safety, or their fundamental rights. If you think your AI system doesn't pose a significant risk, you need to notify the relevant authority, and then they have three months to object.
Some clients ask us, "What do we do while we wait?" You can launch your AI system while waiting, but there's the risk that you might be penalized if you misclassify it. Whether or not you launch it during the three months waiting period would be entirely dependent on your risk appetite.
What happens if it's a high-risk AI system? If it's high risk, then you have to comply with certain mandatory requirements and also carry out a conformity assessment. A very quick overview of what a conformity assessment is: essentially, it's a process that demonstrates whether you have fulfilled the consumer protection and integrity requirements and, if not, what measures you can take to remediate that. Some of these assessments need to be performed by an independent third party—for example, if you're using the AI system as a product safety component—but in most cases, a self-assessment will likely be sufficient.
Moving on to limited-risk AI systems. Here, if you're using a limited-risk AI system, you're subject to transparency obligations. What this means is that you need to let individuals know they are interacting with an AI system so that they can make informed decisions themselves. For example, if you have a chatbot, transparency requires letting users know that they're speaking to an AI-powered machine.
Lastly, we have low-risk AI systems. This is what we call the happy bucket because there's very little to no perceived risk here, so there are no formal requirements stipulated by the Act. Typically, there's no personal data involved or there are no predictions that influence individuals. Examples include what Stephanie mentioned, like a spam filter, or an AI-enabled computer game, or even using AI for inventory management within your organization. The good news here is that the EU Commission thinks that most AI systems will fall under this category. That's good news for our clients.
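To tie the four tiers together, here's a heavily simplified, purely illustrative Python sketch of a first-pass triage along the lines just described. The flags and the order in which they're checked are illustrative assumptions only—the Act's actual classification turns on a proper legal and risk assessment, not a handful of booleans.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright"
    HIGH = "mandatory requirements + conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no formal requirements under the Act"

def triage(use_case: dict) -> RiskTier:
    """Very rough first-pass triage mirroring the tiers discussed above.

    Illustrative only; a real classification needs a proper legal assessment.
    """
    if use_case.get("prohibited_practice"):        # e.g. manipulative techniques
        return RiskTier.PROHIBITED
    if use_case.get("annex_iii_use") and use_case.get("significant_risk"):
        return RiskTier.HIGH                       # Annex III use case posing a significant risk
    if use_case.get("interacts_with_people") or use_case.get("generates_content"):
        return RiskTier.LIMITED                    # tell people they're dealing with AI
    return RiskTier.MINIMAL

# e.g. an Annex III employment use case assessed as posing a significant risk:
print(triage({"annex_iii_use": True, "significant_risk": True}))  # RiskTier.HIGH
```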
Moving on to the next slide, here we have examples of prohibited AI systems. A gentle reminder: these are already in force, so please do make sure you're not using any of these. We've set out some of the examples here, but if you would like more detailed examples, the EU Commission also has guidelines that you can refer to. A quick reminder here—although it's pretty obvious—that there must be an AI system involved for this one. For example, if you're carrying out emotion recognition using a tool with simple if-then rules and there's no pattern recognition, then it's not covered, because there's no AI system.
To quickly go through some of the examples: the first one here is AI systems that use manipulative techniques. For example, an AI chatbot that's impersonating your friend and telling you to do things you normally wouldn't do—that is banned. We also have biometric categorization which infers your sensitive attributes. An example would be trying to deduce someone's race based on their voice or someone's religion based on their tattoos.
Next, we have emotion recognition specifically in the workplace and educational institutions. In this case, you're not allowed to use webcam systems to track whether your employees are happy, angry, or sad that day. You also can't do untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases—some tech companies have tried this—that's not allowed either.
Moving on, we have a little pop quiz here. You'll have a Slido on your screen soon, and what we're trying to do here is see if you can identify which of the below is a prohibited AI system. The first option is an AI system used by a manufacturing company to monitor workplace safety and hazards. The second is an AI system used to monitor employees' behaviour and traits so you can allocate tasks accordingly. The third is for a bank to evaluate your creditworthiness—for example, looking at bank transactions, records, and employment history to check whether they want to give you a loan. The last one is an AI system used by an event planner to scan attendees' facial features to deduce their racial origins and then identify potential dietary requirements.
We'll give everyone a little bit of time to identify which of these is a prohibited AI system. The right answer here, which we'll reveal in the next slide, is number four. Most of you picked this, and the reason it's number four is because there's biometric data involved—facial features—and you're trying to infer sensitive attributes, in this case racial origin. I understand some of you also picked option two. This is not prohibited, because the prohibition only applies when it relates to employees' emotions, and in this case it's behaviour and traits, so it doesn't fall under prohibited AI here.
Moving on to the next slide, here we're going to briefly talk about examples of high-risk AI systems. One thing to note is that these provisions aren't enforced now; they're going to be enforced in August 2026. But we thought it'd be important to highlight this because AI development typically takes time, and it helps to inform your business strategy if you know at this point what is high risk and what isn't. One thing to note also: this list is set out in Annex 3, but it's not a once-and-for-all list. The EU Commission can review and propose amendments, so this is essentially a live list.
What is considered high risk doesn't just depend on what your AI system can do; it also depends on what you're doing with the AI system. You would see that most of these purposes and uses are related to public functions, but some of our clients have local authorities as their respective clients as well, so some of these use cases might be more relevant to you. Again, just because the use cases are listed here, it doesn't mean that it's automatically high risk. You need to do the risk assessment to see whether your AI system poses a significant risk or not.
Taking a quick overview here: the first one, non-banned biometrics, includes examples like remote biometric identification systems. For example, your bank app wants to identify whether you are who you say you are. The reason why it's high risk here is because the risk of discrimination is pretty high. We also have education and vocational training—here, you're trying to assess what type and level of education you need to provide an individual. The next one is quite important: it relates to employment and workers' management. You're trying to use the AI system to assess whether you should give an employee a promotion, terminate a contract, allocate tasks (which is one of the examples we had earlier), and monitor their performance.
The next few are quite relevant to law enforcement. For example, law enforcement might want to use AI systems to assess whether evidence is reliable or not, or in migration cases, to check whether they want to provide you a visa or assess your complaints relating to your eligibility.
Stephanie: Thanks, Michelle. So, we're moving on to AI literacy now. As of the 2nd of February, Article 4 of the Act is also now in force, and that relates to AI literacy. The term itself seems quite self-explanatory, but the full definition of AI literacy is in Article 3 of the Act. It means skills, knowledge, and understanding that allow providers, deployers, and affected persons, taking into account their respective rights and obligations in the context of the regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
We've highlighted just a few points in that definition that are key to understanding exactly what AI literacy is, and then we can take a closer look at what's actually required under Article 4, which is now in force.
Under Article 4, providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education, and training, and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used. We'll have a look at some practical steps for how to achieve AI literacy shortly, but first I just wanted to flag some other relevant points there.
Firstly, and Michelle alluded to this at the beginning of the session, even though Article 4 is officially in force, it's not likely to actually be enforced until August this year. That's because the market surveillance authorities haven't yet been designated by the EU member states, and while technically private enforcement could happen in the context of AI literacy, it's probably not very likely to happen.
The next thing I wanted to flag is something called the AI Pact, which is a voluntary initiative to encourage organizations to prepare for the full implementation of the Act. As part of the AI Pact, organizations are encouraged to collaborate and share their experiences and knowledge. That includes best practice, it could include their internal policies that may be useful for other organizations who are also going through that compliance journey. Depending on participants' preferences, those best practices could also be published online in a platform where the AI Office shares information about the implementation of the AI Act. This has been particularly interesting in the context of AI literacy because it means we've got a really good idea already of what organizations are already doing in that space.
Practically, what can you do to implement AI literacy in your business? Firstly, ensure there's a good general understanding of AI in your business and consider how it's currently being used. It's really important to document the systems that are currently in use and also identify the people that are involved in their operations.
Next, consider setting some AI literacy goals and priorities based on those risk levels associated with the AI system that's in use. For example, if you're using any high-risk AI systems, you're probably going to need more rigorous AI literacy compared to low-risk AI systems.
Remember, there's not a one-size-fits-all approach, and that point has actually been very strongly reiterated by the AI Office. What your organization needs to do in respect of AI literacy is context dependent. It can also take into account factors like your size and your financial resources, so there's no one-size-fits-all approach.
Next, consider implementing a variety of strategies to meet AI literacy. E-learning modules can provide flexibility and accessible training for all employees, or you might use some internal webinars to discuss specific AI systems and the implications of using them. You might even want to conduct regular surveys among your employees to help evaluate if they're actually benefiting from your AI literacy measures or if you need to think about something else.
Finally, there are plenty of resources out there, and we're very happy to help you find them as well. The one we found particularly interesting at this point in time is the AI Office's living repository, which is essentially examples of AI literacy practices that are already being undertaken by other organizations who have pledged the AI Pact, which is what I mentioned earlier. For example, we can see that there are some companies who've already developed a specialized training program, including videos and podcast series, and then we've got other organizations who've divided their learning sessions into some for complete beginners and also some for more experienced individuals. A really helpful resource for us to have.
Over to you, Michelle.
Michelle: Thanks, Stephanie. So, I know we went through quite a few technical elements of the EU AI Act, so I hope with this fictitious scenario we can tie everything together and also paint a clear picture for you on how the Act could apply to an organization. I'll read this rather slowly so everyone can have some time to absorb the information here.
Here we have a company called Global Tech. It's a multinational tech company based in Germany. What they're trying to do is create an AI system to evaluate their employees' performance. The AI system here is going to analyse data points like an employee's keystrokes, how they move their mouse, and email communications to assess productivity. Then, using this information, the AI system will give an output on whether this person should be promoted, perhaps given notice, or whether other HR decisions should be made.
During the trial run, there will be five people operating the AI system. You have a contractor here, who is an expert in AI brought in for this purpose, and you have four other people supporting them, who work in the IT department and have some general knowledge about AI systems. What Global Tech is planning to do is launch the AI system and test it on 20 employees in Germany first, and if the trial goes well, they're going to launch it across all their offices in Europe, the Middle East, and Asia. If it goes really well, they plan to license it to other companies as well.
How does the Act apply? Firstly, it's important to consider whether the Act applies at all. In this case, yes, it does, because Global Tech is developing this AI system in Germany and is going to put it into service, so it will fall under the Act. Then we'll look at what roles are involved here. First, we have the provider—Global Tech will be the provider because they're the ones developing it and also putting the system into service. As for the deployer, Global Tech is also the deployer because they're going to use it within their EU offices for their EU employees. But other companies based in the EU that are granted licensing rights to use the AI system will also be considered deployers, so they will have deployer responsibilities too.
We went through the risk tiers earlier, so which risk level does this AI fall under? Because this AI is going to be used for employment management in the sense that you are judging whether someone will be promoted or terminated, this is a high-risk AI system. Again, it's very important to assess the risk here to the employees, but in this case, it's significant because of the risk to privacy.
What should Global Tech do? They should start preparing for their new obligations, because those are going to kick in in August 2026. As Stephanie mentioned, AI literacy obligations will apply in this case too. Again, there's no one-size-fits-all approach, so, for example, Global Tech might need to provide different levels of training materials—specialist content for the AI expert and non-specialist content for the other employees. This could take many forms: e-learning, webinars, game-based learning. Also, because of the licensing here, they will need to provide manuals to the clients they're licensing to, so that those clients know how to operate the AI system, what the benefits are, and also, very importantly, the risks of the AI system that they're going to give out.
Stephanie: Thanks, Michelle. So just to take us through a couple of key takeaways from the session, because I think we're both very conscious that there's a lot of information here.
A key point for you to take away from this morning:
That brings us to the end of the session. Thank you so much for joining. If you want to get in touch with Michelle or with myself, please feel free to reach out. We've got our details up on the slide there. Our team also publishes quite a lot of thought leadership. We also have a monthly newsletter called Privacy Pal, so if you are interested in hearing more from us, please don't hesitate to reach out. Thank you all for joining.
Monika: Thanks so much, Stephanie and Michelle, for hosting us today. We do hope that you found the session interesting. We will have a recording of the session as well as the slides up on our website in the coming days, so you can definitely access those there. Thanks for your time and we'll see you at the end of April. Thank you.