While there is a constant need to evaluate and adapt, it’s still vital to be clear about your direction of travel.
Adoption of AI will be an emotive subject within your organisation, among your customers and in society at large, so it’s important to consider how you will build stakeholder trust in the solution.
It’s important to build the controls framework into the solution up-front, rather than designing and applying it once systems are developed and in operation. This includes a mechanism to monitor outcomes and compliance.
AI should be managed with the same discipline as any other technology-enabled transformation.
There are many possible models for developing an organisation-wide AI capability ranging from a centre of excellence and a dedicated board member to a ‘develop-and-steer’ strategy. Whatever approach you adopt, it’s important to ensure cross-organisational communication, collaboration and centralised co-ordination of AI initiatives.
If data is the new IP, it’s important to put in place mechanisms to source, cleanse and control key data inputs and ensure data and AI management are integrated.
If information is power, then AI is its zenith. Yet like all power it needs to be applied with insight, sensitivity and responsibility.
Assurance over AI isn’t a one-off exercise. You should reassess the risks and opportunities as your AI platforms evolve.
Assurance over AI also involves more than embedding new technology into operational processes. It requires business-wide evaluation to gauge outcomes, identify emerging risks and capture opportunities.
Drawing on our wide-ranging research and work with clients, our responsible AI framework (Figure) is designed to provide transparency over the viability of the AI implementation project and confidence that the controls are in place to ensure that the business outcomes meet expectations.
Director – Technology Risk (Emerging and Disruptive Technology), PwC United Kingdom