Many industries are in the midst of a data revolution. Businesses are going digital, and there’s a wealth of rich new data and technologies available to generate value and increase efficiency. One of those resources is AI. In the latest PwC CEO Survey, 72% of UK leaders think AI will significantly change the way they do business in the next five years. But the question is: will that change come of their own accord or at the hands of the regulators?
Our research predicts that AI will contribute $15.7 trillion to the global economy by 2030. Regulators are investing in these technologies to support their duties and to keep pace with the techniques involved. But where and how should the responsibility to govern fall?
The depth of regulation for AI applications needs to be firmed up. A third of UK CEOs interviewed believe governments should limit regulation around data protection to facilitate the development of AI. 47% of organisations adopt an even more radical stance, believing they should be allowed to self-regulate their use of AI. CEOs look divided, and greater clarity on the respective roles of policy makers and industry regulators needs to be reached.
And the scope is broad, reaching far beyond the assembly of an algorithm. Ensuring unbiased and ethical applications of the technology needs to be addressed. For example, how do you regulate facial recognition in policing? And what level of transparency and explainability should algorithms adopt when banks approve or deny a loan, or insurers price their policies?
Regulators need to be agile. Recent headlines around data breaches have made some question whether regulators have been too slow to keep up with the speed of change. Start-ups have forced regulators to react, but when you look beyond data privacy, the rules are limited.
The key lesson is that the journey of learning is continuous: it needs to be embraced at pace and across complex ecosystems of stakeholders.
In financial services, hackathons and sandbox initiatives act as significant catalysts, bringing together regulators, academia, industry representatives, tech communities, customer groups and public bodies around key systemic challenges (e.g. tackling financial crime and anti-money laundering detection).
Innovation can be tested in ‘well-defined environments’, where the risks of compliance breaches and undue operational impact are mitigated. Firms can receive a steer from regulators early on, address concerns quickly and benefit from a diverse network of expertise. Experience shows this allows businesses to challenge the status quo and rise above daily operational challenges.
International cooperation and networks for regulators must gain pace swiftly so that similar issues and challenges can be addressed across jurisdictions. The UK Financial Conduct Authority (FCA) is at the forefront of RegTech innovation: its unveiling of a global sandbox testbed helped spark the launch of the Global Financial Innovation Network. This fosters the testing of cross-border applications and has created an ecosystem of dialogue on the betterment of regulation.
We’ve seen it often with our clients: organisations that are late to collaborate on industry-wide initiatives and open up to networks of peers will need to catch up. When it comes to engaging with regulators, organisations may not have their say in the development of future policies and standards if they’re late to the game; now is the perfect time to get involved, before expectations firm up. Being part of a rich ecosystem also offers the opportunity to challenge the internal status quo, unlocking and scaling innovation.
Our survey results told us that nearly two thirds of UK CEOs believe governments should play a critical and integral role in AI development. I say we hold the mirror up to ourselves and look inwards for solutions too. What do you think?
To learn more about AI at PwC, visit pwc.co.uk/ai.
Fabrice Ciais, Director of Artificial Intelligence (AI), Technology & Investment
70% of UK leaders believe governments should individually develop a national strategy and policies for AI.
Which areas of AI and regulation would most help - or hinder - your future plans?