Transcript: A - Z of Tech Episode 5: E for Ethics

Louise Taggart

Hello everyone and welcome to what is now episode 5 of the A-Z of Tech podcast. Today we are looking at E for Ethics, and previous listeners may notice a slight change: my co-host is now Hugo.

Hugo Warner

Hello everyone! My name is Hugo Warner and I sit in PwC’s disruption team. I’ve come here to fill Felicity Main’s big shoes.

Louise

It’s a pleasure to have you here. And today we are joined by Ollie Buckley, who is executive director of the newly established Centre for Data Ethics and Innovation, and Maria Axente, who is the AI programme driver here at PwC and part of the team that is building responsible AI globally and advising clients on its ethical use.

One thing that has emerged over the past few podcasts we’ve been recording is that technology attracts people from a very diverse range of backgrounds and career paths, and I’d be really interested to hear where you both came from and how you got here. Maria?

Maria Axente

That’s a very good question and an interesting one.

I come from Romania; my mum is a maths and physics teacher, so I’ve been surrounded by science my whole life. At some point early on I started watching Star Trek and Star Wars, and I became fascinated – not so much with the technology itself – but with how humans interacted with it.

I ultimately ended up studying business and had a career focussed on business, but technology kept coming up as a theme.

When I joined PwC, I joined the digital practice and spent some time helping clients make sense of technology. When we set up the AI Centre of Excellence, I joined the centre. I’m also really keen to help children, young adults and young girls engage with technology.

Louise

And Ollie, what’s your journey been to the CDEI?

Ollie Buckley

That’s a very good question. I guess I have been a lifelong lover of gadgets and technology, so I’ve always been interested in tech, but I pursued it from a different angle academically: I did philosophy and politics at university. I then worked as a management consultant for a bit, and from there I moved into government. I pursued a circuitous path around government policy before ending up in the Government Digital Service. What was particularly eye-opening about that experience was that I got to see how technology was having really profound impacts on people.

Louise

In the context of this podcast, which is tech focussed: Ollie, why is it important that we’re talking about ethics?

Ollie

Because we’re talking about enormously powerful, transformative technologies that have the potential to deliver huge public benefits. But because they’re powerful, they also need to be handled responsibly.

So it’s really important that when we go about designing and deploying these things, we are constantly asking ourselves, ‘Are these consistent with our values?’

Louise

And Maria, you would agree with that?

Maria

Definitely.

To build on what Ollie said about values, ethics is ultimately the values we operate by. Ethics is how we define the principles that allow us to determine whether a course of action is good or bad.

In the context of a new technology that is set to shake humanity to its core, we will develop systems that operate in an independent manner, autonomous from us and alongside us; we will work and live alongside intelligent machines.

In this context, when we delegate some of our daily tasks, there is an expectation that the machines will operate by those values. Given that values are very personal and very subjective, the question arises: ‘Which values need to be incorporated in the system? Who makes that decision: myself, the developer?’

It’s important to have a public debate on this, and also a way to inform action at various levels of society, public and business alike. Business has a huge responsibility because ultimately it is businesses and industry as a whole that develop these technologies.

Hugo

Fantastic.

So we have the CDEI, a new part of government. Are you able to talk us through the centre and where it came from?

Ollie

Yes, certainly.

At the highest level, the centre exists to advise the government here on how we maximise the benefits of AI. We do that by connecting disparate communities: by talking to civil society, to leading academics and, really importantly, to the general public. To understand what the rules of the game we should be following here are, how we give those who want to innovate the confidence to do that, and how we ensure the public can trust what is happening and make it clear there are people looking out for them.

Louise

And making sure people can trust the businesses and services that ultimately they’re the end users of.

Ollie

Absolutely.

Louise

And Maria, from your perspective working with clients globally, what are some of the issues and challenges you’re helping them tackle?

Maria

First of all, let’s step back from ethics and look at AI in general. I think not just in the UK but globally there is a need for proper education and demystifying of what AI is. AI is a powerful technology, but at the same time not everyone needs to know what deep learning and machine learning are. What we need to know is how it operates, and whether there is any way to check that what’s important – the values – are embedded, and how they’re embedded.

Clients have started to understand that those values need to be embedded in the autonomous system. But when we started looking at the job, we realised it’s much more complex than data and algorithms. It’s about fairness and bias. The media is flooded – which is a good thing – with stories about how various tools, from voice assistants to robots, are gender biased, and that has drawn attention to the fact that when we build these tools we are using historic data. And the data is biased because we are biased.

No matter how many checks you put in place, those tools can go wrong because they mirror our biases. This is where our clients are: they realise the scale, the importance and the complexity of the task. They understand they need to disrupt themselves. And that goes back to the questions we ask within PwC: how do we transform and disrupt in a way that is sustainable for the business, and what is their role in society?

Ollie

And I think there’s something interesting in that. On the one hand, we rightly appreciate the power and potential of these tools. But on the other, we need to be really conscious of their limitations. They’re amazing, but they’re only partial replications of human intelligence, and there’s a real danger that by being awed, you fail to understand the things you need to look out for.

Our underlying philosophy at the Centre is that these are fundamentally technologies that will benefit society. But they will only benefit society if we are aware of their limitations. On the one hand, we know there are risks – for example, as Maria was saying, the potential to perpetuate societal biases. But they also offer opportunities to resolve these because, as we all know, humans are very imperfect decision makers. We can actually look to these technologies to help with some of the imperfections in our own approaches.

Louise

So how do we actually go about codifying or setting up frameworks for something that in many cases might be an unconscious bias that we might not even know we have?

Ollie

So, one thing I would say is that we are not starting with a blank piece of paper. There is an existing legal framework that surrounds this stuff. We have already decided as a society that there are certain protected characteristics which you should not take into account in important decisions in people’s lives, including whether to give someone a job.

So, step one is to make sure that you are not including those sorts of biases in the systems you are developing. Where it gets more challenging, in the case of more complex AI systems, is the risk that you might be including those biases without realising it. For example, you may not be including gender explicitly in the way you are processing CVs, but the system can identify and use proxies for it. So you’re not explicitly saying this candidate is male and this one is female, but the system has learnt enough from the details of the CV to give a pretty good indication of which one a candidate is.

That’s where we need to think about more sophisticated approaches to address these things.
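To make the proxy problem concrete, here is a minimal sketch of one way to audit for proxy leakage, using scikit-learn and entirely synthetic data; the feature names are hypothetical, and this illustrates a generic technique rather than any specific method the CDEI uses. The idea: if a "probe" model can predict the protected attribute from the supposedly neutral features, those features act as proxies for it.

```python
# Minimal proxy-leakage audit sketch (synthetic data, hypothetical features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)  # hypothetical protected attribute (0/1)

# Hypothetical CV features that correlate with gender in this synthetic
# data, mimicking historic patterns baked into real-world datasets.
career_break_years = rng.normal(loc=gender * 1.5, scale=1.0, size=n)
hobby_keyword_score = rng.normal(loc=gender * 0.8, scale=1.0, size=n)
X = np.column_stack([career_break_years, hobby_keyword_score])

# Train a probe to predict the protected attribute from the features alone.
X_train, X_test, y_train, y_test = train_test_split(X, gender, random_state=0)
probe = LogisticRegression().fit(X_train, y_train)
acc = accuracy_score(y_test, probe.predict(X_test))

# Compare against the accuracy of always guessing the majority class.
base_rate = max(y_test.mean(), 1 - y_test.mean())
print(f"probe accuracy: {acc:.2f} vs base rate: {base_rate:.2f}")
# A probe that beats the base rate by a wide margin means the features
# leak the protected attribute, so a downstream screening model could
# discriminate by proxy even with that attribute removed.
```

In practice, a probe accuracy well above the base rate is exactly the warning sign Ollie describes: the system never sees "male" or "female" explicitly, yet the remaining features carry enough signal to reconstruct it.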

Maria

And I think when looking for a solution, there are two layers to this.

One is the technology, and we’ve been working on a set of tools to address bias in algorithms. But the other is at a process level, a cultural level. As Ollie said, some of the time, most of the time, the bias is at an unconscious level. This is how we operate as humans, so how do we ensure that when we go into designing and developing these tools, we have enough diversity in the room? Research has shown that diversity helps make us aware of who we are and of our inherent biases.
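As one illustration of what a bias check at the technological level can look like, here is a minimal sketch of the disparate impact ratio, a widely used fairness metric; the data is invented, and this is a generic example, not PwC’s actual toolkit.

```python
# Minimal fairness-metric sketch: disparate impact ratio on toy data.
import numpy as np

def disparate_impact(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Ratio of favourable-outcome rates between two groups (0 and 1).

    A value near 1.0 suggests parity; the US "four-fifths rule" flags
    ratios below 0.8 as potentially discriminatory.
    """
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return min(rate_0, rate_1) / max(rate_0, rate_1)

# Hypothetical screening decisions (1 = shortlisted) for two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"disparate impact: {disparate_impact(y_pred, group):.2f}")  # 0.67
```

Checks like this are only the technological layer Maria describes; they can flag a disparity but not explain or fix the cultural processes that produced it.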

We are doing an excellent job of looking for biases at a technological level. The more complex part comes when we assemble the teams. It’s not just about bringing in the right skills. The biggest problem the AI community faces at the moment is the lack of skills, and in particular the lack of female researchers. A study published by UNESCO just yesterday found that only 12% of AI researchers are women. We need to start addressing this at the early stages of education: how do we empower more young girls and women to join technology careers?

Hugo

Which has interesting results. UNESCO also spoke about some of the unforeseen consequences of home assistants, saying they’re programmed to give flirtatious responses to demeaning questions asked by users. So it’s an interesting example of biases not only being baked into the algorithm but actually being reinforced by it in the real world.

I wonder, is that something explicitly on the CDEI’s agenda?

Ollie

So bias certainly is. I can tell you a bit about what we’re up to over the next 12 months.

Essentially, we’ve got two sets of activity that we’re organising ourselves around. The first is called ‘analyse and anticipate’, and the objective there is to look across the landscape, ask where we see the biggest emerging areas of opportunity and risk, and advise on where we therefore think more attention should be paid. That’s looking across sectors.

The other is two longer-term reviews, which will be conducted over the next 12 months. The first of those is looking at online targeting and trying to get a sense from the public of how they feel about the way their data is used to personalise their experience online, where they feel that’s comfortable and legitimate, and where they have concerns. That will give us a sense of where rules might need to change.

The other area is the potential for bias in algorithmic decision making, and there we’re taking a sector-based approach. We are looking at financial services, and at the crime and justice area, where we’re seeing more tools being developed to help police forces, and where we see systems in the US being used to inform judges about the best kind of decision to make about an individual. Later on we’re looking at HR and then social services. These are all areas where the use of these systems could have big impacts.

The reason we’re taking a sector specific approach is because context is incredibly important here so you may care about different things in different environments.

In terms of the specifics on the nature of voice assistants, we are actually just about to start some work on smart speakers, and we’ll be producing a short report on that.

Maria

If we allow only the tech sector to do this job, we might have just one actor representing different interests. We need to balance that with what society thinks ethics means.

Hugo

And on that, the UK is taking a lead here, but most of the technology is being developed outside of the UK. You think of the dominance of Silicon Valley, of China and their speed of development. So how can we act as an adult voice in the room in the debate about how those technologies are advancing?

Ollie

So in a sense, I think part of our claim is that the UK punches well above its weight in the development of AI and data-driven tech. We’re second in the world for the production of high-quality research, and we have a credibility that comes with the fact that we’ve got some of the world’s leading experts in the technology here. The other thing the UK has a reputation for globally is the setting of standards and the development of pragmatic, proportionate regulation of new technologies, of financial services. We’re a global centre for professional services and for law, we have fabulous research universities and strengths in the social sciences. We live in a world where suddenly doing a philosophy PhD gets you a highly paid job!

We are uniquely placed to be a leader in this field.

Maria

And I think, to add a third, it’s the engagement with business. From the beginning of the AI community in the UK, we have been part of all those conversations. Industry is very active in engaging with government, with parliament and with academia, coming together to understand how AI should be regulated.

Hugo

I think we’re all in agreement that AI solutions shouldn’t cause harm to humanity and should help us work together for the common good, but how can we ensure AI considers its impact on children and young people, and on issues such as wellbeing?

Maria

That’s an excellent question.

When we develop AI solutions, they may be ethical, profitable and sustainable in their usage, but that doesn’t yet mean that they will contribute to human wellbeing. When we develop autonomous systems to live alongside us, they will change how we interact with each other and with those systems, and therefore there is the potential to impact – positively or negatively – our mental and physical state.

A lot of research has been undertaken to understand the impact of those autonomous systems on young people and children, and further research is needed. There is an important piece of work currently being undertaken that samples AI researchers from around the world to develop a series of wellbeing indicators to be included in the development of AI systems. That’s one of the important initiatives. Another is a pop-up research centre that is part of UNI Global, which has set up a young workers’ lab whose main purpose is to look at how digital tools impact the wellbeing of young workers: how their input is being captured and what sort of engagement we should have with young workers in developing these systems.

Louise

And Ollie, how is this being reflected in some of the work the CDEI is undertaking?

Ollie

I think it is a foundation for everything we’re doing that we consider how these systems are impacting people. One piece of work I would highlight is on online targeting; in particular, we’re interested in looking closely at how the way your online experience is personalised, the way algorithms keep you engaged, can affect more vulnerable people. That includes the very young, who may not yet have developed the critical faculties to be clear about what’s happening, but also vulnerable people in the adult population: we hear horror stories about gambling addicts being targeted with adverts.

Part of what we want to do is look across the regulation of these different environments and see if the rules are strong enough.

This is part of a much wider conversation about online harms and how to address them which the government is taking forward.

Hugo

And where do you see this heading? Obviously we’ve got some very complex themes, but do you see this perhaps moving towards the emergence of common principles for society and businesses? What’s the next step for the CDEI’s research?

Ollie

Yes, we want to highlight how you can do this well and give guidance on that. I think it’s easy to forget, given how prevalent the online environment is these days, that these are really new inventions and we are only just starting to understand their impacts. We have a long way to go to understand them more fully. Industry is waking up to it, and the fact that we see these issues discussed more prominently makes me very optimistic that we can learn how to develop these technologies healthily.

Maria

It’s also worth mentioning the World Economic Forum’s Generation AI initiative, which has brought together legislators from the UK, the US and Europe, and also, importantly, UNICEF, to understand what key considerations for young adults and children need to be included in policymaking relating to children. We hope initiatives like this will strengthen, so that when we create frameworks they’re also strong.

Hugo

What I think we’d love to know is where our listeners can go for more information about what PwC and the CDEI are doing in this area.

Ollie

If you would like to find out more about the CDEI you can go to gov.uk/CDEI, where you’ll be able to find details of our strategy and also ways to express your own views back to us. We’ve got calls for evidence out on bias and targeting, and we’re really keen to get views from individuals on these issues.

Maria

If you’re interested in ethics and AI at PwC, we are about to launch a responsible AI toolkit, so follow us on social media for more updates. For more on young people, I’m on Twitter and I’m quite active, so follow me.

Louise

Maria and Ollie, thank you very much for joining us today.

Join us next time, which will be F for Fintech, and in the meantime feel free to follow me on Twitter; I’m @LouTagTech.

Hugo

And I’m @HugoWarner1.

Until next time, don’t forget to like and subscribe and tell all of your friends!
