Raising Children in the Era of Artificial Intelligence – Part Two

26/01/18

 

Artificial Intelligence (AI) has the potential to fundamentally rework basic aspects of the modern world: healthcare, energy, entertainment, governance, gaming; the list goes on, as we’ve discussed in PwC’s recent AI predictions. As this process unfolds, we, the adults, must prepare our children to inhabit a future full of new challenges and opportunities. At PwC, we apply a Responsible Technology approach to emerging technologies to map this landscape comprehensively.

In this second of two blogs (Part One available here), I cover issues related to jobs, health and security with input from Jonnie Penn, an AI researcher at the University of Cambridge who studies AI’s impact on society as a fellow for Google.

 

1. Jobs and Skills

PwC research suggests that the factor most highly correlated with a job’s potential for automation is the education level of the worker who currently performs it. Hence, to ensure that our children have ‘future-proof’ skills, some form of government intervention will almost certainly be needed. In the medium term, this could include a revision of the UK’s primary-school curriculum, both in terms of technical content (e.g. data literacy, logic, problem solving) and teaching methods (e.g. a focus on interpersonal skills, emotional intelligence, etc.). In addition, a deeper focus on vocational education, training and retraining in associated areas may be required. There are warnings, however, that this effort could backfire if made in haste. “A desire to ‘optimise’ children has led to ugly outcomes in the past,” warns Penn. “In the early 20th century, parents in California and the UK considered eugenics to be the key to unlocking their community’s prosperity. Time has shown that intuition to be both scientifically and morally misguided.” For this reason, Penn argues, we should listen to children and young people and ensure that they are part of the civic process.

2. Health and Wellbeing

Children in wealthy families today might grow up surrounded by AI-powered voice assistants that sound or act human. How does this interaction influence children’s wellbeing? A research group at MIT is investigating how children perceive AI technology by studying how they interact with virtual assistants such as Amazon Alexa and Google Home. Early evidence suggests that these interactions may alter children’s perception of their own intelligence in comparison to the agent’s “mind”. Further research will be needed to understand how such agents might serve as responsible companions for children, whether embedded in toys, games or otherwise. “There is still no substitute for time spent playing in nature,” Penn warns. “The benefits of that activity have had more ‘research and development’ than today’s virtual assistants.” He points to the lack of knowledge about the potential side-effects of digital technology use as one reason why Steve Jobs, Bill Gates and other tech leaders famously banned their own children from using mobile phones.

3. Privacy, Security and Integrity

Data privacy and data ethics are both hotly debated subjects in contemporary AI research, particularly when children’s privacy is at stake. In 2016, a toy company launched an AI-powered product that gave the manufacturer unprecedented access into children’s lives. One year later, it was pulled from shelves after a consumer backlash. If properly implemented, AI-powered toys and games could become a force for good, empowering children to benefit from personalised and adaptive learning. For this to happen, however, some regulation of these products is probably required, “as it is with refrigerators, cars, and other products,” says Penn, “before products are used and not afterwards.” At PwC, we explore how to design and deploy responsible AI that meets strict ethical and legal objectives.

While it is important that we prepare our education system to meet the needs of a changing job market, health and wellbeing cannot be sacrificed to accomplish these ends. Our experience at PwC has shown that once an important topic is brought to public attention, the debate usually generates action plans with tangible outcomes. To begin this process, we should listen to our children more. As my colleague Rob McCargow has done, we can ask them, “How do you feel about AI?” The question might lead to some thought-provoking responses!

 

By Maria Axente - PwC AI Programme Driver, with input from Jonnie Penn, University of Cambridge (www.jonniepenn.com)

 

