Artificial Intelligence in Health – the possibilities?

By Stephen Mills, PwC Director, Data & Analytics North

As we draw closer to our #AIHealth Hackathon (30th November – 1st December) at PwC’s new Manchester office, it is important to consider the implications of automating intelligence and what that means for patient care. In this post in our #AIHealth series, we look at the ethical implications of AI, to ensure that anything we bring into our organisations not only draws upon the benefits this type of automation brings, but is also done in a responsible way, something we at PwC refer to as Responsible AI.

At the heart of Responsible AI in healthcare must be the understanding that we are often dealing with topics that are extremely sensitive for the patient, something that has always required a good bedside manner from the clinician. How can AI replicate this bedside manner and make sure the patient is getting something akin to ‘the human touch’? Should AI even be involved in direct patient interactions?

Let us consider some scenarios where the use of AI in healthcare raises ethical questions:

  1. Diagnosis of conditions – Algorithms built upon solid medical foundations, such as journals, historical medical records and experiments, are already proving themselves accurate; examples include the smartphone apps now being marketed as diagnosing as accurately as a doctor. The issue is: what happens when AI gets a diagnosis wrong? Who is to blame? And what if, in that specific instance, a human clinician would have reached the correct diagnosis?
  2. Driverless ambulances – Most of us board flights that are largely flown on autopilot, and we’re pretty comfortable with that because we know a pilot is there as a safety measure. But how would we feel about driverless ambulances, freeing paramedics to treat patients rather than drive them to hospital? These ambulances could even use data to position themselves in high-risk areas when not responding to an emergency, allowing quicker response times. However, does this not raise the ethical question of robots deciding who should get care and who should not? What data are they using for this? For example, if patient A had a longer likely life expectancy than patient B based on location, age and other characteristics, should we allow AI to decide to respond to patient A as a priority?
  3. Allocation of appointments – There are many variables to consider when scheduling appointments for patients effectively. Is the doctor, nurse or treatment room available? Will the patient turn up? Who else needs appointments, and how urgently? AI is already proving an effective method for organising and scheduling appointments for patients. What if, however, the system fails to prioritise a patient who subsequently dies of their condition? Who is liable for that? How can we build in preventative measures to ensure that only an acceptable level of risk remains? (One illustrative approach is sketched after this list.)
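
To make that last question concrete, here is a minimal, purely illustrative sketch in Python of one possible preventative measure: an escalation rule that routes high-urgency or low-confidence cases to a clinician instead of letting the scheduler decide alone. All names, fields and thresholds here (Appointment, urgency, confidence, the review queue) are hypothetical assumptions for illustration, not a description of any real PwC or NHS system.

    from dataclasses import dataclass

    # Hypothetical thresholds: what counts as "acceptable risk" would be set
    # by clinicians and governance processes, not by the developer.
    URGENCY_ESCALATION = 0.85   # urgency above this must not be decided by AI alone
    CONFIDENCE_FLOOR = 0.90     # below this, the model's estimate is too uncertain

    @dataclass
    class Appointment:
        patient_id: str
        urgency: float      # model-estimated clinical urgency, 0.0 to 1.0
        confidence: float   # model's confidence in that estimate, 0.0 to 1.0

    def schedule(appt: Appointment, clinician_queue: list) -> str:
        """Automate only low-risk, high-confidence cases; escalate the rest."""
        if appt.urgency >= URGENCY_ESCALATION or appt.confidence < CONFIDENCE_FLOOR:
            # High-stakes or uncertain cases go to a human, so the system
            # cannot silently de-prioritise a critically ill patient.
            clinician_queue.append(appt)
            return "escalated to clinician"
        return "auto-scheduled"

    review_queue: list = []
    print(schedule(Appointment("P001", urgency=0.95, confidence=0.99), review_queue))  # escalated
    print(schedule(Appointment("P002", urgency=0.30, confidence=0.97), review_queue))  # auto-scheduled

The design point is simple: the AI handles routine volume, while anything high-stakes or uncertain is surfaced to a human, so the acceptable level of risk becomes an explicit, auditable parameter rather than an implicit property of the model.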

Even from this handful of scenarios, we can see that AI making decisions could have significant implications: adverse outcomes for patients, or biases that would not be tolerated in human society.

Alder Hey encountered such ethical challenges during their AI implementation: it was important to ensure that automated intelligence interacting with children operated within a safe environment, that the messages delivered were right for a young audience, and that the language and tone were appropriate.

Whilst the benefits of AI are clear, for any prototype created at the Hackathon and for any subsequent implementation, we need to bring these ethical questions to bear, to ensure that we are doing the right thing by our patients and that people’s human rights are respected by any AI solution we bring into our organisations.

We look forward to again welcoming organisations to the Hackathon at the end of November, and we will continue our blog series with a post on the practical capabilities required to implement AI solutions. Stay tuned!

If you would like to read the opening blog in the series, you can find it here.

Any Trusts or organisations wishing to discuss ‘AI in Health’ should contact:

Stephen Mills, PwC Director, Data & Analytics North

Mobile: 07966 265 804

Email: stephen.mills@pwc.com

 
