
Future of Healthcare: AI In Healthcare

Few technologies have been as hyped in recent years as artificial intelligence, and healthcare has not been spared this attention, with various AI-based tools providing early diagnoses of everything from breast cancer to mental health conditions.

By: Adi Gaskell, Katerva’s Futurist 


Few technologies have been as hyped in recent years as artificial intelligence, and healthcare has not been spared this attention, with various AI-based tools providing early diagnoses of everything from breast cancer to mental health conditions.  A recent study from Cardiff University illustrates the general direction of travel for AI in healthcare, with a system trained on large quantities of legacy data in order to provide smarter and more efficient risk assessments for patients with cardiovascular disease.

“If we can refine these methods, they will allow us to determine much earlier those people who require preventative measures. This will extend people’s lives and conserve NHS resources,” the researchers explain.

It’s a recipe that has attracted many of the biggest names in the tech industry, from Google to IBM, and led many to speculate that a golden age of tech-driven transformation awaits us in the healthcare sector.  A report from the Wellcome Trust outlines five key areas in which they believe AI will have the biggest impact on healthcare and medicine more broadly:

  1. Using AI to make basic processes more efficient
  2. Using AI to make drug discovery more efficient and genomic science more effective
  3. Using AI to perform clinical tasks, such as diagnoses and screening
  4. Using AI to interact with patients more effectively
  5. Using AI to spot and monitor the spread of various diseases

Despite the frequent media coverage of AI applications in healthcare, the vast majority of projects to date are at an early, pilot stage, but the report nonetheless fires a cautionary shot about some of the ethical and practical implications of deploying AI at scale in healthcare.

“We find that there are overarching ethical themes, namely consent, fairness and rights, that cut across the challenges we identify,” the report warns. “We ask how users can give meaningful consent to an AI where there may be an element of autonomy in the algorithm’s decisions, or where we do not fully understand these decisions.”

Trusting the technology

It is perhaps not surprising, therefore, that trust in the value of AI among the general public remains quite low.  Research from the Institute of Electrical and Electronics Engineers (IEEE) found that many of us mistrust AI to deliver safe and reliable outcomes.  What’s more, contrary to the popular belief that millennials are the most tech-savvy generation (or perhaps because they are), they were the least trusting.

The global survey revealed a particular reluctance towards the deployment of AI in healthcare among western countries.  In the UK, for instance, just 31% of people were happy for devices to be used to gather data on their health so that AI could provide monitoring and diagnostic services.  This translated into a paltry 11% who trusted AI to make diagnoses accurately.

As Nada Sanders and John Wood argue in their compelling new book The Humachine, perhaps the way ahead is to not view AI as a means of cutting humans out of the loop, but rather to use AI to augment human decision making.  They argue that the best way forward is to ensure that humans do what humans do best, and technology does what technology does best.

It’s a way forward that seems to chime with the public, as some 75% of respondents in the IEEE study said they would trust a doctor who had used AI to inform and augment their decision, despite most of them not trusting AI when it worked independently.  The majority of the public lacked faith that AI could deliver significant breakthroughs on its own, which is why they insisted that humans remain in the frame.

Fairness is key

Aside from the effectiveness and reliability of AI tools, it is also likely that society will demand high levels of fairness, especially in insurance-based systems where access to care is fraught with challenges.  Research from the University of Manchester highlights this point, and argues that people are unwilling for the fruits of AI to be concentrated in a small number of Silicon Valley-based firms.

This is a crucial battleground, as all of the major tech giants have made plays in the healthcare sector in recent years.  Perhaps the most extensive is the Project Baseline initiative run by Google’s life science division. The project, which is run in conjunction with a range of partners, including Stanford Medicine, collects huge quantities of data from its 10,000 participants, including not only traditional medical record information, but also genomic data and lifestyle data captured from Google’s Study Watch device.

“No one has done this kind of deep dive on so many individuals. This depth has never been attempted,” the team said upon the launch of the project. “It’s to enable generations to come to mine it, to ask questions, without presupposing what the questions are.”

Research suggests that people are quite happy for their medical data to be used in this way, but they insist on it being used purely for advancing medical research or the care they receive.  The key is that the data is used for the public good, not for the private good of individual companies.

With data the lifeblood of any AI-based initiative, these issues must be overcome for the true benefits of the technology to bear fruit in the healthcare sector.  The relatively toothless nature of the various ethics committees established by the tech giants suggests that they are not able to self-regulate, much as they would no doubt like to, but equally it remains to be seen whether governments can be effective arbiters of right and wrong in such a fast-moving space.  Until these issues are resolved, however, the true value of AI in healthcare is likely to remain out of reach.