The UK health system has been under pressure for a long time: A&E (ER) departments come close to breaking point each winter, and a new five-year NHS plan acknowledged that waiting times will have to rise in some areas to reduce the burden in others. The solution many people are hopeful about is to look to technology to ease the strain, for both the healthcare professionals operating the NHS and the public accessing care. To investigate this as part of our monthly Future Health Projects, an insight-driven discussion on the role of communications in the future of health, we asked the question:
“What are the barriers to adoption of a fully automated health system among the general population?”
We started broadly by looking at the public’s common experiences of automated interaction, the most widely recognised being chat bots. Our discussion group predominantly associated chat bots with the service industry. When we dissected what people use chat bots for, the pattern was quick solutions and quick answers: changing the delivery time on groceries, or obtaining a refund. People had specific reasons for interacting with chat bots, each with a low-priority, low-risk outcome, and they were making quick judgements about the probability of machine vs. human error and deciding accordingly. As soon as the complexity of the request increased, people were quick to seek a human interaction to validate the decision.
How does this apply to health?
As the academic research suggests, the vast majority of our discussion group used online resources to look for health information before booking a GP appointment. While none of our group had knowingly interacted with an automated health system, the NHS 111 service came up in discussion because of the algorithms it uses to direct communication between callers and centre staff.
In regard to seeking a diagnosis, an interesting finding emerged. The two broad reasons given for using NHS 111 and for seeking out health information online were justifying action and justifying inaction.
For NHS 111, our group had already decided that seeing the doctor was justified; they just needed NHS 111’s validation, delivered through the ubiquitous safety message of “If you are worried, see your GP”. The justifying-inaction group were much more likely to use web resources to validate not taking action, often searching through multiple resources to find the answer that suited them.
It is therefore unsurprising that, far from reducing burden, NHS 111 has led to a rise in GP visits since its introduction. This insight would drastically alter our understanding of the dynamic for which NHS 111 is used, and of how it should deliver information to the people accessing its service.
Returning to chat bots: we noted that people make quick judgements about the probability of machine vs. human error and decide on a course of action accordingly. However, we are rapidly approaching a time when AI and machine-learning technologies will be able to diagnose certain diseases with a lower error rate than their human counterparts. When we posed the question, “would people be happy to receive a diagnosis from a machine?”, the answer was a resounding “no”.
That brings us to the final part of the puzzle: a lack of belief that software can understand the nuances of illness and disease. Perhaps we could rephrase this as patients’ inability to share the right information to receive a confident answer. Whichever way you look at it, two themes emerged. First, people were aware of a machine’s limitations in reaching the right answer, and of a doctor’s ability to notice things that they themselves could not articulate. Second, machine-learning platforms were thought to lack the ability to reassure. One person summed up why they went to the GP as, “The best cure for anything is to book a doctor’s appointment,” indicating that the physical act of seeing the doctor was almost as important as treating the ailment.
Validation seems to be the key to success: not in a scientifically significant sense, but in the hearts and minds of end users. What the conversation established is machines’ inability to validate a person’s emotional insecurities and inspire confidence. Does that identify a serious communications flaw, namely that patient-facing technology is designed with an end goal in mind, typically diagnosis, and does not sufficiently address the reasons people seek health information? Technology will eventually reduce the burden on health systems; it has to, for their sake. But a solution that communicates information in a way people want, and can feel confident about, will hasten the acceptance of that technology.