Artificial intelligence proves its worth in healthcare by improving both care processes and patient outcomes. For example, an AI model can give hospitals insight into bottlenecks in care pathways or predict when consultations will overrun. An app can then notify patients that they will be seen later, sparing them a long wait. When it comes to patient outcomes, AI helps doctors condense information so they can make the right choice for an individual patient. Good examples of this can be found in oncology.
Time-saving
When creating a radiation plan, AI can help the doctor mark the target area, known as ‘tumor contouring’. Doctors currently do this manually, which is reliable, but there is still room for improvement: contours vary between doctors, and the same doctor makes different choices at different times, resulting in small inconsistencies.
AI can not only reduce these inter- and intra-observer differences, but also save time. Research has shown that this can save about 10 minutes per contour, as the doctor only needs to check the result. In preparation for implementing such support, an innovation project is underway at HagaZiekenhuis in which radiation therapy technologists take over the contouring, with final review by the doctor.

AI can also support doctors and patients in making decisions together. Terminally ill patients, for example, can be informed about their chances of survival, allowing them to weigh the side effects of a treatment against the time it may gain. Maastro is currently testing this scenario in a clinical study with terminally ill lung cancer patients who are eligible for brain irradiation: Prophylactic Cranial Irradiation (PCI). Because PCI is a treatment with many (severe) side effects, the expectation is that better insight into survival chances will lead patients and doctors to choose less burdensome treatments more often, and in this case, to forgo PCI.
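To make this concrete, the sketch below shows how such a survival estimate could feed into a consultation. It is a minimal, illustrative example: the coefficients, feature names, and six-month horizon are all invented assumptions, not Maastro’s actual model.

```python
import math

# Hypothetical coefficients; a real model would be trained on clinical
# data and externally validated before any use in practice.
INTERCEPT = -0.8
COEFFS = {
    "age": -0.03,             # per year above 60 (assumed)
    "who_performance": -0.5,  # WHO performance status 0-4 (assumed)
    "tumor_volume_cc": -0.01, # gross tumor volume in cc (assumed)
}

def survival_probability(age, who_performance, tumor_volume_cc):
    """Estimate a 6-month survival probability with a toy logistic model."""
    z = (INTERCEPT
         + COEFFS["age"] * (age - 60)
         + COEFFS["who_performance"] * who_performance
         + COEFFS["tumor_volume_cc"] * tumor_volume_cc)
    return 1.0 / (1.0 + math.exp(-z))

# A doctor and patient could weigh PCI's side effects against this estimate.
p = survival_probability(age=68, who_performance=1, tumor_volume_cc=45)
print(f"Estimated 6-month survival: {p:.0%}")
```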
Personalizing decision aids
Another application of AI in clinical practice is the personalization of decision aids. Currently, decision aids mostly consist of texts, videos, and animations that explain a disease and its treatment options in an accessible way. They also often contain a list of questions that clarify how the different aspects of a treatment would affect a patient’s life. For example, one patient may live alone and be able to stay in hospital, while another cannot because of caregiving responsibilities.
Some decision aids also show the survival rates of different treatment options, based on clinical studies or national registries. However, this information is usually not tailored to the individual patient’s characteristics, such as age, medical history, or lifestyle. AI can be used to personalize decision aids, taking individual patient characteristics, preferences, and values into account.
Implementing AI in a decision aid allows patients to receive information specific to their situation. In collaboration with Zuyd University of Applied Sciences, Maastro is currently developing a personalized decision aid for prostate cancer patients, which will give them personalized information on the relevant treatment options. For example, a patient who already experiences incontinence probably has a higher chance of it worsening after treatment than a patient who does not.
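A toy version of such personalization logic might look like the sketch below. The treatment options, baseline risks, and the doubling factor for pre-existing incontinence are all invented for illustration; a real decision aid would derive these numbers from validated models.

```python
# Toy decision-aid logic: personalize side-effect risks per treatment.
# All numbers are invented for illustration, not clinical estimates.
BASELINE_RISK = {          # baseline risk of worsening incontinence
    "surgery": 0.15,
    "external_beam_radiotherapy": 0.08,
    "brachytherapy": 0.05,
}
PREEXISTING_MULTIPLIER = 2.0  # assumed higher risk if incontinence already present

def personalized_risks(has_incontinence: bool) -> dict:
    """Scale each treatment's baseline risk by the patient's situation."""
    factor = PREEXISTING_MULTIPLIER if has_incontinence else 1.0
    return {t: min(1.0, r * factor) for t, r in BASELINE_RISK.items()}

for treatment, risk in personalized_risks(has_incontinence=True).items():
    print(f"{treatment}: {risk:.0%} risk of worsening incontinence")
```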
Others are exploring the use of AI in the ICU, where the technology is being used to predict which patients no longer need intensive care and which are at high risk of deteriorating. Such a solution can predict how a patient will respond to treatment, allowing doctors to steer the care trajectory toward the most effective treatment for that individual while minimizing side effects.
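As a stand-in for what would in practice be a trained model, the sketch below uses a deliberately simple rule on streamed vital signs; the window size and heart-rate threshold are illustrative assumptions, not clinical values.

```python
from statistics import mean

def deterioration_alert(heart_rates, window=4, threshold=115):
    """Alert when the recent average heart rate exceeds a threshold.

    A toy early-warning rule; a real ICU system would combine many
    vital signs in a validated prediction model.
    """
    if len(heart_rates) < window:
        return False
    return mean(heart_rates[-window:]) > threshold

hr_stream = [92, 95, 101, 108, 116, 121, 124]
print(deterioration_alert(hr_stream))  # True: recent trend suggests deterioration
```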
Slow adoption
Despite the many promising initiatives mentioned above, the uptake of AI in clinical practice is painfully slow. There are currently about 100,000 prediction models with healthcare applications available online, but only a small portion of these have been clinically validated, and only a handful are actually used in practice. In our experience, many factors stand in the way of the widespread adoption of AI. Some of these include:
A black box. But is that a problem?
AI as a “black box” is a frequently mentioned issue, because an algorithm learns on its own how to arrive at a solution. Some algorithms do this in an understandable way, such as decision trees, where a series of consecutive yes/no decisions leads to a prediction. However, many research questions require more complex algorithms that are not easy to understand.
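The decision-tree case is easy to make tangible: the learned rules can be printed and read like a flowchart. The sketch below uses scikit-learn on invented toy data; the features and labels are placeholders, not clinical findings.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [age, tumor_stage]; labels: 1 = high risk, 0 = low risk.
X = [[55, 1], [62, 2], [70, 3], [48, 1], [66, 3], [59, 2], [73, 3], [51, 1]]
y = [0, 0, 1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned rules read as consecutive yes/no decisions a doctor can follow.
print(export_text(tree, feature_names=["age", "tumor_stage"]))
```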
People sometimes struggle with such opaque models, and that is understandable: how can you judge whether something works well if you don’t understand how it works? However, it is often not feasible for a model to be both highly accurate and easy to understand. We usually do not know exactly how medication works, or how our cars, planes, phones, and computers work, yet we cannot do without them. How much opacity is acceptable will depend on the application area.
Dealing with technology in the clinic
Radiation therapy is a field that grew out of technology, so the link between the clinic and technology is constantly visible. The introduction of any new technology brings uncertainties, even when the scientific basis is available and the technology is already in use elsewhere.
We address this by performing a “dry run,” which provides the opportunity to critically evaluate the work process: significant corrections are quite normal in this phase. During clinical introduction, human fine-tuning and caution prevail at first. Later, as experience with the new technology grows, more reliance can be placed on the protocol, with increased efficiency. This is also how we could approach artificial intelligence.
Is it applicable to my patients?
Another frequently mentioned problem is whether AI models apply to the patients in one’s own hospital. Many AI models are published, but who says they will work in another city, region, or country?
Many factors can affect the performance of a prediction model. An AI model from Asia, for example, is unlikely to work in a European hospital because of different environmental factors. Or one hospital may see patients with a more severe form of a disease, making a prediction model trained on that population less effective for patients with milder disease.
To detect such differences, we use external validation when developing prediction models: the model is tested on a group of patients it has not yet “seen.” When external validation is successful, preferably in multiple hospitals, it provides greater confidence. Even then, it is still advisable to first test a model in the hospital that wants to use it.
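In code, external validation boils down to evaluating a frozen model on a cohort it never saw during development. The sketch below simulates this with synthetic “hospital A” and “hospital B” cohorts, where the B cohort is deliberately shifted to mimic a sicker population; all data and numbers are fabricated for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, severity_shift=0.0):
    """Synthetic cohort; severity_shift mimics a sicker patient population."""
    X = rng.normal(loc=severity_shift, size=(n, 3))
    logits = X @ np.array([1.0, -0.5, 0.8])
    y = (logits + rng.normal(size=n) > 0).astype(int)
    return X, y

# Develop the model on hospital A ...
X_a, y_a = make_cohort(500)
model = LogisticRegression().fit(X_a, y_a)

# ... then externally validate on hospital B, which the model has never "seen".
X_b, y_b = make_cohort(300, severity_shift=0.5)
print("External AUC:", round(roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]), 2))
```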
Lowering the threshold
An important, often overlooked aspect is the integration of AI into healthcare pathways. AI models are often found on hard-to-reach, user-unfriendly websites, and in some cases even in an Excel file on a physician’s laptop. This does not promote the use of AI.
The threshold for using AI should be low. Not only should the models be safe and easily accessible, but the hospital’s health data should also be immediately usable for testing them. There is still much to be gained here if we want to evaluate how the models perform in our own clinic.
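One way to lower that threshold is to put a validated model behind a simple web endpoint that clinical systems can call, instead of circulating spreadsheets. The sketch below assumes FastAPI as one possible framework; the endpoint name, input fields, and scoring logic are placeholders.

```python
# Minimal sketch: expose a model behind a web endpoint so it is reachable
# from clinical systems, not buried in an Excel file.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PatientFeatures(BaseModel):
    age: int
    tumor_stage: int

@app.post("/predict")
def predict(features: PatientFeatures):
    # Placeholder scoring logic; a real service would load a validated model.
    risk = min(1.0, 0.01 * features.age + 0.1 * features.tumor_stage)
    return {"risk": round(risk, 2)}
```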
We must also not lose sight of the patient. The ultimate goal is, of course, to discuss the outcomes of AI models with the patient concerned. Therefore, much attention must also be paid to visualizing the results. These must be understandable for every patient, regardless of their educational level or background.
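Icon arrays (“out of 100 patients like you, 20 experience this side effect”) are one common way to make such risks tangible across education levels. The sketch below draws one with matplotlib; the 20% risk figure is purely illustrative.

```python
import matplotlib.pyplot as plt

def icon_array(risk, n=100, per_row=10):
    """Draw an icon array: colored dots for affected patients out of n."""
    affected = round(risk * n)
    fig, ax = plt.subplots(figsize=(5, 5))
    for i in range(n):
        x, y = i % per_row, i // per_row
        color = "tab:red" if i < affected else "lightgray"
        ax.add_patch(plt.Circle((x, -y), 0.35, color=color))
    ax.set_xlim(-1, per_row)
    ax.set_ylim(-(n // per_row), 1)
    ax.set_aspect("equal")
    ax.axis("off")
    ax.set_title(f"{affected} out of {n} patients experience this side effect")
    plt.show()

icon_array(0.20)  # illustrative 20% risk
```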
New challenges
Every day we see reports raising questions such as: what happens when a physician rejects the advice of a model but is ultimately proven wrong? Or when an AI model makes an incorrect prediction?
Stripped to its essence, this falls under a familiar heading: deviating from protocol based on empathy and experience. Anyone who ends up in medical practice quickly learns that healthcare cannot be captured in a mathematical formula or model. Can one ever say with certainty that a model is applicable to a broader population? Our collective knowledge is a good starting point: we should stick to proven care as much as possible.
From there, extensive experience must provide adjustment and guidance. The human team of healthcare providers and patients holds the compass for taking control of the technology, and that leads to trust and continuous improvement. It is essential to realize that this is also the moral compass, because that is where decisions are made and consequences are felt. That is healthcare, and that is why healthcare has always been so varied and endlessly fascinating. With the arrival of new technology, this only increases. Healthcare providers, together with the team and the patient, act as helmsman, supported by ever more evidence. How wonderful is it that artificial intelligence can help us generate more of that evidence?
More attention for AI
To promote the use of AI, the Dutch AI Coalition was formed in October 2019. Work has also been underway for several years at the European level to provide thoughtful advice to support the implementation of AI.
More attention to artificial intelligence will certainly raise all kinds of questions this year, for example in the areas of ease of use, the understandability of models, the availability of valuable, reliable, curated data, and safeguarding the patient’s right to consent to the use of health data (under the GDPR, known in the Netherlands as the AVG). Agreements on sustainable, secure data storage will also come closer, with a view to future developments and the health benefits such data can bring to individuals and society once it can be used.
We will increasingly be confronted with the question of how to organize our healthcare landscape technically if we want to make optimal use of technology. That is why the attention AI is receiving is welcome: healthcare must be organized around the patient not only humanely, but also in a technically smooth and safe way.