
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

Using machine learning for the personalised prognostication of Merkel cell carcinoma

12th February 2025

Machine learning tools can improve the personalised prognostication of aggressive skin cancers, such as Merkel cell carcinoma (MCC), according to a new study.

MCC is the most aggressive form of skin cancer, often presenting in advanced stages. Currently, no personalised prognostication tools exist, and survival rates are poor. As such, artificial intelligence (AI), including machine learning, is being utilised to address and improve the clinical management of skin cancers such as MCC.

In this study, researchers employed a two-step approach, using advanced machine learning techniques, to develop a prognostic tool called DeepMerkel. The data came from two sources: the SEER database – a large cancer database maintained by the US National Cancer Institute (NCI) – and a UK dataset. Together, they involved over 10,000 patients, all of whom had histologically confirmed MCC during the study period.

Firstly, using explainability techniques, they were able to determine patterns in clinical data such as tumour size, age and immune status. This revealed which factors were most influential on survival in MCC patients.

The second step used deep learning-based feature selection, which meant the machine learnt to automatically select the factors that were most important for predicting survival. This was combined with a modified XGBoost framework – a gradient boosting algorithm used to structure the data – allowing time-dependent predictions to be made.
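
The two-stage 'select, then boost' pattern described above can be sketched on synthetic data. Everything below is an assumption for illustration: the data are random, scikit-learn's GradientBoostingClassifier stands in for the modified XGBoost framework, and a simple linear ranker stands in for the deep learning feature selector.

```python
# Illustrative two-step pipeline: rank features, then fit boosted trees.
# All data and model choices are stand-ins, not the study's method.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Ten synthetic clinical features; only the first three drive the outcome
X = rng.normal(size=(n, 10))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Step 1: keep the three most influential features
# (the paper uses deep-learning feature selection; this is a lightweight proxy)
selector = SelectFromModel(LogisticRegression(max_iter=1000),
                           threshold=-np.inf, max_features=3).fit(X, y)
X_sel = selector.transform(X)

# Step 2: boosted trees on the selected features predict survival status
clf = GradientBoostingClassifier(random_state=0).fit(X_sel, y)
print(X_sel.shape, round(clf.score(X_sel, y), 2))
```

Restricting the second-stage model to a small set of pre-selected clinical factors is what makes the tool usable from "readily available clinical information" rather than a full feature dump.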

The machine learning tool was tested on an international clinical cohort and it made accurate, personalised, time-dependent survival predictions for MCC from readily available clinical information. Retaining accuracy across a wide range of patient groups highlights the broad-reach potential of the new tool.

It outperformed current population-based prognostic staging systems, such as the American Joint Committee on Cancer (AJCC) staging system. Considered the clinical gold standard, the AJCC predicts outcomes using general population data rather than personalised risk factors.

The new machine learning tool also accurately predicted disease-specific survival (DSS) at five years. Additionally, it was able to differentiate time-to-death, providing greater clinical insight and allowing clinicians to personalise treatments more effectively. They could also adjust patient care to improve outcomes, manage symptoms and, where appropriate, make decisions about end-of-life care.

The researchers described MCC and DeepMerkel as ‘the exemplar model’ of personalised machine learning prognostic tools in aggressive skin cancers, highlighting the potential for AI-driven approaches in other areas of oncology.

Reference
Andrew T et al. A hybrid machine learning approach for the personalized prognostication of aggressive skin cancers. npj Digit Med 2025. DOI: 10.1038/s41746-024-01329-9.

Faster and more accurate stroke care possible via machine learning model for brain scan readings

12th December 2024

A machine learning model can more accurately estimate the age of acute ischemic brain lesions than current methods, with researchers predicting the software could mean up to 50% more stroke patients receive appropriate treatment.

The efficacy and appropriateness of stroke treatment depended on the progression stage or biological age of the lesion and whether it was deemed to be reversible, researchers wrote in the journal NPJ Digital Medicine.

‘Biological age is closely related to chronometric lesion age – i.e. time from symptom onset – although these ages disassociate due to variability in tissue vulnerability and arterial collateral supply,’ they said.

Acute ischemic lesions scanned with non-contrast computerised tomography (NCCT) become progressively hypoattenuated over time, the research team explained, a feature which helped to estimate biological lesion age.

At present, clinicians measured the relative intensity (RI) of a lesion on NCCT using a method termed Net Water Uptake (NWU).
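
As a rough illustration, NWU is commonly reported as the percentage density reduction of the lesion relative to mirrored healthy tissue. A minimal sketch, with purely illustrative Hounsfield values that are not from the study:

```python
def net_water_uptake(hu_ischemic: float, hu_contralateral: float) -> float:
    """Percentage net water uptake from mean CT densities (Hounsfield units).

    Uses the widely cited formulation NWU = (1 - D_ischemic / D_normal) * 100.
    Values passed in here are illustrative only.
    """
    return (1.0 - hu_ischemic / hu_contralateral) * 100.0

# A lesion of 28 HU against healthy contralateral tissue of 35 HU:
print(round(net_water_uptake(28.0, 35.0), 1))  # 20.0
```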

However, the researchers noted this approach could be confounded by alternative sources of hypointensity, was insensitive to additional ischemic features and was dependent on lesion segmentation.

For this trial, researchers from Imperial College London and University of Edinburgh, UK, and the Technical University of Munich, Germany, developed a convolutional neural network – radiomics (CNN-R) model to optimise lesion age estimation from NCCT.

They noted that machine learning models had several advantages over current methods for stroke assessment such as the ability to screen high-dimensional imaging features for associations with ischemia progression, including those imperceptible to experts, as well as account for lesion anatomy variability and signal heterogeneity.

They trained the CNN-R model on chronometric lesion age, while validating against chronometric and biological lesion age in external datasets of almost 2,000 stroke patients.

Analysis showed the deep-learning model was approximately twice as accurate as NWU for estimating chronometric and biological ages of lesions.

‘The practical importance of our results lies in the CNN-R lesion age biomarker providing more accurate estimates, compared to current methods, of stroke onset time (unknown in ~20% of cases), and lesion reversibility, both currently used for decisions regarding revascularisation treatments,’ the researchers wrote.

As well as validating the method in a large, independent cohort, the researchers said they had demonstrated the technique could be embedded within a central pipeline of automated lesion segmentation and clinically-based expert selection.

Future research should assess whether the higher accuracy of a CNN-R approach to lesion age estimation carries over to predicting lesion reversibility and functional outcomes, they added.

Lead author Dr Adam Marcus, of Imperial College London’s Department of Brain Sciences, estimated up to 50% more stroke patients could be treated appropriately because of this machine learning method.

‘We aim to deploy our software in the NHS, possibly by integrating with existing artificial intelligence-analytic software that is already in use in hospital Trusts,’ he said.

Study senior author Dr Paul Bentley, of Imperial College London’s Department of Brain Sciences and consultant neurologist at Imperial College Healthcare NHS Trust, said the information would help clinicians make emergency decisions about stroke treatment.

‘Not only is our software twice as accurate at time-reading as current best practice, but it can be fully automated once a stroke becomes visible on a scan,’ he said.

The study follows research released last month showing artificial intelligence-enabled electrocardiography (ECG) can accurately predict an individual patient’s risk of future cardiovascular events as well as their short and long-term risk of dying.

Lead author of this study Dr Arunashis Sau, an academic clinical lecturer at Imperial College London’s National Heart and Lung Institute and cardiology registrar at Imperial College Healthcare NHS Trust, said compared with cardiologists the AI model could detect more subtle detail in the ECGs.

How AI and machine learning trends are impacting healthcare

2nd December 2024

Staying abreast of developments in artificial intelligence and machine learning is becoming increasingly important for the delivery of timely, efficient and cutting-edge healthcare, but that can be a challenge. Data science academic Dr Russell Hunter PhD looks at the top trends that healthcare professionals and their organisations need to know about as they navigate this rapidly evolving landscape.

There has long been a widespread interest in how artificial intelligence (AI) and machine learning (ML) could transform the healthcare sector. For example, common searches on Google include questions such as ‘How is machine learning used in healthcare?’ and ‘Does the NHS use machine learning?’.

The interest was taken up a notch recently when the Government committed to a digital-first NHS following critical concerns raised in the Darzi report. Yet, although AI and ML are reshaping everyday practices within healthcare, questions – and perhaps scepticism – remain. And it can be hard for healthcare leaders to address concerns when they are not experts and AI is evolving so fast.

So, what do leaders need to know in terms of emerging trends in AI and ML, and how can those who are suspicious of AI be convinced that it can be a help rather than something to be worried about?

Explainable AI

Explainable AI, also known as XAI, aims to make AI decisions understandable to humans, enhancing trust and regulatory compliance.

When a model is built to solve a particular problem, persuading stakeholders to come on board can often be difficult. In fact, many would prefer a model that is more easily understood, even if it is less optimal. Something that can be visualised is preferable to jumping on board with a mysterious model that works for unknown reasons. This is especially important when it comes to healthcare or finance.

In healthcare, XAI provides explanations for diagnostic decisions or treatment recommendations made by AI systems. These explanations are crucial for doctors and patients to trust and act on AI-driven insights, ultimately improving patient outcomes. AI models used for predicting patient risks, such as the likelihood of developing a certain disease, need to be clear and understandable to ensure that healthcare providers can grasp the underlying factors behind the risk assessment.
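
One simple, model-agnostic explanation technique of the kind XAI draws on is permutation importance: measure how much a model's accuracy drops when each feature is shuffled. The sketch below uses invented feature names and synthetic data purely for illustration.

```python
# Hedged sketch: permutation importance on a synthetic disease-risk model.
# "age", "bmi" and "noise" are invented features, not real patient data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
age = rng.uniform(30, 80, n)
bmi = rng.uniform(18, 40, n)
noise = rng.normal(size=n)
X = np.column_stack([age, bmi, noise])
# Risk is driven by age and BMI only; the third column is pure noise
y = ((age - 55) / 10 + (bmi - 29) / 5
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)
for name, imp in zip(["age", "bmi", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

The explanation here is the ranking itself: a clinician can see that the model's risk estimate rests on age and BMI, not on the noise column.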

Autonomous decision-making

Autonomous systems are transforming healthcare by accelerating the speed and precision of decision-making, driving greater efficiency and enhancing customer experiences. In the business world, ML technologies can increase companies’ ability to quickly analyse vast amounts of data while uncovering patterns and making informed decisions.

Just as automating manual processes can help make sense of business data, advanced systems can be applied to healthcare. Sophisticated multimodal AI can analyse genetic data and patient histories to recommend personalised treatment plans. This leads to more effective and individualised healthcare.

Similarly, by leveraging data from electronic health records, these systems can predict patient outcomes or complications, which allows for proactive intervention.

Agentic AI

Agentic AI is a new class of AI designed to act with autonomy. It proactively sets its own goals and takes autonomous steps to achieve them, making decisions and taking action without direct human intervention. This makes it a significant advancement beyond classical reactive AI.

These proactive systems can enhance patient care and have the potential to alleviate the burden on healthcare professionals by automating routine monitoring and treatment adjustments.

In the realm of personalised healthcare, agentic AI can revolutionise patient care by continuously monitoring patient health metrics and autonomously administering medication as needed. For example, an agentic AI system could monitor the blood sugar levels of a patient with diabetes in real-time and administer insulin precisely when required, thus maintaining optimal glucose levels and reducing the risk of complications.
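
The monitor-decide-act loop described above can be caricatured in a few lines. The thresholds and actions below are entirely invented for illustration and are in no sense clinical guidance:

```python
# Toy illustration of an autonomous monitor-decide-act loop.
# Thresholds and actions are invented; this is not a clinical protocol.
def insulin_decision(glucose_mmol_l: float) -> str:
    """Return a hypothetical action for a given glucose reading."""
    if glucose_mmol_l > 10.0:
        return "administer corrective dose"
    if glucose_mmol_l < 4.0:
        return "alert: hypoglycaemia risk"
    return "no action"

# Simulated stream of readings the agent would act on without human input
for reading in [5.2, 11.4, 3.6, 7.8]:
    print(reading, "->", insulin_decision(reading))
```

A real agentic system would wrap such decision logic in continuous sensing, safety limits and escalation to a human clinician; the point of the sketch is only the shape of the loop.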

Agentic AI can also help with personalised treatment plans for chronic diseases by analysing vast amounts of patient data to predict disease progression and suggest tailored treatment plans. For instance, in oncology, agentic AI can process data from medical records, genetic profiles and treatment responses to recommend personalised chemotherapy protocols, potentially improving outcomes and minimising side effects.

Edge AI

Another cutting-edge development is Edge AI, which brings an immediate processing capability crucial for applications in healthcare monitoring where time-sensitive tasks require prompt responses. This is achieved by processing data locally on the device, reducing latency, enabling real-time decision-making and minimising the amount of data that needs to be transmitted to central servers.

Processing sensitive information locally also enhances privacy and security, reducing the risk of data breaches during transmission, which is particularly important with healthcare data.

However, there are challenges. There are hardware limitations and integration complexity, and there is a need for efficient management and maintenance of numerous edge devices. These could curtail the full effectiveness of edge AI.

Augmented workforces

While there are concerns that AI will replace humans in the workplace, the latest AI developments can augment rather than undermine human contributions. For example, AI can assist doctors by analysing medical images and patient data to identify patterns that the human eye might miss. This allows doctors to make more accurate diagnoses and develop personalised treatment plans, thereby improving patient outcomes and operational efficiency.

The collaboration between humans and AI combines the strengths of both, allowing AI to handle repetitive, data-intensive tasks while people focus on strategic, creative and interpersonal activities that require emotional intelligence and critical thinking. This applies to healthcare as much as any other sector.

Rather than eliminating jobs, AI reshapes them. As technology advances, new roles will be created where the job is managing, programming and collaborating with AI systems. It is crucial to keep an eye on developments to ensure healthcare organisations are fully equipped to gain an edge by leveraging AI and ML.

Dr Russell Hunter has a PhD in Computational Neuroscience and works at the University of Cambridge. He leads the course Leveraging Big Data for Business Intelligence at Cambridge Advance Online.

A version of this article was originally published by our sister publication Healthcare Leader.

Parkinson’s disease subtypes revealed using machine learning models

14th August 2023

Use of machine learning has enabled scientists to accurately predict four subtypes of Parkinson’s disease based on images of patient-derived stem cells.

Parkinson’s disease is a neurodegenerative condition that affects both movement and cognition. Symptoms and progression vary based on the underlying disease subtype, although it has not been possible to accurately differentiate between these subtypes.

This may well change in the near future as a team based at the Francis Crick Institute and UCL Queen Square Institute of Neurology, together with the technology company Faculty AI, have shown that machine learning can accurately predict subtypes of Parkinson’s disease using images of patient-derived stem cells.

The work, which was published in the journal Nature Machine Intelligence, generated a machine learning-based model that could simultaneously predict the presence of Parkinson’s disease as well as its primary mechanistic subtype in human neurons.

In the study, researchers generated stem cells from patients’ own cells and chemically created four different subtypes of Parkinson’s disease: two involving pathways leading to toxic build-up of the protein α-synuclein and two involving pathways leading to defunct mitochondria. Together, this created a ‘human’ model of the disease.

Next, researchers imaged these disease models and ‘trained’ the machine learning algorithm to recognise each subtype, from which it was then able to predict the particular subtype.

Prediction of Parkinson’s disease subtype

The machine learning model enabled researchers to accurately identify a disease state from a healthy control state.

Quantitative cellular profile-based classifiers achieved an accuracy of 82%. In contrast, image-based deep neural networks could predict control and four distinct disease subtypes with an accuracy of 95%.

The machine learning-trained classifiers achieved high accuracy across all subtypes using the organellar features of the mitochondria, with an additional contribution from the lysosomes, confirming the biological importance of these pathways in Parkinson’s disease.

James Evans, a PhD student at the Francis Crick Institute and UCL, and co-first author with Karishma D’Sa and Gurvir Virdi, said: ‘Now that we use more advanced image techniques, we generate vast quantities of data, much of which is discarded when we manually select a few features of interest.

‘Using AI in this study enabled us to evaluate a larger number of cell features, and assess the importance of these features in discerning disease subtype. Using deep learning, we were able to extract much more information from our images than with conventional image analysis. We now hope to expand this approach to understand how these cellular mechanisms contribute to other subtypes of Parkinson’s.’

Sonia Gandhi, assistant research director and group leader of the Neurodegeneration Biology Laboratory at the Francis Crick Institute, who was also involved in the study, said: ‘We don’t currently have treatments which make a huge difference in the progression of Parkinson’s disease. Using a model of the patient’s own neurons, and combining this with large numbers of images, we generated an algorithm to classify certain subtypes – a powerful approach that could open the door to identifying disease subtypes in life.

‘Taking this one step further, our platform would allow us to first test drugs in stem cell models, and predict whether a patient’s brain cells would be likely to respond to a drug, before enrolling into clinical trials. The hope is that one day this could lead to fundamental changes in how we deliver personalised medicine.’

Predicting frequent emergency care use with machine learning no better than existing models

10th February 2023

Prediction of frequent emergency care use among those with chronic conditions using machine learning models is not superior to existing approaches.

Predicting frequent emergency care use by patients with chronic health conditions with machine learning models does not offer any additional benefit over existing modelling approaches, according to the findings of a retrospective analysis by Canadian researchers.

While not universally accepted, a threshold of at least three visits per year has been used to define a frequent emergency care user. Such individuals often have complex health needs which are not met through primary care provision and consequently their condition deteriorates, leading to an emergency department (ED) visit. Although frequent emergency care users (FECU) represent only a small proportion of the overall population seen in an ED, they account for a disproportionately large number of visits. Currently, logistic regression models have been used for analysing frequent users in emergency departments. However, machine learning models that can incorporate large amounts of both clinical and non-clinical data have the potential to help identify FECU individuals. Such models have already been used, for example, in predicting the need for hospitalisation at the time of triage for children with an asthma exacerbation. Nevertheless, no studies have explored the use of machine learning models – and how these compare with logistic regression – to predict frequent emergency care use in adults with chronic conditions.

In the present study, the Canadian team retrospectively examined the performance of four machine learning models, in comparison to a logistic regression model, for the prediction of frequent ED use in adults with a range of chronic conditions. They identified two cohorts: those with at least three visits per year and those with at least five. The models used a number of predictor variables, including age, gender and residential area, and focused on chronic diseases such as coronary artery disease, mental disorders, epilepsy and chronic non-cancer pain. The models were used to predict frequent ED use as a binary outcome, i.e. frequent user or not, and the outcomes were compared in terms of the area under the curve (AUC), sensitivity and specificity.
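
The comparison set-up can be sketched on synthetic data: fit a logistic regression and a tree-based model to a binary 'frequent user' label, then report AUC, sensitivity and specificity. All variables and data below are invented; this is not the study's code.

```python
# Sketch of the logistic regression vs machine learning comparison on
# synthetic data. "prior_visits" mimics the study's strongest predictor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
prior_visits = rng.poisson(1.5, n)   # visits in the previous year (synthetic)
age = rng.uniform(18, 90, n)
X = np.column_stack([prior_visits, age])
# Binary outcome: frequent user or not, driven mainly by prior visits
y = (prior_visits + rng.normal(scale=1.0, size=n) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, (prob > 0.5).astype(int)).ravel()
    print(type(model).__name__,
          f"AUC={roc_auc_score(y_te, prob):.2f}",
          f"sens={tp / (tp + fn):.2f}",
          f"spec={tn / (tn + fp):.2f}")
```

With a single dominant predictor like this, both models land on near-identical AUCs, which mirrors the study's central finding.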

Frequent emergency care user model predictions

The analysis identified 451,775 ED users, of whom 9.5% had at least three visits per year and 3% had at least five.

The AUC for the logistic regression model for frequent users with 3 visits/year was 0.748, giving a sensitivity of 60% and a specificity of 78%. Two of the machine learning models gave a similar AUC (0.749 and 0.744) whereas the random forest model was much worse (AUC = 0.538). For prediction of frequent users with 5 visits/year, the model performance was broadly similar, i.e., machine learning-based models were no better.

Overall, the authors commented on how none of the machine learning models outperformed the logistic regression model and the most important predictor variable was the number of visits in the previous year. The authors did feel that access to more variables could have helped in refining the predictive accuracy of the machine learning models. Nevertheless, they emphasised the need for future work to consider complex non-linear interactions, since in such cases, machine learning models were likely to be superior to existing ones.

Citation
Chiu YM et al. Machine learning to improve frequent emergency department use prediction: a retrospective cohort study. Sci Rep 2023.

Predictive model identifies factors linked to metformin failure in type 2 diabetics

23rd January 2023

A machine learning model using electronic health record data revealed factors associated with metformin treatment failure in type 2 diabetics

A gradient boosting (XGBoost) model, which made use of both clinical and demographic factors, was able to predict the most important factors associated with metformin failure in type 2 diabetics, according to work by a team of US researchers.

It has been estimated that globally, some 415 million people are currently living with diabetes, and the World Health Organization suggests that more than 95% of those with diabetes have type 2 disease. One of the most widely used anti-diabetic agents is metformin, and the drug is suggested as a first-line treatment, either alone or in combination, for those with type 2 disease. Nevertheless, some evidence highlights that monotherapy with metformin is associated with treatment failure. In one study, for instance, the proportion of patients able to achieve an HbA1c of below 7% in the first year ranged from 19% to 86% of those started on metformin. Understanding the factors linked to a response to metformin can therefore help to personalise medicine and allow for an early adjustment of therapy. However, determining which specific factors are relevant to metformin treatment failure from an examination of a patient’s electronic health record (EHR) is challenging.

In an attempt to identify relevant predictors held within the patient’s EHR, the US researchers made use of a machine learning model and turned to a patient cohort with at least one abnormal diabetic result (e.g. elevated fasting glucose or HbA1c) that led to the initiation of metformin treatment. For the purposes of the study, the team defined treatment failure as either an inability to achieve a target HbA1c of < 7% within 18 months of initiation or the addition of other pharmacological agents during the same time frame. Many EHR factors were assimilated into the model, including demographics (age, gender, ethnicity), lifestyle factors (smoking status) and body mass index, as well as laboratory findings such as lipid profiles, blood pressure and liver function tests.
The predictive value of the model was assessed using the C-index and individual predictors using Shapley Additive Explanations (SHAP), for which higher values indicated a more important contribution to the model.
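
For readers unfamiliar with the C-index, it is the fraction of comparable patient pairs in which the model ranks the patient who failed earlier as higher risk. A minimal, illustrative implementation (not the study's code; data are invented):

```python
# Minimal Harrell's concordance index (C-index) for right-censored outcomes.
import numpy as np

def c_index(time, event, risk):
    """Fraction of comparable pairs where higher risk precedes earlier failure.

    A pair (i, j) is comparable when patient i's observed event time is
    earlier than patient j's follow-up time. Ties in risk count as half.
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:            # censored patients cannot anchor a pair
            continue
        for j in range(len(time)):
            if time[i] < time[j]:   # i failed before j was last seen
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Risks perfectly ordered against survival time give C = 1.0:
print(c_index([2, 5, 8], [1, 1, 1], [0.9, 0.5, 0.1]))  # 1.0
```

A C-index of 0.5 corresponds to random ranking, so the study's 0.73 indicates moderately strong discrimination.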

Model predictors and metformin failure

The study included 22,047 patients with a mean age of 57 years (48% female) who were started on metformin. Using the target of an HbA1c of below 7%, the overall metformin failure rate was 33% and the median time to failure was 3.9 months.

When the XGBoost model included baseline values of HbA1c, age, gender and ethnicity, it had a high discrimination performance for the risk of metformin failure (C-index = 0.73, 95% CI 0.72 – 0.74). A total of 15 different influential factors were identified that impacted on metformin failure, the most important of which was the baseline HbA1c value (SHAP value = 0.76). In contrast, factors such as age (SHAP = 0.016) or body mass index (SHAP = 0.041) were less important. Nonetheless, incorporation of each of these 15 factors did improve the model’s performance (C-index = 0.745, 95% CI 0.73 – 0.75, p < 0.0001).

The authors concluded that although baseline HbA1c was the most important factor in metformin failure, adding other important and readily available variables to the model improved its performance. They suggested that it was therefore possible to easily identify patients most at risk of metformin failure who would benefit from closer monitoring and earlier treatment modification.

Citation
Bielinski SJ et al. Predictors of Metformin Failure: Repurposing Electronic Health Record Data to Identify High-Risk Patients. J Clin Endocrinol Metab 2023.

Machine learning diffusion tensor imaging models diagnose sleep apnoea

24th October 2022

Machine learning diffusion tensor imaging models have the potential to screen for brain changes associated with the presence of sleep apnoea.

Two machine learning diffusion tensor imaging models were able to successfully distinguish between healthy controls and patients with obstructive sleep apnoea, according to a study by US researchers from California.

Obstructive sleep apnoea (OSA) is an extremely common condition with a 2019 study estimating that globally, 936 million adults aged 30-69 years have mild to severe disease.

OSA leads to a significant impairment in quality of life related to physical functioning, as well as causing daytime sleepiness, decreased learning skills and neuro-cognitive deficits that include impaired episodic memory, executive function, attention and visuospatial cognitive functions.

The gold standard diagnostic test for OSA is overnight polysomnography which requires a dedicated sleep laboratory and trained staff to interpret the results. In contrast, magnetic resonance imaging (MRI) and, in particular, diffusion tensor magnetic resonance imaging, has already been used to reveal how global brain mean diffusivity values are significantly reduced in OSA compared with controls.

The increased use of machine learning models with various imaging modalities, led the US researchers to wonder if a machine learning diffusion tensor imaging model might be able to detect the brain changes associated with OSA. After all, this approach had already been successfully used to identify major depressive disorder.

For the present study, the team focused on two types of machine learning models: a support vector machine (SVM) and a random forest (RF), to assess mean diffusivity maps from brain MRI scans. Both models were trained and compared for their ability to accurately identify OSA and cross-validated within the training dataset.
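
The train-and-cross-validate comparison can be sketched with scikit-learn. The 'mean diffusivity' features below are random stand-ins, not real imaging data, and the model settings are illustrative defaults rather than the study's configuration.

```python
# Sketch: SVM vs random forest, cross-validated on synthetic stand-ins
# for regional mean-diffusivity features. All data are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 120
# 20 synthetic "regional mean diffusivity" features; the OSA class (y = 1)
# shifts every regional mean slightly, mimicking a diffuse group difference
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 20)) + y[:, None] * 0.5

for model in (SVC(), RandomForestClassifier(random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, round(scores.mean(), 2))
```

As in the study, two quite different classifiers tend to reach similar cross-validated accuracy when the signal is spread diffusely across many features.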

Machine learning diffusion model’s accuracy for detecting OSA

The researchers recruited 59 treatment-naive patients (mean age 50.2 years, 61% male) who had recently been diagnosed with at least moderately severe OSA through overnight polysomnography. In addition, 18 OSA and 29 control patients who were not included in the training set were used to assess the predictive accuracy of the models.

The cross-validation process showed that the accuracy of the SVM model was 0.73 whereas the RF model was 0.77, i.e. both models showed similar fitting accuracy for OSA and control data. Similarly, the area under the receiver-operator curve was 0.85 for the RF model and 0.84 for the SVM model.

The authors concluded that both the RF and SVM models were comparable for the diagnosis of OSA and suggested that either could be used as a screening tool for OSA in patients where diffusion tensor imaging data was available.

Citation
Pang B et al. Machine learning approach for obstructive sleep apnea screening using brain diffusion tensor imaging. J Sleep Res 2022.
