
Press Releases

Take a look at a selection of our recent media coverage:

Reinforcement learning AI model improves accuracy of skin cancer diagnoses

2nd August 2023

Using a reinforcement learning model that incorporates human preferences improves the diagnostic accuracy of artificial intelligence (AI) decision support systems for skin cancer, according to the findings of a recent study.

Published in the journal Nature Medicine, researchers from the Department of Dermatology at MedUni Vienna in Austria integrated human decision-making criteria in the form of ‘reward tables’ into the AI diagnostic system.

This reinforcement learning – a subset of machine learning – allows the system to learn through trial and error, based on both positive and negative feedback from its actions. In other words, it learns from its mistakes and is designed to mimic natural intelligence as closely as possible.

The dermatologist-generated reward tables incorporated the positive and negative consequences of clinical assessments into the decision-making process, from both the physician’s and the patient’s perspective. Consequently, an AI diagnosis was not only rated as right or wrong, but rewarded or penalised with a certain number of plus or minus points depending on the impact of the diagnosis or the resulting decisions.
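The idea of a reward table can be sketched as a simple lookup keyed by the true diagnosis and the AI's suggestion, where a missed malignancy costs far more than a false alarm. The labels and point values below are purely illustrative assumptions; the study's dermatologist-generated tables are not reproduced here.

```python
# Hypothetical reward table: (true diagnosis, AI suggestion) -> points.
# A missed melanoma is penalised far more heavily than a false alarm,
# reflecting the asymmetric harm of the two errors.
REWARD_TABLE = {
    ("melanoma", "melanoma"): +10,  # correct malignant call: high reward
    ("melanoma", "benign"):   -20,  # missed melanoma: heavily penalised
    ("benign",   "melanoma"): -5,   # false alarm: mildly penalised
    ("benign",   "benign"):   +5,   # correct benign call: modest reward
}

def reward(true_label: str, predicted_label: str) -> int:
    """Return the points used to reinforce or discourage a prediction."""
    return REWARD_TABLE[(true_label, predicted_label)]
```

During training, these points (rather than a plain right/wrong signal) would be fed back to the model, so that it learns to avoid the costliest mistakes.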

The researchers found that greater accuracy in AI diagnostic results was achieved by incorporating these human decision-making criteria, which were designed to balance the benefits and harms of various diagnostic errors, using melanoma and other skin cancers as an example.

Reinforcement learning and diagnostic accuracy

When compared against supervised learning, the reinforcement learning model improved the sensitivity for melanoma diagnosis from 61.4% to 79.5% and for basal cell carcinoma from 79.4% to 87.1%. AI overconfidence was also reduced while simultaneously maintaining accuracy.

In addition, reinforcement learning increased the rate of correct diagnoses made by dermatologists by 12.0% and improved the rate of optimal management decisions from 57.4% to 65.3%.

Commenting on the importance of the results, study lead Harald Kittler said: ‘In this way, the AI learned to take into account not only image-based features, but also the consequences of misdiagnosis in the assessment of benign and malignant skin manifestations.’

The improved performance of AI-based skin cancer diagnosis also occurs because reinforcement learning reduces the AI’s overconfidence in its own predictions, leading to more nuanced and human-compatible suggestions.

‘This, in turn, helps physicians make more accurate decisions tailored to individual patients in complex medical scenarios,’ Kittler added.

Review finds AI model diagnostic performance for hip fractures similar to expert clinicians

6th April 2023

A systematic review has found that an AI model provides diagnostic ability for hip fractures similar to that of expert radiologists

In a systematic review and meta-analysis, Canadian researchers found that the performance of an artificial intelligence (AI) model for the diagnosis of hip fractures was comparable with that of expert radiologists and surgeons.

Artificial intelligence (AI) models are being increasingly used for a range of healthcare applications, although the evidence for a beneficial effect on clinician diagnostic performance is sparse. In contrast, models based on deep learning algorithms offer some promise for diagnostic purposes, with findings to date suggesting that the diagnostic performance of such systems is equivalent to that of healthcare professionals. With hip fractures associated with substantial morbidity and mortality, how useful is an AI model (AIM) for the automatic identification and classification of hip fractures, and how does this compare with clinicians? These were the questions addressed by the researchers in the current study.

The team performed a systematic review of the literature and included studies that involved the development of machine learning models either for the diagnosis of hip fractures from hip or pelvic radiographs or to predict any postoperative patient outcome following hip fracture surgery. The team examined the diagnostic accuracy of an AIM in comparison to expert clinicians and compared the area under the curve (AUC) for postoperative outcome prediction, such as mortality, between traditional statistical models and the machine learning models.

AI model and hip fracture diagnosis

A total of 39 studies were included, of which 46.2% used an AIM to diagnose hip fractures on plain radiographs and 53.8% used an AIM to predict patient outcomes following hip fracture surgery. 

When compared with clinicians, the odds ratio for diagnostic error of the AI models was 0.79 (95% CI 0.48 – 1.31, p = 0.36) for hip fracture radiographs. In other words, although the analysis favoured an AIM, statistically, the models were no better than clinicians. In addition, the mean sensitivity for the model was 89.3%, the specificity 87.5% and the F1 score (which assesses the model’s accuracy) was 0.90 (range 0 to 1.0).
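The F1 score is the harmonic mean of precision and recall (recall being the same quantity as sensitivity). The review does not report precision, but a precision of roughly 0.91, assumed here for illustration, combined with the reported sensitivity of 89.3% reproduces an F1 near 0.90:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; ranges from 0 to 1."""
    return 2 * precision * recall / (precision + recall)

recall = 0.893     # reported mean sensitivity
precision = 0.907  # assumed for illustration; not reported in the review
round(f1_score(precision, recall), 2)  # ≈ 0.90
```

Because the harmonic mean is dragged down by whichever of the two values is smaller, a high F1 requires the model to be good at both avoiding missed fractures (recall) and avoiding false alarms (precision).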

For postoperative predictions such as mortality, the mean AUC was 0.84 with AI models and 0.79 for alternative controls, a difference that was not statistically significant (p = 0.09).
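The AUC reported here can be read as the probability that a randomly chosen patient who had the outcome is ranked above a randomly chosen patient who did not. A minimal sketch of that empirical definition, with made-up scores purely for illustration:

```python
def auc(scores_pos: list[float], scores_neg: list[float]) -> float:
    """Empirical AUC: fraction of (positive, negative) pairs where the
    positive case receives the higher risk score (ties count as half)."""
    pairs = [(p, n) for p in scores_pos for n in scores_neg]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why a difference between 0.84 and 0.79 is modest in practical terms.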

The authors concluded that while AI models are promising for the diagnosis of hip fractures, their performance was comparable with that of expert radiologists and surgeons, adding that AI outcome prediction appears to provide no substantial benefit over traditional multivariable predictive statistics.

Lex JR et al. Artificial Intelligence for Hip Fracture Detection and Outcome Prediction: A Systematic Review and Meta-analysis. JAMA Netw Open 2023

Daily steps AI model predicts unplanned hospitalisation during chemo-radiation

31st October 2022

A daily steps AI model was able to predict the likelihood that a patient may have an unplanned hospitalisation during chemo-radiation

Using a daily steps AI model, US researchers were able to predict unplanned hospitalisations for cancer patients undergoing chemo-radiation, according to the findings of a study presented at the recent American Society for Radiation Oncology (ASTRO) annual meeting.

Globally, cancer is a leading cause of death, and the World Health Organisation has estimated that there were nearly 10 million cancer deaths in 2020. While oncologists manage patients with cancer, such individuals may also develop health issues due to treatment-related side effects that prompt an emergency department (ED) visit. In fact, such unplanned visits are not uncommon: in one study of 402 participants, 20% experienced an ED visit and 18% experienced a hospital admission while receiving cancer treatment. The potential consequences of these visits include interruption of chemotherapy, which may impact cancer therapy outcomes. As a result, there is a need for interventions that identify patients at a higher risk of complications and thereby prevent unplanned hospital visits.

The current researchers previously developed a machine learning model which could predict emergency visits and hospitalisation during cancer therapy. In a further study, they also showed that a machine learning model accurately triaged patients undergoing radiotherapy and chemoradiation and was able to direct clinical management, reducing acute care rates in comparison to standard care. With the increased use of wearable devices that collect large amounts of health data, the researchers wondered if it would be possible to use such data, for example daily step count, to predict unplanned ED visits.

The team developed a daily steps AI model and set out to validate it before and during chemoradiation (CRT). They turned to data collected in three prospective trials in which patients were asked to wear commercial fitness trackers continuously before and during curative-intent CRT for multiple cancer types. The team collated a wealth of data including age, ECOG performance status, sex, diagnosis, radiotherapy plan metrics and daily step count. The model was trained both with and without step count-derived features and used to predict a first hospitalisation event within one week based on data from the preceding two weeks. The models were then evaluated in terms of the area under the receiver operating characteristic curve (AUC).

Daily steps AI model and prediction of hospitalisation

In total, 214 patients with a median age of 61 were included, and the most common diagnoses were head and neck cancer (30%) and lung cancer (29%). The model was trained using 70% of patients and validated in the remaining 30%.

When step count was included in the model, it had a strong predictive performance for hospitalisation the following week (AUC = 0.81, 95% CI 0.62 – 0.91). In fact, inclusion of step count significantly improved the predictive ability of the model compared to when this data was excluded (AUC = 0.57, 95% CI 0.40 – 0.74, p = 0.004). The top five contributing variables were the step counts from each of the past two days, the absolute difference in minimum step counts over the past two weeks, the relative decrease in the maximum step count over the past two weeks, and the relative decrease in the step count range over the past two weeks.
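The step-count-derived features described above can be sketched from a series of daily counts. The exact windowing and feature definitions used by the study are not published here, so the reconstruction below, splitting the last 14 days into a previous and a most recent week, is an assumption for illustration:

```python
# Plausible reconstruction of the study's step-count features, computed
# from a list of daily step counts (oldest first, at least 14 days).
def step_features(daily_steps: list[int]) -> dict[str, float]:
    week1 = daily_steps[-14:-7]  # previous week
    week2 = daily_steps[-7:]     # most recent week
    range1 = max(week1) - min(week1)
    range2 = max(week2) - min(week2)
    return {
        "steps_yesterday": daily_steps[-1],
        "steps_day_before": daily_steps[-2],
        # absolute difference in the weekly minimum step counts
        "min_diff_abs": abs(min(week2) - min(week1)),
        # relative decrease in the weekly maximum step count
        "max_decrease_rel": (max(week1) - max(week2)) / max(week1),
        # relative decrease in the weekly step-count range
        "range_decrease_rel": (range1 - range2) / range1 if range1 else 0.0,
    }
```

Features like these capture the trend highlighted by the authors: a patient whose counts are falling week on week produces large decrease values, flagging declining health status ahead of a hospitalisation.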

In an associated press release, lead author Dr Hong said that ‘The step counts immediately preceding the prediction window ended up being generally more predictive than clinical variables. The dynamic nature of the step counts, the fact that they’re changing every day, seems to make them a particularly good indicator of a patient’s health status.’

The authors concluded that based on these findings, they plan to clinically validate the model in a further study which will randomise patients undergoing CRT for lung cancer to treatment with or without daily step count monitoring.

Friesner I et al. Machine Learning-Based Prediction of Hospitalization Using Daily Step Counts for Patients Undergoing Chemoradiation. No 132. ASTRO annual meeting, 2022