Press Releases

Take a look at a selection of our recent media coverage:

Single low-dose CT scan helps predict future lung cancer risk

1st February 2023

A single low-dose computed tomography scan, combined with a deep learning model, enabled prediction of lung cancer risk over one to six years

US and Taiwanese researchers have shown that the use of a single low-dose computed tomography (CT) scan, together with a deep learning algorithm, allows for a prediction of an individual’s risk of lung cancer over the next six years.

The use of low-dose CT screening has been shown to reduce mortality from lung cancer. Such screening allows for the early detection of the disease and hence the potential for better patient outcomes, although it has been suggested that the current screening guidelines might overlook vulnerable populations with a disproportionate lung cancer burden. Nevertheless, the efficiency of lung cancer screening could be improved by individualising the assessment of future cancer risk. The problem is determining how this can be achieved. To date, there are some data to support the use of clinical risk assessment models that incorporate various factors, rather than relying simply on age and cumulative smoking exposure. However, greater use of artificial intelligence and deep learning models creates enormous possibilities. In fact, it has become possible to incorporate low-dose CT scan results, including the presence of pulmonary nodules, into a model and thereby optimise the screening process. But how useful is the other information gathered from a CT scan beyond the presence of nodules, and could it be used by a deep learning model to predict future cancer risk?

This was the aim of the current study, in which researchers developed a model, termed 'Sybil', that uses the entire volumetric low-dose CT data, without clinical or demographic information, to predict an individual's future cancer risk. Sybil was able to run in the background of a radiology reading station and did not require annotation by a radiologist. The model was validated using three independent screening datasets, which included individuals who were non-smokers.

Low-dose CT screening and lung cancer risk prediction

In total, data were retrieved from over 27,000 patients held in three separate databases. Sybil achieved an area under the curve (AUC) of 0.92, 0.86 and 0.94 for the 1-year prediction of lung cancer in each of these datasets. In addition, the concordance indices over six years were 0.75, 0.81 and 0.80 for the same three datasets.
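For readers unfamiliar with these metrics, the sketch below shows, in simplified form, how a 1-year AUC and a multi-year concordance index can be computed from per-patient risk scores. All data, variable names and the hand-rolled concordance function are illustrative assumptions and are not taken from the Sybil study.

```python
# Illustrative sketch (not the authors' code): computing a 1-year AUC and a
# simplified Harrell's concordance index for per-patient risk scores.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: one risk score per scan and the observed 1-year outcome.
risk_1yr = np.array([0.02, 0.65, 0.10, 0.80, 0.05])   # model's 1-year risk
cancer_1yr = np.array([0, 1, 0, 1, 0])                 # diagnosed within 1 year?
print("1-year AUC:", roc_auc_score(cancer_1yr, risk_1yr))

def concordance_index(time_to_event, risk, event_observed):
    """Fraction of comparable pairs in which the higher-risk individual
    experiences the event first (simplified Harrell's C)."""
    concordant, comparable = 0.0, 0
    n = len(time_to_event)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if i had the event before j's follow-up time.
            if event_observed[i] and time_to_event[i] < time_to_event[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable if comparable else float("nan")

# Hypothetical follow-up over six years (times in years, 1 = cancer diagnosed).
times = np.array([6.0, 0.8, 6.0, 2.5, 6.0])
events = np.array([0, 1, 0, 1, 0])
risk_6yr = np.array([0.05, 0.70, 0.12, 0.55, 0.08])
print("6-year concordance index:", concordance_index(times, risk_6yr, events))
```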

The authors concluded that Sybil was able to accurately predict an individual's future risk of lung cancer based on a single low-dose CT scan and called for further studies to better understand Sybil's clinical application.

Citation
Mikhael PG et al. Sybil: a validated deep learning model to predict future lung cancer risk from a single low-dose chest computed tomography. J Clin Oncol 2023.

Deep learning-based tool detects pancreatic cancers missed on abdominal CT scan

21st September 2022

A deep learning-based tool is able to detect pancreatic cancer tumours smaller than 2 cm, which are often missed on an abdominal CT scan

A deep learning-based tool has been shown to accurately detect pancreatic cancers that are smaller than 2 cm and which can often be missed on an abdominal CT scan, according to the findings of a retrospective study by Taiwanese researchers.

Pancreatic cancer has a poor prognosis and is the 12th most common cancer worldwide; in 2020 there were more than 495,000 new cases and an estimated 466,003 global deaths. However, 5-year survival is poor, and data for the UK suggest that only 7.3% of people diagnosed with the cancer in England survive for five years or more.

The clinical diagnosis of pancreatic cancer is challenging as patients often present with non-specific symptoms with nearly a third of patients clinically misdiagnosed. Imaging has a crucial role to play in diagnosis though one retrospective analysis of different imaging modalities revealed that 62% of cases were missed and 46% misinterpreted, with 42% of cases missed because the tumour was less than 2 cm.

Previous research using a deep learning-based convolutional neural network showed that the technology could accurately distinguish pancreatic cancer on computed tomography (CT), with acceptable generalisability to images of patients from various races and ethnicities.

However, in that study, segmentation of the pancreas (i.e. identifying the region on a CT scan that actually corresponds to the pancreas) was performed manually by radiologists. But would it be possible for a deep learning-based tool to perform the segmentation itself and detect the presence of pancreatic cancer?

This was the question addressed in the current study by the Taiwanese team. They used contrast-enhanced CT scans collected from patients who had been diagnosed with pancreatic cancer and compared these with CT scans from non-cancer control patients.

The deep learning-based tool was initially trained and validated on samples with and without cancer and then tested on a real-world set of CT scans, with its performance assessed based on sensitivity, specificity and accuracy.
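As an illustration of how those three measures are derived, the following minimal sketch computes sensitivity, specificity and accuracy from binary predictions against a reference standard; the labels are invented for demonstration and are not the study data.

```python
# Minimal sketch (assumed labels, not the study data): deriving sensitivity,
# specificity and accuracy from binary predictions against a reference standard.
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # 1 = pancreatic cancer present
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1])  # tool's prediction per scan

tp = int(np.sum((y_pred == 1) & (y_true == 1)))  # cancers correctly flagged
tn = int(np.sum((y_pred == 0) & (y_true == 0)))  # controls correctly cleared
fp = int(np.sum((y_pred == 1) & (y_true == 0)))  # controls incorrectly flagged
fn = int(np.sum((y_pred == 0) & (y_true == 1)))  # cancers missed

sensitivity = tp / (tp + fn)                 # proportion of cancers detected
specificity = tn / (tn + fp)                 # proportion of controls ruled out
accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall proportion correct

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} accuracy={accuracy:.3f}")
```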

Deep learning tool and prediction of small pancreatic tumours

A total of 546 patients with a mean age of 65 years (46% female) who had pancreatic cancer with a mean tumour size of 2.9 cm and 733 control patients were used in the training, validation and test set.

In a nationwide test set that included 669 cancer patients and 804 controls, the deep learning-based tool distinguished between malignant and control CT scans with a sensitivity of 89.7% (95% CI 87.1 – 91.9), a specificity of 92.8% (95% CI 90.8 – 94.5) and an accuracy of 91.4%.

When comparing the tool with radiologists in the local test set (109 cancer and 147 control patients), the corresponding sensitivities were 90.2% for the tool and 96.1% for the radiologists, a difference that was not significant (p = 0.11).

The tool had a sensitivity of 87.5% (95% CI 67.6 – 97.3) for a malignancy which was smaller than 2 cm in the local test set although this was slightly lower (74.7%) in the nationwide test set.

The authors concluded that their tool may be of value as a supplement to radiologists to enhance the detection of pancreatic cancer, although further work was needed to examine the generalisability of the findings to other populations.

Citation
Chen PT et al. Pancreatic Cancer Detection on CT Scans with Deep Learning: A Nationwide Population-based Study. Radiology 2022.

Fracture detection rates comparable between AI and clinicians

6th April 2022

According to a meta-analysis, the fracture detection performance of artificial intelligence systems and clinicians is broadly equivalent

Fracture detection rates are comparable between artificial intelligence (AI) systems and clinicians, according to the findings of a meta-analysis by researchers from the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Oxford, UK.

Fractures represent a common reason for admission to hospital around the world, although research suggests that, fortunately, fracture rates have stabilised. For example, one 2019 UK-based study observed that between 2004 and 2014 the risk of admission for a fracture was 47.8 per 10,000 population and that the rate of fracture admissions remained stable.

Unfortunately, however, fractures are not always detected on first presentation as witnessed by a two-year study in which 1% of all visits resulted in an error in fracture diagnosis and 3.1% of all fractures were not diagnosed at the initial visit.

One solution to improve the diagnostic accuracy of fracture detection is the use of artificial intelligence systems, in particular machine learning, which enables algorithms to learn from data. Related to machine learning is deep learning, a more sophisticated approach that uses complex, multi-layered "deep neural networks".

Deep learning systems hold great potential for the detection of fractures and in a 2020 review, the authors concluded that deep learning was reliable in fracture diagnosis and had a high diagnostic accuracy.

For the present meta-analysis, the Oxford team further assessed and compared the diagnostic performance of AI and clinicians on both radiographs and computed tomography (CT) images in fracture detection.

The team searched for studies that developed and/or validated a deep learning algorithm for fracture detection and assessed AI vs clinician performance during both internal and external validation. The team analysed receiver operating characteristic curves to determine both sensitivity and specificity.
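To give a flavour of what a 'pooled' sensitivity means in this context, the sketch below combines per-study sensitivities using a simple inverse-variance fixed-effect model on the logit scale. This is a deliberate simplification (diagnostic-accuracy meta-analyses such as this one typically fit bivariate random-effects models), and the study counts shown are hypothetical.

```python
# Simplified, illustrative pooling of per-study sensitivities on the logit
# scale. Hypothetical counts; not the method or data of the published review.
import numpy as np

# (true positives, false negatives) for a few hypothetical studies
studies = [(90, 8), (45, 6), (120, 10), (30, 2)]

logits, weights = [], []
for tp, fn in studies:
    # 0.5 continuity correction keeps the logit finite for extreme counts
    p = (tp + 0.5) / (tp + fn + 1.0)
    var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)   # variance of the logit
    logits.append(np.log(p / (1 - p)))
    weights.append(1.0 / var)                    # inverse-variance weight

pooled_logit = np.average(logits, weights=weights)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
print(f"pooled sensitivity = {pooled_sens:.3f}")
```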

Fracture detection rates of AI and clinicians

A total of 42 studies with a median of 1,169 participants were included: 37 assessed fractures detected on radiographs and five on CT. Sixteen studies compared the performance of AI against expert clinicians, seven against both experts and non-experts, and one compared AI against non-experts only.

When evaluating AI and clinician performance in studies of internal validation, the pooled sensitivity was 92% (95% CI 88 – 94%) for AI and 91% (95% CI 85 – 95%) for clinicians. The pooled specificity values were also broadly similar, at 91% for AI and 92% for clinicians.

For studies looking at external validation, the pooled sensitivity for AI was 91% (95% CI 84 – 95%) and 94% (95% CI 90 – 96%) for clinicians on matched sets. The specificity was slightly lower for AI compared to clinicians (91% vs 94%).

The authors concluded that AI and clinicians had comparable reported diagnostic performance in fracture detection and suggested that AI technology has promise as a diagnostic adjunct in future clinical practice.

Citation
Kuo RYL et al. Artificial Intelligence in Fracture Detection: A Systematic Review and Meta-Analysis. Radiology 2022.

Convolutional neural network diagnosis of ICH equivalent to radiologists

13th December 2021

Convolutional neural network performance appears to be comparable to that of radiologists for the diagnosis of intracranial haemorrhage (ICH)

The performance of convolutional neural networks (CNNs) in diagnosing patients with an intracranial haemorrhage (ICH) appears to be comparable to that of radiologists. This was the conclusion of a study by a team from the Faculty of Health and Medical Sciences, Copenhagen University, Denmark.

An ICH is usually caused by rupture of small penetrating arteries secondary to hypertensive changes or other vascular abnormalities and overall accounts for 10 – 20% of all strokes. However, this proportion varies across the world so that in Asian countries, an ICH is responsible for between 18 and 24% of strokes but only 8 – 15% in Westernised countries. An acute presentation of ICH can be difficult to distinguish from ischaemic stroke and non-contrast computerised tomography (CT) is the most rapid and readily available tool for the diagnosis of ICH.

As in many areas of medicine, artificial intelligence systems are becoming increasingly used, and one such system is the convolutional neural network (CNN), a deep learning algorithm that takes an input image, assigns importance to various aspects or objects within the image and differentiates one from another. In fact, a 2019 systematic review of deep learning systems found the 'diagnostic performance of deep learning models to be equivalent to that of health-care professionals'. Nevertheless, the authors added the caveat that 'few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample.'
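For illustration only, the following toy PyTorch model shows the general shape of a CNN for binary classification of a single CT slice; it is not one of the models included in the review, and the layer sizes are arbitrary assumptions.

```python
# A minimal, illustrative convolutional neural network for binary
# classification of one CT slice (e.g. haemorrhage vs no haemorrhage).
# Toy sketch of the general idea only; not any of the reviewed models.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 512 -> 256
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 256 -> 128
            nn.AdaptiveAvgPool2d(1),                     # global average pooling
        )
        self.classifier = nn.Linear(32, 1)               # single logit output

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinyCNN()
slice_batch = torch.randn(4, 1, 512, 512)     # 4 single-channel CT slices
probs = torch.sigmoid(model(slice_batch))     # probability of haemorrhage
print(probs.shape)                            # torch.Size([4, 1])
```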

In the present study, the Danish team undertook a systematic review and meta-analysis to appraise the evidence for CNNs in the per-patient diagnosis of ICH. They performed a literature review, and studies deemed suitable for inclusion were those in which: patients had undergone non-contrast computed tomography of the cerebrum for the detection of an ICH; radiologists or a clinical report served as the reference standard; and a CNN algorithm was deployed for the detection of ICH. For the purposes of their analysis, the minimum acceptable reference standard was defined as manual, semi-automated or automated image labelling taken from radiology reports or electronic health records. The researchers calculated the pooled sensitivity, the pooled specificity and the area under the summary receiver operating characteristic (SROC) curve.

Findings

A total of six studies with 380,382 scans were included in the final analysis. When comparing the CNN performance to the reference standard, the pooled sensitivity was 96% (95% CI 93 – 97%), the pooled specificity was 97% (95% CI 90 – 99%) and the area under the SROC curve was 98% (95% CI 97 – 99%). When combining both retrospective and external validation studies, CNN performance was slightly worse, with a pooled sensitivity of 95%, a specificity of 96% and an area under the SROC curve of 98%.

They concluded that CNN algorithms accurately detect ICHs based on an analysis of both retrospective and external validation studies and that this approach seems promising, but highlighted the need for more studies using external validation test sets with uniform methods to define a more robust reference standard.

Citation

Jorgensen MD et al. Convolutional neural network performance compared to radiologists in detecting intracranial hemorrhage from brain computed tomography: A systematic review and meta-analysis. Eur J Radiol 2021

Deep learning breast MRI distinguishes between benign and malignant cases in women with dense breasts

15th October 2021

A deep learning breast MRI system distinguished normal and benign cases in women with dense breasts and might be a useful future triage tool.

The risk of breast cancer is increased among women with dense breasts, and mammography can often miss cases in these women. In a 2019 trial, it was found that the use of supplementary breast MRI screening in women with dense breasts led to the diagnosis of significantly fewer interval cancers than mammography alone. However, screening programmes involve a huge number of women, and many breast MRI scans of women with dense breasts show normal anatomical and physiological variation and therefore may not require radiological review.

A team from the Department of Radiology, University of Utrecht, therefore wondered whether it was feasible to use an automated deep learning (DL) system for breast MRI screening to triage out normal scans without cancer and so reduce the workload of radiologists. The team undertook a secondary analysis of data obtained from the prospective Dense Tissue and Early Breast Neoplasm Screening (DENSE) trial. The DL system was trained on the left and right breasts separately and the results combined so that it was able to differentiate between breasts with and without lesions. The performance of the DL system was assessed using receiver operating characteristic (ROC) curves.
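The sketch below illustrates one plausible way to combine per-breast scores into a per-examination score and assess it with a ROC curve; the max-of-both-breasts rule, the threshold and all numbers are assumptions for illustration and are not the DENSE trial implementation.

```python
# Hedged sketch: combining per-breast lesion scores into a per-examination
# score for triage and evaluating it with a ROC curve. All values and the
# combination rule are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical model outputs: lesion probability for left and right breast.
left_scores = np.array([0.05, 0.70, 0.10, 0.40])
right_scores = np.array([0.08, 0.20, 0.90, 0.15])
exam_has_lesion = np.array([0, 1, 1, 0])   # reference standard per examination

# An examination is flagged if either breast looks suspicious.
exam_scores = np.maximum(left_scores, right_scores)
print("AUC:", roc_auc_score(exam_has_lesion, exam_scores))

# Triage rule: dismiss examinations below a threshold, refer the rest.
threshold = 0.3
referred = exam_scores >= threshold
print("dismissed without radiologist review:", int(np.sum(~referred)))
```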

Findings

A total of 4581 breast MRI examinations of extremely dense breasts from 4581 women with a mean age of 54.3 years were included in the analysis. Of these 9162 breasts, 838 had at least one lesion, of which 77 were malignant. The area under the ROC curve for differentiating between a normal breast MRI and an examination with lesions was 0.83 (95% CI 0.80 – 0.85). The DL system classified 90.7% (95% CI 86.7 – 94.7) of the MRI examinations with lesions as non-normal, meaning they would be triaged to radiologist review. In addition, the DL system dismissed 39.7% of the MRI examinations without lesions but did not miss any cases of malignant disease.

Commenting on their findings, the authors recognised a limitation in that their results were from the first round of the DENSE trial and that the number of lesions detected in subsequent screening rounds was smaller. Thus, they planned to further validate the performance of the model on data from subsequent rounds. The authors also suggested that future trials need to focus on demonstrating that the DL system is at least as effective as an expert radiologist at dismissing normal MRI examinations.

Citation

Verburg E et al. Deep Learning for Automated Triaging of 4581 Breast MRI Examinations from the DENSE Trial. Radiology 2021
