
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

The transformative use of AI in radiotherapy and beyond: insights from Professor Raj Jena

24th February 2025

As the UK’s first clinical professor of AI in radiotherapy, Professor Raj Jena talks to Helen Quinn about the impact of artificial intelligence and deep learning tools on radiotherapy and patient care, the opportunities and challenges he’s hoping to tackle in his new role, and the broader potential of AI in healthcare.

Put simply, artificial intelligence (AI) has the potential to transform healthcare. Effective use of emerging machine learning techniques can improve patient care, complement clinicians’ work and address a range of challenges. If used well and in the proper context, AI can enhance diagnostic processes, personalise treatment plans and efficiently manage healthcare data, all while freeing up clinicians’ time to focus on the direct human aspect of healthcare.

Within this rapidly evolving technology landscape, the University of Cambridge has appointed the UK’s first clinical professor of AI in radiotherapy, signalling a need for, and a commitment to, utilising AI in the fight against cancer. Taking up this novel role is Professor Raj Jena, who is also a research scientist and consultant oncologist at Cambridge University Hospitals NHS Foundation Trust.

Professor Jena specialises in using advanced imaging techniques to improve outcomes for patients with central nervous system tumours. Through his research, he has helped to develop an AI tool called Osairis, which can enhance and accelerate tumour analysis.

Machine learning for radiotherapy is now routinely used throughout Cambridge University Hospitals NHS Foundation Trust. It has reduced the waiting time for patients between referral and commencing curative radiotherapy treatment, which can, in turn, improve survival rates in some patients.

Aligning AI research and clinical practice

The new AI clinical professorship reflects the challenge of balancing clinical practice and domain expertise in radiotherapy with maintaining and leading an academic group delivering high-quality research, says Professor Jena.

‘We’re trying to link the latest and greatest thinking in data science, machine learning and AI to what we do in the clinic,’ he explains. ‘Most people think an oncology consultant who’s active in research would either be working in a wet lab or work in the area of clinical trials. So, it’s quite nice to identify the fact that there is another way an academic oncologist can contribute to research.’

In fact, over the past 20 years, Professor Jena has concentrated on using mathematics and computation to analyse medical images – something he says has been recognised in the new clinical professor of AI in radiotherapy role. ‘I’ve been interested in computational approaches for years, but nowadays it’s reached the mainstream, and it’s called AI. It’s great because we can ride the wave of interest in AI,’ he says.

Using AI in radiotherapy

The use of AI in medical imaging involves applying a deep learning model to perform clinically valuable tasks. This is particularly applicable to analysing radiotherapy images, making it a highly effective technique in this field.

‘If you look at clinically useful applications of AI across the whole of medicine, the reality is that we’re still at the start of that story. But in radiation therapy, we happen to have a problem that lends itself to a solution in deep learning,’ Professor Jena explains. ‘We’ve gone quite quickly, from these approaches being just research to actually being plumbed into the clinic and helping patients get started on potentially life-saving radiotherapy more quickly.’

The development of the Osairis tool stemmed from a chance meeting between Professor Jena and Dr Antonio Criminisi, a machine learning engineer and the head of Microsoft’s AI research programme in the UK.

Dr Criminisi taught computers to analyse the movement of the human body from the outside, recognising specific positions so a person’s body could be used effectively as a controller, for example, in sports-related video games. Professor Jena was curious whether this approach could be applied inside the body, too, and invited Dr Criminisi to his hospital department to observe radiation oncologists marking up scans of patients waiting to start radiotherapy treatment.

The outcome was the development of an open-source deep learning tool for automatic segmentation of radiotherapy images and the first AI technology to be developed and deployed within the NHS.

‘It was a very prescient point, we could then take the tooling and actually build our own machine learning models from our own patients’ data, test them out, and then for the first time, within the hospital, build a medical device,’ Professor Jena explains.

Cambridge University Hospitals Trust invested in cloud computing across its sites, allowing Professor Jena’s team to implement deep learning tools throughout the Trust. Now, when a patient with a head, neck or brain tumour comes for a scan, the scan data is anonymised, encrypted and sent off for analysis using the AI technology. It has been found to accelerate clinicians’ radiotherapy planning by approximately two and a half times.

‘What the algorithm does is to mark out every healthy structure we need to be aware of when planning radiotherapy treatment. And that means that the oncologist can be much faster in creating a safe radiotherapy plan,’ Professor Jena says. ‘Something that used to take maybe an hour and 40 minutes can be done in half an hour, so you can see patients faster and free up clinicians and patients get started on radiotherapy more quickly, too.’

The challenges

Despite the myriad ways in which AI can support healthcare systems, challenges remain. Many machine learning models are built based on available data rather than in response to a particular patient need, and the data required to build specific models can be difficult to obtain.

Professor Jena says turning available data into necessary data requires considerable effort. He hopes his new professorship, which straddles both the research and clinical environments, will help him achieve this as he builds AI tooling to address specific patient needs and avoid bias in the system. 

AI technology is also moving rapidly, and the journey has sometimes involved missteps, including breaches of data usage and sharing of data with industry. Professor Jena warns that robust governance needs to be in place to prevent further issues, particularly as AI models begin incorporating more sensitive data, such as genomic information.

‘I think we have to take those things and learn how to do it better. The biggest thing we can do is make examples where we do this right, that are highly shareable and highly applicable,’ Professor Jena says.

Enhancing future healthcare

Radiotherapy is an exemplar of the successful use of AI in healthcare, but Professor Jena hopes there will be a cross-fertilisation of technology, enabling AI to evolve and excel at interpreting non-image data as well. He believes AI can ‘make real inroads’ in diagnostics for the early detection of cancer. Early work suggests it could be used in tests that look for cancer in urine, blood or even exhaled breath, for example.

The tools could also play a role in personalising treatments for cancer patients since AI can look for patterns and simplify very high-dimensional data. In complicated cancers such as brain tumours, where several medications might be marginally effective, an AI model could examine that information, align it with changes in the patient’s tumour and suggest a personalised medication plan.

‘I think this is where we really want to push,’ says Professor Jena. ‘Personalised medicine is very interesting to us because we now get so much information when a cancer is diagnosed, including genomics, which can highlight mutations and indicate a patient may benefit from some kind of targeted drug. I think the paradigm changes around AI in medicine will come within the areas of precision medicine or drug discovery.’

Ultimately, Professor Jena says that AI will complement and enhance much of what clinicians already do, freeing them from time-consuming, data-heavy tasks.

‘As you build these workflow acceleration tools, all staff will move towards a situation where they’re spending more time either listening to patients directly or making decisions. I think that will make a huge difference,’ he says. ‘As well as awaiting the paradigm shift in AI, I’m a great believer in bringing together multiple AI tools where each one saves time or increases safety. Adding up all of these small increments can still make a huge impact on the delivery of human-centric care in the clinic.’

Single low-dose CT scan helps predict future lung cancer risk

1st February 2023

Using a single low-dose computed tomography scan and a deep learning model enabled predictions of lung cancer risk over one to six years.

US and Taiwanese researchers have shown that the use of a single low-dose computed tomography (CT) scan, together with a deep learning algorithm, allows for a prediction of an individual’s risk of lung cancer over the next six years.

The use of low-dose CT screening has been shown to reduce mortality from lung cancer. Such screening allows for the early detection of the disease and hence the potential for better patient outcomes, although it has been suggested that the current screening guidelines might overlook vulnerable populations with a disproportionate lung cancer burden.

Nevertheless, the efficiency of lung cancer screening could be improved by individualising the assessment of future cancer risk. The problem is determining how this can be achieved. To date, there are some data to support the use of clinical risk assessment models that incorporate various factors compared to simply using age and cumulative smoking exposure.

However, there are enormous possibilities created by greater use of artificial intelligence and deep learning models. In fact, it has become possible to incorporate low-dose CT scan results and the presence of pulmonary nodules into a model and thereby optimise the screening process. But how useful are other pieces of information gathered from a CT scan beyond the presence of nodules, and could this other information be used by a deep learning model to predict future cancer risk?

This was the aim of the current study, in which researchers developed a model, which they termed ‘Sybil’, using the entire volumetric low-dose CT data, without clinical and demographic information, to predict an individual’s future cancer risk.

Sybil was able to run in the background of a radiology reading station and did not require annotation by a radiologist. The model was validated using information from three independent screening datasets which included individuals who were non-smokers.

Low-dose CT screening and lung cancer risk prediction

In total, data were retrieved from over 27,000 patients held in three separate databases. Sybil achieved an area under the curve (AUC) of 0.92, 0.86 and 0.94 for the one-year prediction of lung cancer in each of these datasets. In addition, the concordance indices over six years were 0.75, 0.81 and 0.80 for the same three datasets.

The authors concluded that Sybil was able to accurately predict an individual’s future risk of lung cancer based on a single low-dose CT scan and called for further studies to better understand Sybil’s clinical application.
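For readers unfamiliar with the metric, the AUC has a simple ranking interpretation: an AUC of 0.92 means that, given a random pair of one future cancer case and one cancer-free individual, the model assigns the higher risk score to the cancer case 92% of the time. A minimal illustrative sketch in Python (the risk scores below are invented for illustration, not data from the Sybil study):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) score pairs the model ranks
    correctly, counting ties as half-correct."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Invented risk scores: four future cancer cases, four cancer-free individuals
cancer = [0.9, 0.8, 0.75, 0.4]
no_cancer = [0.7, 0.3, 0.2, 0.1]
print(auc(cancer, no_cancer))  # 15 of 16 pairs ranked correctly -> 0.9375
```

This pairwise-ranking formulation is mathematically equivalent to computing the area under the ROC curve directly.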

Citation
Mikhael PG et al. Sybil: a validated deep learning model to predict future lung cancer risk from a single low-dose chest computed tomography. J Clin Oncol 2023.

Deep learning-based tool detects pancreatic cancers missed on abdominal CT scan

21st September 2022

A deep learning-based tool is able to detect pancreatic tumours smaller than 2 cm, which are often missed during an abdominal CT scan.

A deep learning-based tool has been shown to accurately detect pancreatic cancers that are smaller than 2 cm and can often be missed during an abdominal CT scan, according to the findings of a retrospective study by Taiwanese researchers.

Pancreatic cancer has a poor prognosis and is the 12th most common cancer worldwide: in 2020 there were more than 495,000 new cases and an estimated 466,003 global deaths. However, five-year survival is poor, and data for the UK suggest that only 7.3% of people diagnosed with the cancer in England survive for five years or more.

The clinical diagnosis of pancreatic cancer is challenging as patients often present with non-specific symptoms with nearly a third of patients clinically misdiagnosed. Imaging has a crucial role to play in diagnosis though one retrospective analysis of different imaging modalities revealed that 62% of cases were missed and 46% misinterpreted, with 42% of cases missed because the tumour was less than 2 cm.

Previous research using a deep learning-based convolutional neural network, showed that the technology could accurately distinguish pancreatic cancer on computed tomography (CT) with acceptable generalisability to images of patients from various races and ethnicities.

However, in that study, segmentation of the pancreas, i.e. identifying the region on a CT scan that is actually the pancreas, was performed manually by radiologists. But would it be possible for a deep learning-based tool to perform this segmentation and detect the presence of pancreatic cancer?

This was the question addressed in the current study by the Taiwanese team. They used contrast-enhanced CT collected from patients who had been diagnosed with pancreatic cancer and compared these with CT scans of non-cancer, control patients.

The deep learning-based tool was initially trained and validated on samples with and without cancer and then tested in a real-world set of CT scans, with its performance assessed based on sensitivity, specificity and accuracy.
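Sensitivity, specificity and accuracy are all simple ratios derived from the confusion matrix of a test set: the counts of true and false positives and negatives. A minimal sketch in Python, using invented counts rather than figures from the study:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard diagnostic test metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # proportion of cancer cases correctly flagged
    specificity = tn / (tn + fp)   # proportion of controls correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Invented counts for illustration only
sens, spec, acc = diagnostic_metrics(tp=90, fn=10, tn=93, fp=7)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```

Note that accuracy depends on the mix of cancer and control cases in the test set, which is why sensitivity and specificity are usually reported alongside it.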

Deep learning tool and prediction of small pancreatic tumours

A total of 546 patients with a mean age of 65 years (46% female) who had pancreatic cancer with a mean tumour size of 2.9 cm and 733 control patients were used in the training, validation and test set.

In a nationwide test set that included 669 cancer patients and 804 controls, the deep learning-based tool distinguished malignant from control CT samples with a sensitivity of 89.7% (95% CI 87.1 – 91.9), a specificity of 92.8% (95% CI 90.8 – 94.5) and an accuracy of 91.4%.

When comparing the tool with radiologists, the corresponding sensitivities for the local test set (109 cancer and 147 control patients) were 90.2% and 96.1% for the tool and radiologists, respectively, and this difference was not significant (p = 0.11).

The tool had a sensitivity of 87.5% (95% CI 67.6 – 97.3) for a malignancy which was smaller than 2 cm in the local test set although this was slightly lower (74.7%) in the nationwide test set.

The authors concluded that their tool may be of value as a supplement for radiologists to enhance the detection of pancreatic cancer, although further work was needed to examine the generalisability of the findings to other populations.

Citation
Chen PT et al. Pancreatic Cancer Detection on CT Scans with Deep Learning: A Nationwide Population-based Study. Radiology 2022.

Fracture detection rates comparable between AI and clinicians

6th April 2022

According to a meta-analysis, the fracture detection performance of artificial intelligence systems and clinicians is broadly equivalent.

The fracture detection rates are comparable for artificial intelligence (AI) and clinicians according to the findings of a meta-analysis by researchers from the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, Oxford, UK.

Fractures represent a common reason for admission to hospital around the world, although research suggests that, fortunately, fracture rates have stabilised. For example, one 2019 UK-based study observed that the risk of admission for a fracture between 2004 and 2014 was 47.8 per 10,000 population and that the rate of fracture admission remained stable.

However, fractures are not always detected on first presentation, as witnessed by a two-year study in which 1% of all visits resulted in an error in fracture diagnosis and 3.1% of all fractures were not diagnosed at the initial visit.

One solution to improve the diagnostic accuracy of fracture detection is the use of artificial intelligence systems and, in particular, machine learning, which enables algorithms to learn from data. Related to machine learning is deep learning, a more sophisticated approach that uses complex, multi-layered ‘deep neural networks’.

Deep learning systems hold great potential for the detection of fractures and in a 2020 review, the authors concluded that deep learning was reliable in fracture diagnosis and had a high diagnostic accuracy.

For the present meta-analysis, the Oxford team further assessed and compared the diagnostic performance of AI and clinicians on both radiographs and computed tomography (CT) images in fracture detection.

The team searched for studies that developed and/or validated a deep learning algorithm for fracture detection and assessed AI versus clinician performance during both internal and external validation. The team analysed receiver operating characteristic curves to determine both sensitivity and specificity.

Fracture detection rates of AI and clinicians

A total of 42 studies with a median of 1169 participants were included, 37 of which involved fractures detected on radiographs and five on CT. Of these, 16 studies compared the performance of AI against expert clinicians, seven compared AI to both experts and non-experts, and one compared AI to non-experts only.

When evaluating AI and clinician performance in studies of internal validation, the pooled sensitivity was 92% (95% CI 88 – 94%) for AI and 91% (95% CI 85 – 95%) for clinicians. The pooled specificity values were also broadly similar, at 91% for AI and 92% for clinicians.

For studies looking at external validation, the pooled sensitivity for AI was 91% (95% CI 84 – 95%) and 94% (95% CI 90 – 96%) for clinicians on matched sets. The specificity was slightly lower for AI compared to clinicians (91% vs 94%).
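Pooled estimates like these are typically obtained by combining per-study proportions on the logit scale, weighted by inverse variance. The sketch below shows a simplified fixed-effect version of that idea (formal diagnostic accuracy meta-analyses generally use more elaborate bivariate random-effects models); the per-study counts are invented:

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect pooling of per-study proportions (e.g. sensitivities)
    on the logit scale, weighted by inverse variance. A simplification of
    the models used in diagnostic test accuracy meta-analyses."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = e / n
        logits.append(math.log(p / (1 - p)))
        weights.append(n * p * (1 - p))  # inverse of var(logit(p)) = 1/(n*p*(1-p))
    pooled_logit = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))

# Invented true-positive counts and fracture totals from three hypothetical studies
tp = [180, 95, 460]
n = [200, 100, 500]
print(f"pooled sensitivity = {pooled_proportion(tp, n):.1%}")
```

Pooling on the logit scale keeps the combined estimate within 0–100% and gives larger, more informative studies proportionally more weight.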

The authors concluded that AI and clinicians had comparable reported diagnostic performance in fracture detection and suggested that AI technology has promise as a diagnostic adjunct in future clinical practice.

Citation
Kuo RYL et al. Artificial Intelligence in Fracture Detection: A Systematic Review and Meta-Analysis. Radiology 2022.

Convolutional neural network diagnosis of ICH equivalent to radiologists

13th December 2021

Convolutional neural network performance appears to be comparable to that of radiologists for the diagnosis of intracranial haemorrhage (ICH)

The performance of convolutional neural networks (CNNs) in diagnosing patients with an intracranial haemorrhage (ICH) appears to be comparable to that of radiologists. This was the conclusion of a study by a team from the Faculty of Health and Medical Sciences, Copenhagen University, Denmark.

An ICH is usually caused by rupture of small penetrating arteries secondary to hypertensive changes or other vascular abnormalities and overall accounts for 10 – 20% of all strokes. However, this proportion varies across the world so that in Asian countries, an ICH is responsible for between 18 and 24% of strokes but only 8 – 15% in Westernised countries. An acute presentation of ICH can be difficult to distinguish from ischaemic stroke and non-contrast computerised tomography (CT) is the most rapid and readily available tool for the diagnosis of ICH.

As in many areas of medicine, artificial intelligence systems are increasingly being used, and one such system is the convolutional neural network (CNN), a deep learning algorithm that is able to take an input image, assign importance to various aspects or objects within the image and differentiate one from the other. In fact, a 2019 systematic review of deep learning systems concluded the ‘diagnostic performance of deep learning models to be equivalent to that of health-care professionals’. Nevertheless, the authors added the caveat that ‘few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample.’

In the present study, the Danish team undertook a systematic review and meta-analysis to appraise the evidence for CNNs in the per-patient diagnosis of ICH. They performed a literature review, and studies deemed suitable for inclusion were those in which patients had undergone non-contrast computed tomography of the cerebrum for the detection of an ICH, radiologists or a clinical report were used as the reference standard, and a CNN algorithm was deployed for the detection of ICH. For the purposes of their analysis, the minimum acceptable reference standard was defined as manual, semi-automated or automated image labelling taken from radiology reports or electronic health records. The researchers calculated the pooled sensitivity, specificity and summary receiver operating characteristic (SROC) curves.

Findings

A total of six studies with 380,382 scans were included in the final analysis. When comparing the CNN performance to the reference standard, the pooled sensitivity was 96% (95% CI 93 – 97%), the pooled specificity 97% (95% CI 90 – 99%) and the area under the SROC curve 98% (95% CI 97 – 99%). When combining both retrospective and external validation studies, CNN performance was slightly worse, with a pooled sensitivity of 95%, specificity of 96% and pooled SROC of 98%.

They concluded that CNN algorithms accurately detect ICH based on an analysis of both retrospective and external validation studies and that this approach seems promising, but they highlighted the need for more studies using external validation test sets with uniform methods to define a more robust reference standard.

Citation

Jorgensen MD et al. Convolutional neural network performance compared to radiologists in detecting intracranial hemorrhage from brain computed tomography: A systematic review and meta-analysis. Eur J Radiol 2021

Deep learning breast MRI distinguishes between benign and malignant cases in women with dense breasts

15th October 2021

A deep learning system for breast MRI distinguished normal and benign cases in women with dense breasts and might be a useful future triage tool.

The risk of breast cancer is increased among women with dense breasts, and mammography can often miss cases in these women. A 2019 trial found that the use of supplementary breast MRI screening in women with dense breasts led to the diagnosis of significantly fewer interval cancers than mammography alone. However, screening programmes involve a huge number of women, and many breast MRI scans of women with dense breasts show normal anatomical and physiological variation and therefore may not require radiological review.

A team from the Department of Radiology, University of Utrecht, therefore wondered if it was feasible to use an automated deep learning (DL) system for breast MRI screening to triage out normal scans without cancer and so reduce the workload of radiologists. The team undertook a secondary analysis of data obtained from the prospective Dense Tissue and Early Breast Neoplasm Screening (DENSE) trial. The DL system was trained on left and right breasts separately and the results combined, so that it was able to differentiate between breasts with and without lesions. Its performance was assessed using receiver operating characteristic (ROC) curves.

Findings

A total of 4581 breast MRI examinations of extremely dense breasts from 4581 women with a mean age of 54.3 years were included in the analysis. Of these 9162 breasts, 838 had at least one lesion, of which 77 were malignant. The area under the ROC curve in differentiating between a normal breast MRI and an examination with lesions was 0.83 (95% CI 0.80 – 0.85). The DL system classified 90.7% (95% CI 86.7 – 94.7) of the MRI examinations with lesions as non-normal, meaning they would be triaged to radiologist review. In addition, the DL system dismissed 39.7% of the MRI examinations without lesions but did not miss any cases of malignant disease.
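The triage logic described here amounts to choosing a score threshold below which an examination is dismissed without radiologist review, set conservatively so that no malignant case falls below it. A minimal illustrative sketch, using invented model scores rather than data from the DENSE trial:

```python
def triage_threshold(malignant_scores, lesion_free_scores):
    """Pick the highest threshold that still flags every malignant case
    for review, then report what fraction of lesion-free examinations
    could be dismissed at that threshold."""
    threshold = min(malignant_scores)  # dismiss only exams scoring below every malignancy
    dismissed = sum(s < threshold for s in lesion_free_scores)
    return threshold, dismissed / len(lesion_free_scores)

# Invented suspicion scores from a hypothetical DL model
malignant = [0.95, 0.80, 0.60]
lesion_free = [0.70, 0.55, 0.40, 0.30, 0.20]
t, rate = triage_threshold(malignant, lesion_free)
print(f"threshold={t}, dismissal rate={rate:.0%}")
```

In practice the threshold would be fixed on a training set and then validated on independent data, since the lowest-scoring malignancy in a new cohort may fall below any threshold chosen retrospectively.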

Commenting on their findings, the authors recognised a limitation in that their results were from the first round of the DENSE trial and that the number of lesions detected in subsequent screening rounds was smaller. Thus, they planned to further validate the performance of the model on data from subsequent rounds. The authors also suggested that future trials need to focus on demonstrating that the DL system is at least as effective as an expert radiologist at dismissing normal MRI examinations.

Citation

Verburg E et al. Deep Learning for Automated Triaging of 4581 Breast MRI Examinations from the DENSE Trial. Radiology 2021
