
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

Radiographer diagnostic performance equal to radiologists for screening mammograms

23rd September 2022

Radiographer diagnostic performance for screening mammograms was similar to that of radiologists, offering a potential solution to the shortage of radiologists

Radiographer diagnostic performance for screening mammography is no different from that of radiologists for double reading of digital mammograms, and therefore offers a potential solution to the shortage of radiologists, according to the results of a retrospective study by UK researchers.

Screening mammography is widely used in the detection of breast cancer and has been proven to decrease mortality. Moreover, the rate of cancer detection can be further increased by double reading of scans. For example, one study found that the addition of a second reviewer produced a relative increase in cancer detection of 6.3%.

A 2016 survey found that UK radiographers are already involved in interpreting and reporting images across the full spectrum of clinical indications for mammography, including low-risk population screening, symptomatic, annual surveillance, family history and biopsy/surgical cases. However, despite this change in role, there is limited evidence on real-life radiographer diagnostic performance in double reading mammograms.

For the present study, the UK team examined the performance of radiographers and radiologists for all screening mammograms in England between 2015 and 2016. The researchers used three key metrics for comparison between radiographers and radiologists: the cancer detection rate (CDR), the recall rate (RR) and the positive predictive value (PPV) of recall, based on biopsy-proven pathological findings for the first readers. Each of the breast scans was analysed based on the reader's profession (i.e., radiologist or radiographer) and years of experience.
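As an illustration of how these three metrics relate to the raw counts, the sketch below computes them from a reader's totals. The figures used are made up for demonstration and are not the study's data:

```python
def screening_metrics(n_reads, n_recalls, n_cancers_detected):
    """Return (CDR per 1000 examinations, recall rate %, PPV of recall %).

    CDR = cancers detected per 1000 reads
    RR  = proportion of reads recalled for further assessment
    PPV = proportion of recalls that prove to be cancer on biopsy
    """
    cdr = 1000 * n_cancers_detected / n_reads
    rr = 100 * n_recalls / n_reads
    ppv = 100 * n_cancers_detected / n_recalls
    return cdr, rr, ppv

# Hypothetical reader: 100,000 reads, 5,000 recalls, 770 cancers detected
cdr, rr, ppv = screening_metrics(100_000, 5_000, 770)
# cdr = 7.7 per 1000, rr = 5.0%, ppv = 15.4%
```

Note that PPV is conditioned on recall, which is why a reader can improve their PPV simply by recalling fewer borderline cases, as the experience-related trends in the study suggest.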

Radiographer diagnostic performance on screening mammography

A total of 401 readers were included and double read the mammograms of 1,404,395 women. There were 224 radiologists who first read 763,958 mammograms and 177 radiographers who first read 640,437 mammograms.

The overall mean CDR was 7.7 per 1000 examinations: 7.53 per 1000 examinations for radiographers and 7.84 per 1000 for radiologists (p = 0.08). When the researchers analysed CDRs by years of experience, there was no variation for either profession (p = 0.87).

The overall recall rate was 5% and again there was no significant difference between radiographers and radiologists (5.2% vs 5.0%, p = 0.63), though the RR was lower for readers with more years of experience.

Finally, the overall PPV was 16.7%, and differences between radiographers and radiologists were again not significant (16.1% vs 17.1%, p = 0.42). As with the RR, PPV improved with more years of experience.

The authors concluded that there were no clear differences between radiographers and radiologists as readers of screening digital mammograms. They speculated that the use of trained radiographers in the double-reading workflow may offer a potential solution to the shortage of radiologists, but suggested that more studies were needed to determine whether such physician extender roles can, and should, be used to read screening mammograms independently of the radiologist.

Chen Y et al. Performance of Radiologists and Radiographers in Double Reading Mammograms: The UK National Health Service Breast Screening Program. Radiology 2022.

Convolutional neural network diagnosis of ICH equivalent to radiologists

13th December 2021

Convolutional neural network performance appears to be comparable to that of radiologists for the diagnosis of intracranial haemorrhage (ICH)

The performance of convolutional neural networks (CNNs) for diagnosing patients with an intracranial haemorrhage (ICH) appears to be comparable to that of radiologists. This was the conclusion of a study by a team from the Faculty of Health and Medical Sciences, Copenhagen University, Denmark.

An ICH is usually caused by rupture of small penetrating arteries secondary to hypertensive changes or other vascular abnormalities and overall accounts for 10 – 20% of all strokes. However, this proportion varies across the world: in Asian countries an ICH is responsible for between 18 and 24% of strokes, but only 8 – 15% in Western countries. An acute presentation of ICH can be difficult to distinguish from ischaemic stroke, and non-contrast computerised tomography (CT) is the most rapid and readily available tool for the diagnosis of ICH.

As in many areas of medicine, artificial intelligence systems are becoming increasingly used, and one such system is the convolutional neural network (CNN), a deep learning algorithm that takes an input image, assigns importance to various aspects or objects within the image, and differentiates one from another. In fact, a 2019 systematic review of deep learning systems concluded that the ‘diagnostic performance of deep learning models to be equivalent to that of health-care professionals’. Nevertheless, the authors added the caveat that ‘few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample’.
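The core operation a CNN applies to an image can be sketched in a few lines: a small kernel of learned weights slides across the image, and the sum of element-wise products at each position forms a feature map. The pure-Python example below (with a hand-picked, not learned, kernel) shows how a vertical-edge detector responds where intensity changes:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image
    and sum the element-wise products at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds where intensity changes left-to-right
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
feature_map = conv2d(image, kernel)
# The edge between the dark and bright halves produces the strongest response
```

In a real CNN the kernel weights are learned from labelled data, many kernels are stacked into layers, and non-linearities and pooling are interleaved; this sketch only illustrates the convolution step itself.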

In the present study, the Danish team undertook a systematic review and meta-analysis to appraise the evidence for CNNs in the per-patient diagnosis of ICH. They performed a literature review, and studies deemed suitable for inclusion were those where: patients had undergone non-contrast computed tomography of the cerebrum for the detection of an ICH; radiologists or a clinical report was used as the reference standard; and a CNN algorithm was deployed for the detection of ICH. For the purposes of their analysis, the minimum acceptable reference standard was defined as either manual, semi-automated or automated image labelling taken from radiology reports or electronic health records. For their analysis, the researchers calculated the pooled sensitivity, pooled specificity and the summary receiver operating characteristic (SROC) curve.


A total of six studies with 380,382 scans were included in the final analysis. When comparing CNN performance to the reference standard, the pooled sensitivity was 96% (95% CI 93 – 97%), the pooled specificity 97% (95% CI 90 – 99%) and the area under the SROC curve 98% (95% CI 97 – 99%). When combining both retrospective and external validation studies, CNN performance was slightly lower, with a pooled sensitivity of 95%, specificity of 96% and pooled SROC of 98%.
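Sensitivity and specificity each come from a study's 2×2 table of CNN calls against the reference standard. As a simplified illustration (published meta-analyses such as this one fit bivariate random-effects models rather than simply summing counts), the sketch below shows the per-study calculation and a crude pooled estimate; the counts are made up for demonstration:

```python
def sens_spec(tp, fp, fn, tn):
    """Per-study sensitivity and specificity from a 2x2 table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def naive_pooled(studies):
    """Crude pooled estimate obtained by summing 2x2 counts across studies.
    Real meta-analyses use bivariate random-effects models instead, which
    account for between-study variation and the sensitivity/specificity
    trade-off."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    return sens_spec(tp, fp, fn, tn)

# Illustrative (made-up) counts for two studies: (TP, FP, FN, TN)
studies = [(96, 3, 4, 97), (190, 8, 10, 192)]
sens, spec = naive_pooled(studies)
# sens ≈ 0.953, spec ≈ 0.963
```

The SROC curve then summarises how these per-study sensitivity/specificity pairs trade off across different decision thresholds, with the area under it serving as an overall accuracy measure.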

They concluded that CNN algorithms accurately detect ICHs, based on an analysis of both retrospective and external validation studies, and that this approach seemed promising, but they highlighted the need for more studies using external validation test sets with uniform methods to define a more robust reference standard.


Jorgensen MD et al. Convolutional neural network performance compared to radiologists in detecting intracranial hemorrhage from brain computed tomography: A systematic review and meta-analysis. Eur J Radiol 2021.