Take a look at a selection of our recent media coverage:
23rd September 2022
Radiographer diagnostic performance in screening mammography is no different from that of radiologists when double reading digital mammograms, and trained radiographers therefore offer a potential solution to the shortage of radiologists, according to the results of a retrospective study by UK researchers.
Screening mammography is widely used in the detection of breast cancer and has been proven to decrease mortality. Moreover, the rate of cancer detection can be further increased by double reading of scans: one study, for example, found a 6.3% relative increase in cancer detection when a second reviewer was used. A 2016 survey found that UK radiographers are already involved in interpreting and reporting images across the full spectrum of clinical indications for mammography, including low-risk population screening, symptomatic, annual surveillance, family history and biopsy/surgical cases. However, despite this change in role, there is limited evidence on real-life radiographer diagnostic performance in double reading mammograms.

For the present study, the UK team examined the performance of radiographers and radiologists for all screening mammograms in England between 2015 and 2016. The researchers compared radiographers and radiologists on three key metrics for the first readers: the cancer detection rate (CDR), the recall rate (RR) and the positive predictive value (PPV) of recall, based on biopsy-proven pathological findings. Each breast scan was analysed according to the reader's profession (i.e., radiologist or radiographer) and years of experience.
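The relationship between the three metrics can be illustrated with a short calculation. The function and counts below are hypothetical, for illustration only, and are not drawn from the study's data:

```python
def screening_metrics(n_reads, n_recalled, n_cancers_detected):
    """Return (CDR per 1000 examinations, recall rate %, PPV % of recall)."""
    cdr = 1000 * n_cancers_detected / n_reads      # cancers detected per 1000 reads
    recall_rate = 100 * n_recalled / n_reads       # % of examinations recalled
    ppv = 100 * n_cancers_detected / n_recalled    # % of recalls confirmed as cancer
    return cdr, recall_rate, ppv

# Illustrative figures only: 100,000 reads, 5,000 recalls, 800 biopsy-proven cancers
cdr, rr, ppv = screening_metrics(100_000, 5_000, 800)
print(f"CDR {cdr:.2f}/1000, RR {rr:.1f}%, PPV {ppv:.1f}%")
# CDR 8.00/1000, RR 5.0%, PPV 16.0%
```

Note that PPV ties the other two together: with reads fixed, a higher recall rate without a matching rise in detected cancers drives the PPV down, which is why the study reports all three rather than CDR alone.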
Radiographer diagnostic performance on screening mammography
A total of 401 readers were included, who double read the mammograms of 1,404,395 women: 224 radiologists first read 763,958 mammograms and 177 radiographers first read 640,437.
The overall mean CDR was 7.7 per 1000 examinations, with no significant difference between radiographers (7.53 per 1000) and radiologists (7.84 per 1000; p = 0.08). When the researchers analysed CDRs by years of experience, there was no variation for either profession (p = 0.87).
The overall recall rate was 5% and, again, there was no significant difference between radiographers and radiologists (5.2% vs 5%; p = 0.63), though the RR was lower among readers with more years of experience.
Finally, the overall PPV was 16.7%, and differences between radiographers and radiologists were again non-significant (16.1% vs 17.1%; p = 0.42). As with the RR, PPV improved with more years of experience.
The authors concluded that there were no clear differences in diagnostic performance between radiographers and radiologists as readers of screening digital mammograms. They speculated that the use of trained radiographers in the double-reading workflow may offer a potential solution to the shortage of radiologists, but suggested that more studies are needed to determine whether such physician-extender roles can, and should, be used to read screening mammograms independently of a radiologist.
Chen Y et al. Performance of radiologists and radiographers in double reading mammograms: the UK National Health Service Breast Screening Program. Radiology 2022.
1st August 2022
In a survey of radiographers, nearly a third stated that they did not know how an artificial intelligence (AI) system made its decisions, according to the results of a study by Irish and UK researchers.
A 2020 radiology workforce report found that, across the UK, 1 in 10 radiologist positions was unfilled. Reporting of radiographic images by radiographers (rather than radiologists) is nevertheless an established practice in the United Kingdom (UK), and immediate reporting of emergency department radiographs by a radiographer has been deemed a cost-effective service development. However, as with radiologists, there is a national shortage of radiographers in the UK, with a 2021 report indicating an average vacancy rate of 10.5% as of November 2020.

In recent years, artificial intelligence (AI) technologies have been increasingly used in radiology to aid both radiologists and radiographers. AI interpretation of imaging can be impressive: one international study found that an AI system's ability to identify breast cancer remained non-inferior while reducing the workload of the second reader by 88%. The introduction of AI systems therefore has the potential to help reduce backlogs of unreported images. But the extent to which radiographers understand and interact with AI technology was the subject of the present survey by the Irish and UK team.

The researchers developed a questionnaire on the use of AI in radiographer reporting, which was initially piloted with a group of 12 radiographers from a range of professional backgrounds. Eight questions focused specifically on AI and its use in radiographer reporting, with respondents answering on a 7-point Likert scale (ranging from strongly agree to strongly disagree). The survey was disseminated via a link posted on professional social media platforms (e.g. LinkedIn and Twitter).
A total of 411 completed survey responses were received from radiographers working in diagnostic and therapeutic radiography, although the results of the present study were limited to diagnostic radiographers (86). Perhaps the first and most illuminating finding was that 89.5% of respondents indicated that they were not currently using AI as part of their reporting role.
In response to the statement “I understand how an AI system reaches its decisions”, 28.8% (aggregate value) reported that they ‘somewhat disagreed’, ‘disagreed’ or ‘strongly disagreed’, whereas 61.6% (aggregate value) reported that they ‘agreed’ or ‘somewhat agreed’. In addition, the majority of respondents (59.3%) disagreed that they would be confident explaining an AI system’s decision to other healthcare professionals and, similarly, only 29.1% agreed that they would be confident explaining the decision to patients or their carers.
While 57% reported that they would feel more certain of their diagnosis if an AI system agreed with their interpretation, the majority (69.8%) stated that they would seek a second opinion if the AI system disagreed with them.
Finally, when asked to rate their trust in an AI system for diagnostic image interpretation on a scale of 1 to 10, the median score was 5. When asked to choose from a list of possible factors that would increase their level of trust in an AI system, the most common responses were ‘overall performance and accuracy of the system’ (76%), ‘a visual explanation, such as a heat map’ (67%) and ‘an indication of the confidence of the AI system in its diagnosis’ (62%).
The authors concluded that while the majority of respondents were not currently routinely using an AI system as part of their reporting, awareness of how clinicians interact with AI systems was needed as this might promote responsible use of such systems in the future.
Rainey C et al. UK reporting radiographers’ perceptions of AI in radiographic image interpretation – current perspectives and future developments. Radiography 2022.