
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

EADV: Improvement in AI skin cancer detection as software identifies all melanoma cases

13th October 2023

The use of artificial intelligence (AI) software has shown a 100% detection rate for melanoma and saved over 1,000 face-to-face secondary care consultations during a 10-month period, according to a study presented at the recent European Academy of Dermatology and Venereology (EADV) Congress 2023.

The software correctly detected all 59 cases of suspected melanoma between April 2022 and January 2023, as well as 99.5% of all skin cancers (189/190 cases) and 92.5% of pre-cancerous lesions (541/585).

Three versions of the AI software were used to assess 22,356 patients with suspected skin cancers. The first version, tested in 2020-21, had an 85.9% detection rate for melanoma (195/227), 83.8% for all skin cancers (903/1,078) and 54.1% for pre-cancerous lesions (496/917).
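As a quick sanity check, the percentages quoted for both software versions follow directly from the reported case counts:

```python
def detection_rate(detected: int, total: int) -> float:
    """Detection rate as a percentage, rounded to one decimal place."""
    return round(100 * detected / total, 1)

# Latest version (April 2022 - January 2023)
assert detection_rate(59, 59) == 100.0    # melanoma
assert detection_rate(189, 190) == 99.5   # all skin cancers
assert detection_rate(541, 585) == 92.5   # pre-cancerous lesions

# First version (2020-21)
assert detection_rate(195, 227) == 85.9   # melanoma
assert detection_rate(903, 1078) == 83.8  # all skin cancers
assert detection_rate(496, 917) == 54.1   # pre-cancerous lesions
```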

Lead author Dr Kashini Andrew, a specialist registrar at University Hospitals Birmingham NHS Foundation Trust, commented: ‘The latest version of the software has saved over 1,000 face-to-face consultations in the secondary care setting between April 2022 and January 2023, freeing up more time for patients that need urgent attention.’

The research team noted that the data is ‘incredibly encouraging’. However, co-author and colleague Dr Irshad Zaki, consultant dermatologist, said: ‘We would like to stress that AI should not be used as a standalone tool in skin cancer detection and that AI is not a substitute for consultant dermatologists.’

Evidence of the need for appropriate clinical oversight was shown among the basal cell carcinoma cases, as a single case was missed by the AI tool and later identified at a second read by what the researchers termed ‘a dermatologist “safety net”’.

Dr Kashini Andrew added: ‘This study has demonstrated how AI is rapidly improving and learning, with the high accuracy directly attributable to improvements in AI training techniques and the quality of data used to train the AI.

‘The role of AI in dermatology and the most appropriate pathway are debated. Further research with appropriate clinical oversight may allow the deployment of AI as a triage tool. However, any pathway must demonstrate cost-effectiveness, and AI is currently not a standalone tool in dermatology. Our data shows the great promise of AI in future provision of healthcare.’

This supports the findings of previous studies, including a 2022 systematic review in which the researchers concluded that ‘the performance of artificial intelligence in melanoma is satisfactory and the future for potential applications is enormous’.

AI-supported mammography found to reduce radiologist workload in randomised trial

9th August 2023

Using an AI-supported mammography screening tool results in a similar breast cancer detection rate compared with standard double reading but with a substantially lower screen-reading workload, according to the interim safety findings of a new randomised controlled trial.

Using this AI-supported software, researchers from Lund University in Sweden have shown that screening mammography can avoid the need for double reading of all mammograms without increasing false positives, almost halving radiologists’ screen-reading workload.

Although previous retrospective analyses have indicated that combining AI with a radiologist improves the accuracy of breast cancer detection and reduces radiologist workload, there have been no randomised trials evaluating this approach until now.

Commenting on the findings, lead author Dr Kristina Lång said: ‘These promising interim safety results should be used to inform new trials and programme-based evaluations to address the pronounced radiologist shortage in many countries. But they are not enough on their own to confirm that AI is ready to be implemented in mammography screening.

‘We still need to understand the implications on patients’ outcomes, especially whether combining radiologists’ expertise with AI can help detect interval cancers that are often missed by traditional screening, as well as the cost-effectiveness of the technology’.

AI vs standard double reading

Published in The Lancet Oncology, the Mammography Screening with Artificial Intelligence (MASAI) trial enrolled 80,033 Swedish women aged 40-80 years who were eligible for mammography screening. Participants were randomly allocated 1:1 to either AI-supported screening (the intervention group, n = 40,003) or standard double reading without AI (the control group, n = 40,030).

The primary outcome measure of the MASAI trial was the interval cancer rate. Secondary outcomes examined included early screening performance (cancer detection rate, recall rate, false positive rate) and screen-reading workload (number of screen readings and consensus meetings).

The AI-supported system provided an examination-based malignancy risk score on a continuous scale ranging from 1 to 10. These examinations were then categorised as low risk (risk scores 1 to 7), intermediate risk (risk scores 8 and 9) or high risk (risk score 10). In the intervention group, examinations with risk scores of 1 to 9 underwent single reading, whereas examinations with a risk score of 10 underwent double reading.
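That triage rule can be sketched in a few lines of Python. This is an illustrative sketch of the pathway described above, not the trial’s actual software, and the function name is ours:

```python
def reading_pathway(risk_score: float) -> str:
    """Map the AI's 1-10 malignancy risk score to a MASAI-style reading
    pathway: only the highest-risk examinations are double read."""
    if not 1 <= risk_score <= 10:
        raise ValueError("risk score must lie between 1 and 10")
    if risk_score <= 7:
        category = "low risk"
    elif risk_score < 10:
        category = "intermediate risk"
    else:
        category = "high risk"
    # Scores 1-9 get a single reading; only score 10 is double read.
    reading = "double" if category == "high risk" else "single"
    return f"{category}: {reading} reading"
```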

The cancer detection rate was broadly similar, at 6.1 per 1,000 screened women in the AI group and 5.1 per 1,000 in the control group. Similarly, recall rates were not significantly different (2.2% vs 2.0%) and neither were the false positive rates (1.5% in both arms).

The number of screen readings was considerably lower for the AI-supported group (46,345 vs 83,231), representing a 44.3% workload decrease for reading screening mammograms.
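The quoted workload reduction follows directly from the two reading totals:

```python
# Screen readings reported in the MASAI interim analysis
ai_readings, control_readings = 46_345, 83_231

# Relative decrease in screen-reading workload, as a percentage
reduction_pct = round(100 * (control_readings - ai_readings) / control_readings, 1)
assert reduction_pct == 44.3  # the 44.3% decrease reported above
```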

Tailoring diagnosis and treatment pathways for complex lung disease

11th May 2023

Dr Anjali Crawshaw shares insights into her work on idiopathic pulmonary fibrosis and how AI may be able to extend the survival rates of people living with complex lung disease.

University Hospitals Birmingham NHS Foundation Trust has recently launched what it understands to be a world-first project aiming to improve the survival rates of people living with fibrotic lung disease, including idiopathic pulmonary fibrosis (IPF).

Lung disease clinicians and researchers will use sophisticated algorithms developed by the Cambridge medical data company Qureight to read patient lung scans. The goal is to help improve understanding of fibrotic lung diseases and make more accurate and earlier diagnoses, facilitating earlier treatment.

In addition, the project will analyse significant volumes of data from ethnic minority groups to address health inequalities in the system and allow for a tailored approach to treatment for these individuals.

Dr Anjali Crawshaw is the consultant respiratory physician lead at the Birmingham Interstitial Lung Disease Unit, University Hospitals Birmingham NHS Foundation Trust. Here, she explains why complex inflammatory and fibrotic lung diseases – her area of specialism – can be challenging to manage in clinic and how the research will help unlock valuable insights from existing patient data.

What is idiopathic pulmonary fibrosis?

Idiopathic pulmonary fibrosis (IPF) is the most common type of fibrotic lung disease, affecting roughly 50 in every 100,000 people. It causes the lungs to become scarred, leading to cough, severe breathlessness and progressive respiratory failure. Survival is currently worse than for most cancers.

Why is inflammatory fibrotic lung disease so difficult to diagnose?

It can be difficult to classify the disease due to the complex and varied patterns seen. In addition, deciding whether the disease is responding to treatment, is stable or is getting worse can be challenging. Specialist radiology doctors currently have to analyse CT scan images of the lungs as part of the diagnosis and monitoring process, but this can be open to interpretation bias. One widely accepted and published difficulty in this field is that if you have multiple doctors looking at the same scan, you won’t always get the same answer. One of the advantages of having good-quality, standardised computer algorithms is that you will.

In addition to a lung doctor specialising in such lung conditions, our multidisciplinary teams involve radiologists, pathologists, specialist nurses and pharmacists who currently make a diagnosis based on the patient history, blood tests and CT imaging. In more complex cases, invasive investigations such as a telescope test into the lungs may be required, which is not without risk. This allows a biopsy to be taken, although sometimes a more invasive biopsy is still required to make a clear diagnosis. Improved imaging techniques have reduced the number of biopsies required.

There’s a shortage of specialists, which can make this process slow and difficult.

What are the limitations with the current healthcare dataset?

One of the problems in healthcare in general is that a lot of our data comes from white people of European descent. There’s partly an assumption that this is the data set we’ve got, and everybody’s healthcare can be extrapolated from this.

That’s not quite right, but we don’t know how it’s not quite right. For example, the lung function of a person of Indian origin born in the US may be better than that of a relative of the same age and build born in India. We don’t really know why. There are lots of sociological and environmental factors at play here, and we don’t understand what those are.

Idiopathic pulmonary fibrosis – just one of a huge number of fibrotic lung diseases – is another example where unconscious bias may come into play. The ‘typical’ IPF patient is a 70-year-old white man, so a patient from an ethnic minority background presenting with the same symptoms may be at risk of delayed diagnosis.

I look after a lot of people with sarcoidosis who can also develop fibrotic lung disease. They are often much younger and of working age. There’s a greater prevalence in people who are black, and their disease is often more severe, but we just don’t fully understand why that is – the data is not there.

How will the AI tool work for diagnosing lung disease?

All the patients who come through our service get CT scans as part of their diagnostic process. The study algorithm will combine the data from patient scans – for example, their lung and airway volume – with lung function data from tests, blood results and demographic records.

This information will be securely and anonymously processed to deliver insights into the presentation, development and progression of IPF. We will look specifically at the similarities and differences for ethnic minority patients.

Why is Birmingham so uniquely placed to collect this patient data?

We’re a young, super-diverse city. We’re home to people from 187 different nationalities, and more than half the population is from an ethnic minority, so we are perfectly placed to be leading on this work.

Part of the reason we’re missing this data is that you need a certain amount of money and funding to conduct studies. If research happens in rich countries with good access to CT imaging, that will inevitably skew the population of patients in the database, as you’re using data from the patients in front of you. Places in other parts of the world have the expertise and drive to do the research, but they don’t have the funding or access to good CT imaging, so it doesn’t get done.

This partnership with Qureight marks a very significant moment for our team. Patient data that truly reflects the unique diversity of Birmingham’s population will be invaluable to the planning and delivery of more equitable patient care – not just in Birmingham and the UK but internationally.

The role of AI in transforming lung cancer care 

Dr Sumeet Hindocha has a passion for artificial intelligence, with his work focusing on radiomics and deep learning in lung cancer. He speaks to Hospital Healthcare Europe about his latest research and the uses and considerations of AI-enabled diagnostics in medicine.

Dr Sumeet Hindocha is a clinical oncology specialist registrar at The Royal Marsden NHS Foundation Trust and a researcher in artificial intelligence (AI). He is currently leading the trust’s OCTAPUS-AI study to investigate how this technology can help identify which patients with non-small cell lung cancer are at higher risk of recurrence.

Why are you interested in lung cancer and AI?

Lung cancer is the leading cause of cancer deaths worldwide. Non-small cell lung cancer (NSCLC) accounts for almost 85% of cases and is often curable when detected early enough. Radiotherapy is a key treatment modality, but, unfortunately, recurrence occurs in over a third (36%) of patients treated this way.

We know that the earlier we detect recurrence the better the outcomes generally are for patients. It means we can get them on to the next line of treatment or offer the best support as soon as possible. This could reduce the impact the disease has on their lives and help patients live longer.

The aim of our study is to see whether AI could help identify the risk of cancer returning in these patients using CT scans. The study addresses the National Institute for Health and Care Excellence’s call for further research into using prognostic factors to develop risk-stratification models to inform optimal surveillance strategies after treatment for lung cancer.

Where does your enthusiasm for AI stem from?

Artificial intelligence has had a big impact in improving various aspects of our lives and work, from automating routine tasks to even things like the programmes recommended to us on Netflix or smart home devices like Siri or Alexa. What’s really exciting about its application in healthcare is its significant potential to improve patient outcomes and experience. We have a huge amount of data from imaging and electronic patient records that can be readily applied to AI. It gives us the ability to detect patterns of disease that would otherwise be difficult to uncover, to develop new drugs and even streamline how we deliver healthcare.

Who are you working with on the OCTAPUS-AI study?

Researchers from the Institute of Cancer Research, Imperial College London and the Early Diagnosis and Detection Centre, which aims to accelerate early diagnosis of cancer and is supported by funding from the Royal Marsden Cancer Charity and the National Institute for Health and Care Research. 

What did the first phase of the study involve?

We compared different models of machine learning (ML) – a type of AI that enables computer software to learn complex data patterns and automatically predict outcomes – to determine which could most accurately identify NSCLC patients at risk of recurrence following curative radiotherapy.

Anonymised, routinely available clinical data from 657 NSCLC patients treated at five UK hospitals was used to compare different ML algorithms, drawing on prognostic factors such as age, gender and the tumour’s characteristics on scans to predict recurrence and survival at two years from treatment. We then developed and tested models to categorise patients into low and high risk of recurrence, recurrence-free survival and overall survival.

A patient’s tumour size and stage, the type and intensity of radiotherapy, and their smoking status, BMI and age were the most important clinical factors in the final AI model’s algorithm for predicting patient outcomes.
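As a purely illustrative sketch of this kind of risk stratification: the weights, threshold and function names below are invented for illustration and bear no relation to the trial’s trained models; they simply show how a handful of routine clinical factors can be combined into a dichotomised risk category.

```python
import math

def recurrence_risk(stage: int, tumour_size_cm: float, smoker: bool,
                    bmi: float, age: int, dose_gy: float) -> float:
    """Toy logistic risk score in [0, 1] built from the clinical factors
    named above. All weights are invented for illustration only."""
    z = (0.5 * stage + 0.2 * tumour_size_cm + 0.8 * smoker
         + 0.02 * (bmi - 25) + 0.03 * (age - 60) - 0.02 * dose_gy)
    return 1 / (1 + math.exp(-z))

def stratify(probability: float, threshold: float = 0.5) -> str:
    """Dichotomise into the low/high-risk groups described above."""
    return "high risk" if probability >= threshold else "low risk"
```

In the study itself, categories like these would then drive the intensity of post-treatment surveillance.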

The results suggested that this technology could be used to help personalise, and therefore improve, the surveillance of patients following treatment based on their risk. This could lead to recurrence being detected earlier in high-risk patients, ensuring that they receive urgent access to the next line of treatment that could potentially improve their outcomes. 

Results from the second phase of the study were recently published. Can you tell us more about this work?

In this phase, as well as clinical data, we used imaging data describing the tumours’ characteristics – a technique known as radiomics – taken from radiotherapy treatment planning CT scans on over 900 NSCLC patients in the UK and Netherlands.

Radiomic data can also be linked with biological markers. We believe it could be a useful tool in both personalising medicine and improving post-treatment surveillance. This data was used to develop and test ML models to see how accurately they could predict recurrence. 

The TNM staging system, which describes the extent and spread of cancer in a patient’s body, is the current gold standard for predicting prognosis. However, our model more accurately identified which NSCLC patients were at a higher risk of recurrence within two years of completing radiotherapy than a model built on the TNM staging system.

How could your findings benefit patients?

We are at an early stage, and there’s a lot more work to do before we have a tool ready for use in the clinic. However, our results suggest that our AI model could be better at predicting tumour regrowth than traditional methods. This means that, using our technology, clinicians may eventually be able to identify which patients are at a higher risk of recurrence and offer them more targeted follow up. If recurrence did occur, this would be detected earlier so patients could be offered the next line of treatment as soon as possible. Meanwhile, low-risk patients could potentially be spared unnecessary follow-up scans and hospital visits.

This is also an exciting project because we don’t have to put patients through extra procedures for the model to work, as the data is routinely collected during the course of their normal treatment. Furthermore, in theory, there’s no reason why we can’t adapt the same tool to predict recurrence for other cancers.

What are the next steps?

So far, we’ve looked at CT scans and clinical data. We know from other areas of research [see next question] that some models have been developed using other patient data, for instance previous biopsy results or blood markers.

The next stage would look to improve the performance of the algorithm with more advanced AI techniques, such as deep learning or multimodal approaches, that incorporate different forms of data. Once the model is optimised, the next stage would likely be a prospective study to see if it can accurately predict risk of recurrence in patients currently starting radiotherapy treatment.

Have you published any other papers on AI recently, and what were the conclusions?

Our group has published a review paper that provides an overview of how AI is being used across the spectrum of cancer care, from screening and diagnosis through to treatment and follow up. We explore its implementation in primary care, radiology, pathology and oncology.

AI application in healthcare data has the potential to revolutionise early cancer diagnosis and may provide support for capacity concerns through automation. It can also allow us to effectively analyse complex data from many modalities, including clinical text, genomic, metabolomic and radiomic data.

In the review, we discuss myriad convolutional neural network – or CNN – models that can detect early-stage cancers on scan or biopsy images with high accuracy. Some had a proven impact on workflow triage. Many commercial solutions for automated cancer detection are becoming available, and we are likely to see increasing adoption in the coming years.

What other advantages could the adoption of AI bring to the sector, and what are some of the cons?

One of the biggest challenges facing healthcare right now is increasing demand, more complex cases and a shortage of workers. AI could augment our workflow, not replacing people, but doing some of the easier jobs so staff can focus on the more challenging tasks.

In the setting of patient decision-support, caution is needed to ensure that models are robustly validated before use.

In our review, we also highlight several challenges around the implementation of AI, including data anonymisation and storage, which can be time-consuming and costly for healthcare institutions.  

We also discuss model bias, including the under-reporting of important demographic information, such as race and ethnicity, and the implications this can have on generalisability.

In terms of how study quality and model uptake can be improved going forwards, quality assurance frameworks, such as SPIRIT-AI, and methods to standardise radiomic feature values across institutions, as proposed by the image biomarker standardisation initiative, may help. Moreover, disease-specific, gold-standard test sets could help clinicians benchmark multiple competing models more readily. 

Despite the above challenges, the implications of AI for early cancer diagnosis are highly promising, and this field is likely to grow rapidly in the coming years.

Championing a health sector-specific approach to AI

28th April 2023

Artificial intelligence is a complex phenomenon. It will impact the way medical research is conducted, how biomedical data are used, and how healthcare professions and organisations are regulated.

In 2021, the European Hospital and Healthcare Federation (HOPE) published a position paper on artificial intelligence (AI). Here, the organisation’s chief executive Pascal Garel provides an update on this and outlines recommendations on how to ensure that the application of AI in healthcare benefits patients and consumers alike.

What would HOPE envisage as the essential components of a European-wide operational definition of artificial intelligence?

A Europe-wide operational definition cannot be implemented as what is perceived as ‘being for the common good’ in one sector might be ethically unacceptable in another. A rigid technical definition risks excluding less complex AI-based systems from the legal framework. An insufficiently clear definition could also inspire different legal interpretations at a national level, thereby defeating its purpose. 

That is why HOPE is advocating a health sector-specific approach to AI. Personal health data are a particularly sensitive category. Leakage or misuse could lead to severe consequences and negative health outcomes. Members of vulnerable groups are especially powerless when it comes to refuting AI-enabled results or administrative decisions about entitlements. Safeguarding fundamental rights, data and privacy protection, and ensuring the safety and security of individuals contributing or using data, are essential. 

While a risk-based approach as outlined in the European Commission’s AI Act has its merits, particularly in sectors where AI deployment is straightforward and the risks of abuse are minor, in a health context the nuances are stronger. Extra care is needed to prevent seemingly ‘low risk’ AI systems inadvertently harming individuals, by revealing their identities or drawing conclusions about them based on biased data. For example, fitness and wellness applications are commercial products and their standards and purposes differ from those of actual medical devices used in healthcare environments.

The uses outlined in the AI Act, as well as in the European Health Data Space (EHDS) are broad, to exploit the market potential of AI solutions as much as possible. Uses must be balanced with ethical and human rights considerations to build support for trustworthy AI. 

What are the relevant EU stakeholder groups?

Regarding the definition of AI in health, the health community should be provided with ample opportunities to co-shape AI policies. The debate is not only about technology, but also about the future of health systems and how healthcare is provided. This includes Member States’ ability to protect vulnerable populations from the ‘AI frenzy’ of Big Tech firms eager to reap profits from securing vast amounts of personal health data. It is about the impact of AI on everybody’s lives.

On the other hand, AI holds great potential to improve healthcare provision and health research. It could even ‘re-humanise’ healthcare if (co-)developed and deployed in an ethical, transparent, and inclusive way. It could, for example, take care of routine health administration and documentation tasks and provide crucial decision support in diagnosis, treatment and follow-up. Many medical disciplines could gain from AI support and researchers could uncover links that were hitherto impossible to detect. Discussions should include patients, healthcare professionals, consumers, researchers, public health experts, and representatives of vulnerable groups and human rights groups.

Should the definition of AI include the range of healthcare settings to which it is applicable and of possible benefit?

Specifying the relevant healthcare settings that can benefit from AI would be beneficial to avoid any loopholes where the agreed rules do not apply. These might not need to be part of the definition of AI as such but could be outlined in a healthcare-specific protocol as part of the legislation. 

It is difficult to capture the diversity of healthcare provision today as many Member States are experimenting with new models of care to lessen the pressure felt by the hospital sector. The AI legal framework should nonetheless reflect this multiplicity. This includes public, private and other categories of hospitals – whether focusing on providing general or specialised services – nursing homes and other long-term care facilities, outpatient facilities, ‘virtual wards’ and at-home care provision.  

Should a legal framework for integration be developed only after specific arenas for which there are clear benefits are defined?

Artificial intelligence is used widely today, and new solutions come to market every week. A legal framework should be devised without delay. This framework needs to balance fostering AI innovation and international collaboration with being mindful of the consequences that could arise from the increased reliance on AI, including physical or mental harm and violation of fundamental rights. AI systems could potentially be adapted to uses other than their declared purposes. They could be deliberately or unknowingly misused and the results biased for various reasons.

Implementing AI for agreed, socially and ethically acceptable uses would improve confidence. Instead, the proposed legal framework (the AI Act coupled with the AI-relevant provisions of the European Health Data Space) is rather confusing and does not provide adequate information about what trustworthy AI in healthcare will look like in practice, as the potential uses remain indistinct.

What are the potential and real barriers to EU-wide adoption of a working definition of AI?

AI serves many different purposes in different fields. Dependence on it will only increase as Europe wishes to reap benefits from harvesting data across sectors. This is one of the biggest real barriers. Different sectors have their own AI needs and have developed their own terminologies, which complicates devising a common working definition. However, a broad and vague overall definition of AI – if supported by more precise sectoral delineations – is still better than a very narrow definition. The latter could lead to exclusion of certain categories of systems or else encourage developers to maintain that their systems do not classify as AI although they contain very similar features.

Creating a legislative framework is not only about terminology but also about understanding the broader environment in which AI operates. Like any other technology, AI is dependent on infrastructure, the availability of quality data, and financial and human resources. In healthcare, developing – and properly communicating – a clear strategic vision for AI would alleviate common fears and catapult it from ‘science fiction’ to a recognised tool for health systems to improve quality of care.

Another key barrier is that the European AI framework intersects and overlaps with many other existing or proposed EU laws and initiatives covering mandatory responsibilities for manufacturers and users of digital technologies and data. These include sectoral product legislation (e.g. the Machinery Directive and General Product Safety Directive) and legislation dealing with data liability and safety (e.g. the Data Governance Act and Open Data Directive). Proper implementation of GDPR must not be hampered by the development of an AI-specific legal architecture, and healthcare-relevant EU legislation must also be adapted.

Building a robust and future-oriented cybersecurity legal framework will be especially important for the development and protection of a rights-based, human-centric European approach to artificial intelligence. 

Ultrasound-guided acquisition of correct block view in novice anaesthetists improved by AI

23rd September 2022

The ultrasound-guided acquisition of the correct block view is significantly better among novice anaesthetists with AI assistance

Ultrasound-guided image acquisition of the correct block view and identification of the correct sono-anatomical structure is significantly improved among novice anaesthetists when given access to an assistive artificial intelligence (AI) device according to the findings of a randomised trial by UK researchers.

Successful ultrasound-guided regional anaesthesia requires adequate visualisation of neural and surrounding structures together with monitoring the spread of a local anaesthetic. In fact, the initial challenge presented to a practitioner during ultrasound-guided regional anaesthesia is the interpretation of sono-anatomy upon placing a probe on the patient.

AI-based real-time anatomy identification can successfully interpret anatomical structures in real-time sonography and assist junior anaesthetists during ultrasound-guided peripheral nerve block. Moreover, the use of an AI system has been shown to be helpful in identifying specific anatomical structures and for confirming the correct ultrasound view in scans.

Nevertheless, while AI systems aid recognition of structures, the additional value of an AI system over that of a novice anaesthetist who has undergone training is somewhat less clear. For the present study, UK researchers recruited non-expert anaesthetists who underwent standard training in ultrasound scanning for six peripheral nerve blocks.

The researchers undertook a randomised, prospective interventional study to evaluate the impact of an AI system (ScanNav) on the performance of these novice anaesthetists during ultrasound scanning for specific nerve blocks.

The secondary aim was to determine whether the AI system improved the correct identification of sono-anatomical structures on the block view. The novices first performed a single scan for each of the six peripheral nerve blocks while being assessed by an expert. Following this training, all performed a scan for each of the six blocks, half of which were performed using the AI tool.

Ultrasound-guided scanning and the AI support tool

A total of 21 novice anaesthetists were recruited and a total of 126 scans were undertaken. Participants identified the correct block view in 75.1% of scans without the AI system, though this figure increased to 90.3% (p = 0.031) with the system.

Additionally, identification of the correct structure improved from 77.4% to 88.8% when using the AI system (p = 0.002). However, there were no significant differences in self-reported confidence or the time required to perform the scan between aided and unaided scans.

The authors concluded that use of the AI system among non-expert anaesthetists, improved ultrasound image acquisition and interpretation, adding that in the future such technology may be of value to augment the performance of non-experts and expand patient access to the technique.

Bowness JS et al. Evaluation of the impact of assistive artificial intelligence on ultrasound scanning for regional anaesthesia. Br J Anaesth 2022.

AI assistance improved X-ray fracture detection with no increase in reading time

10th March 2022

Artificial intelligence (AI) assistance increased X-ray detection of fractures for radiologists without increasing the reading time.

Inclusion of artificial intelligence (AI) assistance improves the detection of fractures for both radiologists and non-radiologists without increasing the reading time. This was the finding of a retrospective analysis by a team from the Departments of Radiology, Orthopaedic Surgery and Family Medicine at Boston University School of Medicine, Boston, US.

Diagnostic errors, especially within a busy emergency department, can include missed fractures. Indeed, one study of 953 diagnostic errors revealed that 79.7% of these errors were due to missed fractures, with the most common reason (77.8%) being misreading of radiographs.

Furthermore, although the aforementioned study was from 2001, a 2018 Dutch study found that of a total of 25,957 fractures, 289 (1.1%) were missed by emergency care physicians. The authors concluded that adequate training of physicians in radiographic interpretation was essential to increase diagnostic accuracy.

The use of AI assistance for the detection of fractures has been examined in a number of studies evaluating fractures in different parts of the body. One study evaluated fractures in 11 body areas, with the authors concluding that there were significant improvements in diagnostic accuracy with deep learning methods; however, that study did not include radiologists to interpret the results.

For the present study, the US team decided to expand upon previous analyses, including not just radiologists but a wide range of clinicians from different specialities, such as orthopaedic surgeons, emergency care physicians, rheumatologists and family physicians, and fractures from different areas of the body.

The AI algorithm was developed using 60,170 trauma radiographs from 22 different institutions, split into training, validation and internal test sets.

The team used a retrospective design, and the ground truth was established by two experienced musculoskeletal radiologists, with 12 and 8 years of experience, who independently interpreted all of the study scans without clinical information.

For the study, the team included only acute fractures as positive findings. AI performance was assessed using receiver operating characteristic (ROC) curves, from which sensitivity, specificity and area under the curve (AUC) values were determined.
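As an illustration of how these metrics relate (this is not the study's own code), the sketch below derives sensitivity and specificity at a score threshold and computes the AUC via the rank-based Mann-Whitney formulation; the labels, scores and threshold are invented purely for demonstration.

```python
# Illustrative sketch: sensitivity, specificity and AUC from predicted
# scores and ground-truth labels (1 = fracture present, 0 = absent).
# All data below is made up for demonstration.

def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) at a score cut-off."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """AUC equals the probability that a randomly chosen positive case scores
    higher than a randomly chosen negative one (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
sens, spec = sensitivity_specificity(labels, scores, threshold=0.5)
print(sens, spec, auc(labels, scores))
```

Sweeping the threshold over all observed scores would trace out the full ROC curve from which such values are read.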

AI assistance and interpretation of fractures

A total of 480 patients with a mean age of 59 years (61.8% female) were included with 350 fractures. Included fractures were present on: feet and ankles, knee and leg, hip and pelvis, hand and wrist, elbow and arm, shoulder and clavicle, rib cage and thoracolumbar spine. 

The sensitivity per patient was estimated at 64.8% without AI assistance and 75.2% with assistance, a 10.4% estimated AI effect (p < 0.001 for superiority). The associated specificity was 90.6% without AI and 95.6% with AI, a +5% estimated effect of AI (p = 0.001 for non-inferiority).

In addition, the use of AI assistance shortened the average reading time by 6.3 seconds per examination. Furthermore, the per-patient gain in sensitivity was significant in all of the fracture regions examined, ranging from +8% to +16.2% (p < 0.05), apart from the shoulder, clavicle and spine, where the increase was non-significant.

Based on their findings, the authors concluded that AI assistance improves the sensitivity of fracture detection for both radiologists and other non-radiology clinicians as well as slightly reducing the time required to interpret the radiographs.

Guermazi A et al. Improving Radiographic Fracture Recognition Performance and Efficiency Using Artificial Intelligence. Radiology 2022.

FDA approves AI software to aid detection of prostate cancer

28th September 2021

AI software designed to identify an area on a prostate biopsy image with a high likelihood of cancer has received FDA approval.

Prostate cancer is the second most common cancer in men, with 1.3 million new cases recorded in 2018. Confirmation of a prostate cancer diagnosis can only be achieved via biopsy and subsequent examination of digitised slides of the biopsy. Now, the first artificial intelligence (AI) software for the in vitro diagnostic detection of cancer in prostate biopsies has been approved by the FDA in the US. The software is designed to identify an area of interest on the prostate biopsy image with the highest likelihood of harbouring cancer. This alerts the pathologist if the area of concern has not been noticed on their initial review and thus can assist them in their overall assessment of the biopsy slides.

The AI system approved is Paige Prostate, and it is anticipated to increase the number of identified prostate biopsy samples with cancerous tissue and ultimately save lives. The FDA approval was based on a study of Paige Prostate undertaken with three pathologists. In the study, which was conducted in two phases, each pathologist was required to assess 232 anonymised whole slide images and asked to dichotomise these as either cancerous or benign; only 93 slides (40%) were in fact cancerous. In the first phase, the pathologists assessed the scans alone, whereas in the second phase, four weeks later, the same scans were reviewed, this time using the AI software, Paige Prostate.

In the study, the Paige Prostate software alone had a sensitivity for detecting cancer of 96% and a specificity of 98%. Without the use of Paige Prostate, the pathologists averaged a sensitivity of 74%, but with the addition of the AI software, their average sensitivity increased significantly to 90% (p < 0.001). Addition of Paige Prostate mainly improved pathologists’ detection of grade 1 to 3 cancers. However, despite the greater sensitivity with Paige Prostate, there was no significant difference in specificity (p = 0.327), since this was already high at an average of 97% without Paige Prostate.

Source: FDA press release, September 2021.

AI predicts COVID-19 status before a PCR test

9th October 2020

Current testing for COVID-19 relies on a PCR test using nasopharyngeal swabs although results can take up to 48 hours and sometimes even longer.

Now a team from Weill Cornell Medicine, New York, has created an artificial intelligence (AI) system that can use routine test results to determine whether a patient has COVID-19. Normally, clinicians order a battery of blood tests in addition to a PCR test, including routine laboratory tests and a chest X-ray, and these results are generally available within one to two hours. The researchers therefore hypothesised that the results of routine laboratory tests could be used to predict whether someone was infected with COVID-19 without the PCR test. They included patient demographics such as age, sex and race in a machine learning model and incorporated the results of 27 routine tests, with the laboratory results available up to two days before the PCR test result. The dataset included a total of 5,893 patients admitted to hospital between March and April 2020; the team excluded individuals under 18 years of age, those whose PCR result was inconclusive and patients without laboratory test results within the two days prior to the PCR test.

A total of 3,356 patients tested for COVID-19 were included, with a mean age of 56 years, of whom 1,402 were positive and 54% were emergency department admissions. Using a machine learning technique known as a gradient-boosted decision tree, the algorithm identified COVID-19 positivity with an overall sensitivity of 76% and a specificity of 81%. However, limiting the analysis to emergency department patients increased the sensitivity to 80% and the specificity to 83%. Moreover, the algorithm correctly identified those who had a negative COVID-19 test result. A recognised limitation was that testing was specific to those admitted to hospital with moderate to severe disease, so further work is required to identify milder cases.
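To give a sense of the technique named above, here is a minimal, self-contained sketch of gradient boosting with single-split decision stumps and logistic loss; the toy data, learning rate and function names are illustrative assumptions, not the Weill Cornell model, which used 27 laboratory features plus demographics.

```python
import math

def stump_fit(X, residuals):
    """Find the (feature, threshold) split that best fits the residuals in a
    least-squares sense; return a predictor that emits the matching leaf mean."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
            if best is None or sse < best[0]:
                best = (sse, j, t, lm, rm)
    _, j, t, lm, rm = best
    return lambda row: lm if row[j] <= t else rm

def gbdt_fit(X, y, n_trees=20, lr=0.5):
    """Boosting loop: each stump fits the pseudo-residuals (the negative
    gradient of the logistic loss) of the current ensemble."""
    trees, F = [], [0.0] * len(X)
    for _ in range(n_trees):
        resid = [yi - 1 / (1 + math.exp(-fi)) for yi, fi in zip(y, F)]
        tree = stump_fit(X, resid)
        trees.append(tree)
        F = [fi + lr * tree(row) for fi, row in zip(F, X)]
    return trees

def gbdt_predict(trees, row, lr=0.5):
    # Positive ensemble score corresponds to probability > 0.5
    return 1 if sum(lr * t(row) for t in trees) > 0 else 0

# Toy data: the label is 1 whenever the second feature exceeds 0.5
X = [[0.1, 0.2], [0.4, 0.9], [0.8, 0.1], [0.3, 0.7], [0.9, 0.8], [0.2, 0.4]]
y = [0, 1, 0, 1, 1, 0]
model = gbdt_fit(X, y)
print([gbdt_predict(model, row) for row in X])
```

Production implementations use deeper trees, regularisation and second-order leaf updates, but the fit-the-residuals loop above is the core idea.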

Nevertheless, the authors concluded that their algorithm is potentially of value in identifying whether patients have COVID-19 before they receive the results of a PCR test.

Yang HS et al. Routine laboratory blood tests predict SARS-CoV-2 infection using machine learning. Clin Chem 2020.