Take a look at a selection of our recent media coverage:
23rd September 2022
Ultrasound-guided image acquisition of the correct block view, and identification of the correct sono-anatomical structures, are significantly improved among novice anaesthetists given access to an assistive artificial intelligence (AI) device, according to the findings of a randomised trial by UK researchers.
Successful ultrasound-guided regional anaesthesia requires adequate visualisation of neural and surrounding structures, together with monitoring of the spread of the local anaesthetic. Indeed, the initial challenge for a practitioner during ultrasound-guided regional anaesthesia is the interpretation of sono-anatomy on placing the probe on the patient. AI systems for real-time anatomy identification can interpret anatomical structures in real-time sonography and assist junior anaesthetists during ultrasound-guided peripheral nerve block, and have been shown to help in identifying specific anatomical structures and in confirming the correct ultrasound view. Nevertheless, while AI systems aid recognition of structures, the additional value of such a system over that of a novice anaesthetist who has undergone training is less clear.

For the present study, the UK researchers recruited non-expert anaesthetists who underwent standard training in ultrasound scanning for six peripheral nerve blocks. They undertook a randomised, prospective interventional study to evaluate the impact of an AI system (ScanNav™) on the performance of these novice anaesthetists during ultrasound scanning for specific nerve blocks. The secondary aim was to determine whether the AI system improved the correct identification of sono-anatomical structures on the block view. During training, the novices performed a single scan of the six peripheral nerve blocks while being assessed by an expert. Following this training, all performed a scan for each of the six blocks, half of which were performed using the AI tool.
Ultrasound-guided scanning and the AI support tool
A total of 21 novice anaesthetists were recruited and a total of 126 scans were undertaken. Participants identified the correct block view in 75.1% of scans without the AI system, though this figure increased to 90.3% (p = 0.031) with the system. Additionally, identification of the correct structure improved from 77.4% to 88.8% when using the AI system (p = 0.002). However, there were no significant differences in self-reported confidence or the time required to perform the scan between aided and unaided scans.
The authors concluded that use of the AI system by non-expert anaesthetists improved ultrasound image acquisition and interpretation, adding that in future such technology may help to augment the performance of non-experts and expand patient access to the technique.
Bowness JS et al. Evaluation of the impact of assistive artificial intelligence on ultrasound scanning for regional anaesthesia. Br J Anaesth 2022.
10th March 2022
Inclusion of artificial intelligence (AI) assistance improves the detection of fractures for both radiologists and non-radiologists without increasing the reading time. This was the finding of a retrospective analysis by a team from the Departments of Radiology, Orthopaedic Surgery and Family Medicine, Boston University School of Medicine, Boston, US.
Diagnostic errors, especially within a busy emergency department, can include missed fractures. Indeed, one study of 953 diagnostic errors revealed that 79.7% were due to missed fractures, with the most common reason (77.8%) being misreading of the radiograph. Although that study dates from 2001, a 2018 Dutch study found that of a total of 25,957 fractures, 289 (1.1%) were missed by emergency care physicians; its authors concluded that adequate training of physicians in radiographic interpretation is essential to increase diagnostic accuracy. The use of AI assistance for the detection of fractures has been examined in a number of studies evaluating fractures in different parts of the body. One study assessed fractures in 11 body areas and concluded that deep learning methods significantly improved diagnostic accuracy; however, it did not include radiologists to interpret the results.
For the present study, the US team expanded upon previous analyses by including not just radiologists but a wide range of clinicians from different specialities, such as orthopaedic, emergency care, rheumatology and family physicians, together with fractures from different areas of the body. The AI algorithm was developed using data from 60,170 trauma radiographs from 22 different institutions, split into training, validation and internal test sets.
The team used a retrospective design, and the ground truth was established by two experienced musculoskeletal radiologists, with 12 and 8 years of experience, who independently interpreted all of the study scans without access to clinical information. Only acute fractures were counted as a positive finding. AI performance was assessed using receiver operating characteristic (ROC) curves, from which sensitivity and specificity values, together with the area under the curve (AUC), were determined.
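Sensitivity and specificity, the two metrics reported per reader throughout the study, follow directly from the confusion-matrix counts. As a simple illustration (not the study's code, and with invented toy data), the Python sketch below computes both for binary fracture calls against a ground-truth labelling:

```python
# Illustrative sketch: sensitivity and specificity for binary reads.
# Labels: 1 = acute fracture present, 0 = no acute fracture.

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels and calls."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)  # detected fractures; correct negatives

# Toy example: 10 cases, 5 with a fracture; the reader misses two
# fractures and makes one false-positive call.
truth = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
calls = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, calls)
print(sens, spec)  # 0.6 0.8
```

An ROC curve is obtained by sweeping a decision threshold over the model's continuous output and plotting sensitivity against 1 − specificity; the AUC summarises that curve in a single number.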
AI assistance and interpretation of fractures
A total of 480 patients with a mean age of 59 years (61.8% female) were included with 350 fractures. Included fractures were present on: feet and ankles, knee and leg, hip and pelvis, hand and wrist, elbow and arm, shoulder and clavicle, rib cage and thoracolumbar spine.
The per-patient sensitivity was estimated at 64.8% without AI assistance and 75.2% with assistance, an estimated AI effect of +10.4% (p < 0.001 for superiority). The corresponding specificity was 90.6% without AI and 95.6% with AI, an estimated effect of +5.0% (p = 0.001 for non-inferiority).
In addition, the use of AI assistance shortened the average reading time by 6.3 seconds per examination. Furthermore, the per-patient gain in sensitivity was significant in all of the fracture regions examined, ranging from +8.0% to +16.2% (p < 0.05), apart from the shoulder, clavicle and spine, where the increase was non-significant.
Based on their findings, the authors concluded that AI assistance improves the sensitivity of fracture detection for both radiologists and other non-radiology clinicians as well as slightly reducing the time required to interpret the radiographs.
Guermazi A et al. Improving Radiographic Fracture Recognition Performance and Efficiency Using Artificial Intelligence. Radiology 2022.
28th September 2021
Prostate cancer is the second most common cancer in men, with 1.3 million new cases recorded in 2018. Confirmation of a prostate cancer diagnosis can only be achieved via biopsy and subsequent examination of digitised slides of the biopsy. Now, the first artificial intelligence (AI) software for the in vitro diagnostic detection of cancer in prostate biopsies has been approved by the FDA in the US. The software is designed to identify the area of the prostate biopsy image with the highest likelihood of harbouring cancer. This alerts the pathologist if the area of concern was not noticed on their initial review and can thus assist them in their overall assessment of the biopsy slides.
The AI system approved is Paige Prostate, and it is anticipated to increase the number of identified prostate biopsy samples with cancerous tissue and ultimately save lives. The FDA approval was based on a study of Paige Prostate undertaken with three pathologists. In the study, which was conducted in two phases, each pathologist was required to assess 232 anonymised whole-slide images and dichotomise them as either cancerous or benign; only 93 slides (40%) were in fact cancerous. In the first phase, the pathologists assessed the scans unaided, whereas in the second phase, four weeks later, the same scans were reviewed using the AI software, Paige Prostate.
In the study, the Paige Prostate software alone had a sensitivity for detecting cancer of 96% and a specificity of 98%. Without Paige Prostate, the pathologists averaged a sensitivity of 74%; with the addition of the AI software, their average sensitivity increased significantly to 90% (p < 0.001). The addition of Paige Prostate mainly improved pathologists’ detection of grade 1 to 3 cancers. However, despite the greater sensitivity, there was no significant difference in specificity (p = 0.327), since this was already high at an average of 97% without Paige Prostate.
Source: FDA press release, September 2021.
9th October 2020
Now a team from Weill Cornell Medicine, New York, has created an artificial intelligence (AI) system that can use routine test results to determine whether a patient has COVID-19. Clinicians normally order a battery of blood tests in addition to a PCR test, including routine laboratory tests and a chest X-ray, with results generally available within 1–2 hours. The researchers therefore hypothesised that routine laboratory results could be used to predict whether someone was infected with COVID-19 without a PCR test. They included patient demographics such as age, sex and race in a machine learning model and incorporated the results of 27 routine tests, using laboratory results obtained within the two days before the PCR result. The dataset comprised a total of 5,893 patients admitted to hospital between March and April 2020; the team excluded individuals under 18 years of age, those whose PCR result was inconclusive, and patients without laboratory test results within the two days prior to the PCR test.
A total of 3,356 patients tested for COVID-19 were included, with a mean age of 56 years, of whom 1,402 were positive and 54% were emergency department admissions. Using a machine learning technique known as a gradient boosting decision tree, the algorithm identified COVID-19 positivity with an overall sensitivity of 76% and a specificity of 81%. Limiting the analysis to emergency department patients increased the sensitivity to 80% and the specificity to 83%. Moreover, the algorithm correctly identified those who had a negative COVID-19 test result. A recognised limitation was that testing was specific to those admitted to hospital with moderate to severe disease, so further work is required to identify milder cases.
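Gradient boosting builds an additive model by repeatedly fitting a small tree to the negative gradient of the loss (for log loss, simply the label minus the current predicted probability) and adding a damped copy of it to the ensemble. As a hedged illustration of the technique only (not the authors' code, which used many features and a production library), the sketch below boosts one-dimensional decision stumps in pure Python on invented toy data:

```python
# Minimal gradient boosting of decision stumps for binary classification
# (log loss). Illustrative only: real systems such as the study's model
# use library implementations over many features.
import math

def fit_stump(x, residuals):
    """Best single-threshold stump, least-squares fit to the residuals."""
    best = None
    for thr in sorted(set(x))[:-1]:  # exclude max so both sides are non-empty
        left = [r for xi, r in zip(x, residuals) if xi <= thr]
        right = [r for xi, r in zip(x, residuals) if xi > thr]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda xi: lmean if xi <= thr else rmean

def boost(x, y, rounds=20, lr=0.5):
    """Fit an additive model F(x); y in {0, 1}. Returns xi -> P(y=1)."""
    F = [0.0] * len(x)  # current ensemble scores on the training data
    stumps = []
    for _ in range(rounds):
        p = [1 / (1 + math.exp(-f)) for f in F]
        residuals = [yi - pi for yi, pi in zip(y, p)]  # -gradient of log loss
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        F = [f + lr * stump(xi) for f, xi in zip(F, x)]
    return lambda xi: 1 / (1 + math.exp(-sum(lr * s(xi) for s in stumps)))

# Toy data: a single "lab value" above ~5 signals a positive result.
x = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0, 0, 0, 0, 1, 1, 1, 1]
predict = boost(x, y)
print(predict(2) < 0.5, predict(8) > 0.5)  # True True
```

Each round thus corrects the errors left by the rounds before it, which is what distinguishes boosting from averaging independently grown trees as in a random forest.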
Nevertheless, the authors concluded that their algorithm is potentially of value in identifying whether patients have COVID-19 before they receive the results of a PCR test.
Yang HS et al. Routine laboratory blood tests predict SARS-CoV-2 infection using machine learning. Clin Chem 2020; https://doi.org/10.1093/clinchem/hvaa200