
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

Sponsored: High-level disinfection of ultrasound probes

8th July 2021

A large population-level study has revealed an unacceptable risk of infection following endocavitary ultrasound procedures. Nanosonics is intent on ensuring that vulnerable patients are protected from the risk of cross-contamination.

Support for the development of this advertorial was provided by Nanosonics.

Patients can be at risk from ultrasound-associated infections when low-level disinfection (LLD) is the standard of care. In order to quantify this risk, Scotland’s National Health Service undertook a retrospective analysis of microbiological and prescription data through linked national health databases. Patient records were examined in the 30-day period following semi-invasive ultrasound probe (SIUP) procedures.

The study analysed almost one million patient journeys that occurred during a six-year period from 2010.1

Of the 982,911 patients followed, 330,500 were gynaecological patients, of whom 60,698 had undergone a transvaginal (TV) ultrasound procedure. These patients were found to be at a 41% greater risk of infection and a 26% greater risk of needing an antibiotic prescription in the 30 days following their transvaginal ultrasound procedure when compared with gynaecological patients who had not undergone a transvaginal ultrasound.

During the study period, 90.5% of facilities reported that they were performing low-level disinfection for transvaginal ultrasound probes. These patients were therefore at greater risk of infection due to inadequate reprocessing, and the study concluded: “Hence failure to comply with existing guidance on [high-level disinfection] of SIUPs will continue to result in an unacceptable risk of harm to patients.”1

The diverse use of ultrasound probes is now prompting a renewed focus on correct probe reprocessing to ensure patient safety. To ensure best practice standards, decontamination experts and ultrasound users need to work together to reduce the risk of infection that is associated with using ultrasound probes. 

Ultrasound procedures are performed in various inpatient and outpatient settings by a wide range of health professionals. This has increased the use of surface probes to guide procedures such as biopsies, cell retrieval, cannulation, catheterisation, injections, ablations, surgical aspirations, and drainages. Across these procedures, the probe has the potential to contact various patient sites – including intact skin, non-intact skin, mucous membranes and sterile tissue. This presents a complex challenge, as contact with these various body sites requires differing levels of disinfection or sterilisation between patient uses. Failure to adequately clean and disinfect medical devices like ultrasound probes between patients poses a serious risk to patient safety. 

In 2012, a patient in Wales died from a hepatitis B infection – most likely caused by a failure to appropriately decontaminate a transoesophageal echocardiography probe between patients. As a result of this fatality, a Medical Device Alert was issued by the Medicines and Healthcare products Regulatory Agency (MHRA) advising users to appropriately decontaminate all types of reusable ultrasound probes.2

The UK and European guidelines require ultrasound probes that come into contact with mucous membranes and non-intact or broken skin to be high-level disinfected. In particular, automated and validated processes for ultrasound reprocessing are preferred. This is supported by a study relating to manual disinfection methods, which found that only 1.4% of reprocessing systems were fully compliant when using manual methods, compared to 75.4% when using semi-automated disinfection methods.3

The Spaulding classification system

The Spaulding classification system4 should be applied before a procedure commences, taking into account which tissues or body sites the probe may contact.

This classification system is a widely adopted disinfection framework for classifying medical devices, based on the degree of infection transmission risk, and requires the following approaches:

  • Critical devices are defined as those that come into contact with sterile tissue or the bloodstream. Probes in this category should generally be cleaned and sterilised. Where sterilisation is not possible, high-level disinfection combined with a sterile probe cover is acceptable.
  • Semi-critical devices contact intact mucous membranes and do not ordinarily penetrate sterile tissue. Ultrasound probes scanning over non-intact skin are also considered semi-critical. Semi-critical ultrasound probes include endocavitary probes, which should be used with a cover in addition to being high-level disinfected.
  • Non-critical devices only contact intact skin. This category also includes surfaces in healthcare settings that are not intended for direct patient contact. These devices and surfaces should be cleaned and low-level disinfected.

It is important to note the difference between cleaning and low-level disinfection. Cleaning is the removal of soil and visible material until the item is clean on visual inspection. Low-level disinfection is the elimination of most bacteria, some fungi and some viruses.
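As an illustration only (not clinical guidance, and not part of the Spaulding publication), the classification logic described above can be sketched as a simple mapping from the intended contact site to the minimum level of reprocessing required between patients; the contact-site labels used here are assumptions made for the sketch.

```python
# A minimal sketch (illustrative only) of the Spaulding logic described above:
# the intended contact site determines the minimum level of reprocessing
# required between patients.
def required_reprocessing(contact_site: str) -> str:
    critical = {"sterile tissue", "bloodstream"}
    semi_critical = {"mucous membranes", "non-intact skin"}
    non_critical = {"intact skin", "no patient contact"}

    if contact_site in critical:
        # Sterilisation preferred; high-level disinfection plus a sterile cover
        # is acceptable where sterilisation is not possible.
        return "sterilisation (or high-level disinfection with sterile cover)"
    if contact_site in semi_critical:
        return "high-level disinfection (endocavitary probes also require a cover)"
    if contact_site in non_critical:
        return "cleaning and low-level disinfection"
    raise ValueError(f"Unknown contact site: {contact_site}")

print(required_reprocessing("mucous membranes"))  # high-level disinfection ...
```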

A final and important point for consideration is the use of probe covers. 

While many ultrasound users and sonographers believe that their transvaginal ultrasound patients are protected from infection risk by using barrier shields and/or condoms, research has shown that up to 13% of condoms fail and up to 5% of commercial covers fail. Probe covers may have microscopic tears or breakages which can allow microorganisms to pass through.5

Conclusion

Ultrasound users should work with their decontamination colleagues to understand the current UK and European guidelines for reprocessing ultrasound probes. There are patient risks associated with ultrasound usage when proper disinfection procedures are not followed, as well as from ancillary products such as contaminated ultrasound gel. While the increased use of ultrasound has brought many benefits for patients, effective education and disinfection protocols are required to minimise the risk of infection.

Automated high-level disinfection

The trophon® system is designed to reduce the risks of infection transmission through automated high-level disinfection of transvaginal, transrectal and surface probes. With over 25,000 units operating worldwide, 80,000 people each day are protected from the risk of cross-contamination with trophon devices. As a fully enclosed system, trophon2 can be placed at the point of care to integrate with clinical workflows and maintain patient throughput. trophon technology# uses proprietary hydrogen peroxide disinfectant that is sonically activated to create a mist. Free radicals in the mist have oxidative properties enabling the disinfectant to kill bacteria, fungi and viruses. The mist particles are so small that they reach crevices, grooves and imperfections on the probe surface. Nanosonics works collaboratively with probe manufacturers to carry out extensive probe compatibility testing. More than 1000 surface and intracavity probes from all major and many specialist probe manufacturers are approved for use with trophon devices.

# The trophon family includes the trophon EPR and trophon2 devices which share the same core technology of sonically-activated hydrogen peroxide.

References

  1. Scott D et al. Risk of infection following semi-invasive ultrasound procedures in Scotland, 2010 to 2016: A retrospective cohort study using linked national datasets. Ultrasound 2018;26(3):168–77.
  2. Medicines and Healthcare products Regulatory Agency (MHRA). Medical Device Alert. Reusable transoesophageal echocardiography, transvaginal and transrectal ultrasound probes (transducers) Document: MDA/2012/037. 2012.
  3. Ofstead CL et al. Endoscope reprocessing methods: Prospective study on the impact of human factors and automation. Gastroenterol Nurs 2010;33(4):304–11.
  4. Spaulding EH. Chemical disinfection of medical and surgical materials. In: Lawrence C, Block SS, editors. Disinfection, sterilization, and preservation. Philadelphia (PA): Lea & Febiger; 1968:517–31.
  5. Basseal JM, Westerway SC, Hyett JA. Analysis of the integrity of ultrasound probe covers used for transvaginal examinations. Infect Dis Health 2020 Mar;25(2):77–81.

Combined PET/MRI scanning identifies features of sports-related brain injuries

Sports-related concussion (SRC) occurs when an external force is transmitted to the head and produces transient neurological symptoms. However, there is increasing evidence that individuals who have experienced repeated SRCs are found, at autopsy, to display accumulation and aggregation of the protein tau, which normally helps stabilise neurons, together with persistent neuroinflammation. In addition, traumatic brain injury (TBI) is a chronic disease that leads to progressive white matter atrophy and persistent inflammation. It is therefore possible that repeated SRC might represent a harbinger of TBI, but the evidence for this possible association is based on findings from autopsies.

Is it possible therefore, wondered a team from the Department of Clinical Sciences, Lund University, Sweden, that imaging of the brains of individuals who have suffered SRCs and those with TBI might reveal similar changes? 

The researchers recruited healthy young adults, who served as controls, athletes who had previously experienced SRCs, and individuals with moderate-to-severe TBI. For the study, the researchers combined positron emission tomography (PET) and magnetic resonance imaging (MRI) to image the brains of their subjects. On the day of the scans, all participants were assessed using the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), which provides a measure of attention, language, memory and visuospatial/constructive skills, i.e., overall cognitive skills, with higher scores indicating less cognitive impairment. Prior to the PET scans, participants were injected with two tracers: the neuroinflammation tracer [11C]-PK11195 and, later, the tau tracer [18F]-THK5317, which assesses tau burden. Neurofilament-light (NF-L), a marker of neuroaxonal damage, was also measured. The MRI scans were performed during PET scanning.

Findings

A total of 9 controls, 12 SRC and 6 TBI participants were recruited, with a similar mean age (26 years) and with 4 male patients in the control and TBI groups. Among the 12 SRC participants, 8 had been ice hockey players and the others were either footballers or Alpine skiers. Both the TBI and SRC groups had lower RBANS scores compared with controls (75, 80 and 105.5, respectively; p < 0.05). Free tau levels were lowest in those with a TBI (reflecting greater aggregation) compared with controls and those with SRC (3.4 pg/ml, 4.0 pg/ml and 4.7 pg/ml, respectively). Similarly, the highest levels of NF-L (i.e., greater neuroaxonal damage) were seen in those with TBI compared with controls and SRC (10, 6 and 8, respectively).

Discussing these findings, the authors outlined how, at a group level, both young athletes and TBI patients had increased levels of tau aggregation and neuroinflammation, even though the imaging had occurred six months, and up to several years, after the last SRC or TBI. The authors suggested that this implied a persistent pathology and thought that the reduced free tau levels might be a consequence of decreased release from damaged neurons.

They concluded that the presence of both increased tau aggregation and neuroinflammation among those with TBI and SRC implied a similar pathology, and that follow-up PET imaging was required to establish whether the observed changes persist over time and if such changes are associated with clinical symptoms.

Citation
Marklund N et al. Tau aggregation and increased neuroinflammation in athletes after sports-related concussions and in traumatic brain injury patients – A PET/MR study. NeuroImage Clin 2021;30:102665. https://doi.org/10.1016/j.nicl.2021.102665

International radiologist survey identifies need for AI training

With little known about radiologists’ views on the implementation of artificial intelligence (AI) and how this might impact on practice, an international survey sought answers on this important topic. 

A 2019 international survey of radiologists revealed a limited knowledge of artificial intelligence (AI) and a genuine fear that the technology would lead to their replacement in the coming years. This fear was found, in part, to be driven by a lack of understanding of the role of AI, with the result that few expressed a proactive attitude towards the technology. Having identified several factors, a team from the Department of Radiology, University Medical Center, Utrecht, The Netherlands, decided to expand upon their earlier findings and further explore the expectations among radiologists regarding the potential implementation of AI systems, possible barriers to adoption and the perceived need for AI education during residency training.

The team created a web-based survey that included 39 questions which sought to determine demographics, awareness and existing knowledge of AI, respondents’ expectations of the technology and any hurdles to implementation. The survey was piloted with ten radiologists and then translated into English, French, German, Spanish, Italian, Dutch, Czech, Russian and Turkish and distributed electronically through the Italian, French and Dutch radiology societies, as well as the European Society of Medical Imaging Informatics and via social media. 

Findings

A total of 1086 respondents from 54 countries, with a median age of 38 years (65% male), completed the survey. Most of the respondents (83%) were based in Europe, although a small number came from Africa (1%), Asia (7%) and North America (6%). Among the respondents, the majority (66%) were radiologists and the remainder were either fellows or residents. When asked whether AI would improve diagnostic radiology, most respondents (89%) agreed that it would, and the majority (85%) also felt that AI would alter the future of radiologists. With respect to the expected role and benefits of AI in diagnostic radiology, the most frequently cited roles were as a second reader (78%) and workflow optimisation (77%). Interestingly, 47% reported that AI would serve as a partial replacement for radiologists, with only 1% thinking that it would represent a complete replacement.

The potential hurdles to implementation cited included ethical and legal issues (62%), lack of knowledge among relevant stakeholders (56%) and limitations of digital infrastructure (35%). Additionally, both the high cost of AI software development and the cost of the software itself were seen as barriers to implementation, by 35% and 38% of respondents respectively. Most respondents (79%) also felt that AI education should be incorporated into residency training programmes, and this view was more common among older radiologists, although only a minority (23%) thought that imaging informatics and AI should become a radiology subspecialty. In addition, three-quarters (75%) of respondents stated that they were planning to learn about AI.

In discussing these findings, the authors noted how many (82%) of the respondents expected that AI would cause a significant change to the profession within ten years but, on a positive note, most felt that AI systems could serve as a second reader and assist with workflow optimisation within departments. They concluded that the data suggested broad support across the radiology community for the incorporation of AI into residency programmes while, at the same time, recognising that legal and ethical issues, together with digital infrastructure constraints, remained an overlooked challenge.

Citation
Huisman M et al. An international survey on AI in radiology in 1041 radiologists and radiology residents – part 2: expectations, hurdles to implementation, and education. Eur Radiol 2021. https://doi.org/10.1007/s00330-021-07782-4

Manganese enhances MRI imaging of viable tissue after myocardial infarction

After a myocardial infarction, assessing the extent of damage is essential but difficult. Now manganese-enhanced MRI offers an innovative approach to evaluating myocardial viability within one hour of an infarct.

Cardiac imaging has become an important tool in the assessment of heart disease, although current imaging is unable to quantify one of the most important determinants of patient morbidity: myocardial viability. Although gadolinium-enhanced magnetic resonance imaging is used to assess myocardial damage such as scar tissue, potential problems with its use include accumulation of chelated gadolinium within the infarct area, and some evidence points to accumulation of the element in the brain. Alternatives such as PET scanning can be used to evaluate myocardial metabolism via the accumulation of the radioactive glucose analogue [18F]-fluorodeoxyglucose within metabolically active cells, although the analogue can also be taken up by immune cells present within the infarct.

Myocardial contractility is regulated by changes in the level of intracellular calcium, but intracellular calcium cannot be measured using non-invasive techniques. One solution is to use an alternative metal ion that enters living cells through the same transport systems as calcium: manganese, which is also used as an MRI contrast agent and whose levels can be quantified in vivo as a surrogate measure for calcium. In fact, studies have shown that manganese-enhanced MRI (MEMRI), using, for example, the chloride salt, can be successfully used in cardiovascular MRI in humans. Nevertheless, once within cells, manganese competes with calcium and reduces myocyte contractility, potentially limiting its role. While a chelated form, Mn-DPDP (manganese dipyridoxyl diphosphate), is approved for clinical imaging, chelation of manganese, while enhancing its safety profile, reduces its suitability for cardiac imaging. Moreover, studies have suggested that combining manganese with calcium gluconate shows great promise for cardiac imaging.

For this study, a team from the Centre for Advanced Biomedical Imaging, University College London, evaluated the real-time effects of manganese, with or without the addition of calcium gluconate, on action potentials in mouse and human cardiomyocytes in vitro and on cardiac contractility in mice.

Findings

Initially, the team evaluated whether manganese affected in vitro beating rates and action potentials in cardiomyocytes, as well as cardiac contractility in mice. Addition of manganese chloride reduced cardiomyocyte beating rates; however, when the manganese chloride was supplemented with calcium gluconate, beating was restored. These data indicated that the cardio-depressant effect of manganese chloride can be negated if it is co-administered with a calcium supplement.

After inducing a myocardial infarction, the researchers investigated manganese uptake. Quantitative T1-mapping manganese-enhanced MRI revealed increased uptake of manganese in viable myocytes away from the area of the infarct.

Commenting on their findings, the authors reported that their data suggest manganese-enhanced MRI offers an important new method for evaluating myocardial viability in as little as one hour after an infarction. Although high doses of manganese reduced myocardial contractility, the use of a calcium gluconate supplement reduced these effects, indicating that the two metal ions could be co-administered as an MRI contrast agent.

The authors concluded that the use of a manganese-based contrast agent could potentially be used early after a myocardial infarction to evaluate the extent of remaining myocardial viability.

Citation
Jasmin NH et al. Myocardial viability imaging using manganese-enhanced MRI in the first hours after myocardial infarction. Adv Sci 2021. https://doi.org/10.1002/advs.202003987

Real-time microscopic imaging allows for examination of flow characteristics during bioprinting

Bioink flow dynamics can potentially damage cells during the process of bioprinting. Using a microscopy technique, researchers have directly observed ink flow to help identify conditions that could lead to cell damage.

Bioprinting enables the automated organisation of living materials such as cells, layer by layer, to create three-dimensional (3D) structures such as organs, and has huge potential to revolutionise regenerative medicine. While there are several different approaches, one particular technique, extrusion-based 3D bioprinting, has been widely adopted by the tissue engineering community due to its great versatility and capacity to create numerous different tissues.

In extrusion-based bioprinting, cells are suspended in a hydrogel and delivered through a thin capillary tube with a diameter of between 50 µm and 1 mm. Consequently, during the extrusion process the cells are subjected to mechanical forces, especially shear stress, which can result in considerable damage to, or even death of, cells. Moreover, these forces are likely to be worse when the capillary diameter is very narrow, and they can ultimately affect the viability and functionality of the resultant tissue. In an attempt to reduce these forces, several hydrogels exhibit what has been described as a ‘shear thinning’ effect, although this is not always successful. One possible approach to understanding the impact of the mechanical forces exerted on cells would be continuous imaging of the extrusion process to help gain a better insight into the flow dynamics and cell movements. Furthermore, the development of continuous imaging could serve as a means of process quality control.
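To see why a narrower capillary is so much more damaging, a standard Poiseuille-flow estimate is instructive. This is a simplification (real bioinks are shear-thinning rather than Newtonian), and the flow rate, viscosity and radii below are hypothetical values chosen for illustration, not figures from the study; the point is that, at a fixed flow rate, wall shear stress scales with the inverse cube of the capillary radius.

```python
# Illustrative only: wall shear stress for Newtonian, fully developed laminar
# (Poiseuille) flow in a cylindrical capillary. Real bioinks are shear-thinning,
# so this is a rough order-of-magnitude estimate, not the paper's analysis.
import math

def wall_shear_stress(flow_rate_m3_s: float, radius_m: float, viscosity_pa_s: float) -> float:
    """tau_w = 4 * mu * Q / (pi * R^3), in pascals."""
    return 4 * viscosity_pa_s * flow_rate_m3_s / (math.pi * radius_m ** 3)

# Hypothetical values: a 1 Pa.s hydrogel extruded at 10 microlitres per minute.
q = 10e-9 / 60  # flow rate in m^3/s
for radius_um in (500, 100, 25):  # 1 mm, 200 um and 50 um diameter capillaries
    tau = wall_shear_stress(q, radius_um * 1e-6, viscosity_pa_s=1.0)
    print(f"radius {radius_um} um -> wall shear stress {tau:.1f} Pa")
```

Halving the radius therefore increases the wall shear stress roughly eightfold, which is why very fine nozzles are disproportionately harmful to cells.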

In an effort to better understand the flow dynamics through the capillary tube, an international team led by the Department of Bioengineering, Imperial College London, explored the use of light sheet fluorescence microscopy (LSFM) to quantify the real-time flow of cell-laden hydrogels through a capillary. The aim of their study was to provide quantitative information on the cell–hydrogel interplay in a capillary tube that served to mimic the portion of the extrusion bioprinting process in which cells are likely to be damaged.

Findings

Using LSFM, the researchers were able to quantify, in real time, the flow of cell-laden hydrogels through the capillary and the velocity at which cells travelled. This revealed how some cells appeared to roll along the surface of the capillary while others, not in contact with the capillary wall, seemed to spin, with those in the central portion spinning faster. The LSFM provided the team with information on viscosity within the capillary and enabled a better understanding of differences in cell viability. In addition, it was possible to image the hydrogel flow through the capillary, which indicated that the hydrogel separated into a solid and a fluid phase, with cells embedded in both phases. This created irregularly shaped solid phases suspended within the fluid phase, which, the authors felt, accounted for the variations in the calculated velocity measurements.

Cell survival was found to be dependent upon extrusion flow rate, and cell viability was 2–2.5 times lower at higher flow rates.

In discussing their results, the authors indicated how the study had demonstrated the value of LSFM as a powerful imaging modality for the examination of flow dynamics through a capillary tube. Although this is the first study to explore real-time imaging of capillary flow, the authors speculated that, in the future, LSFM could be used to help design the shape of capillaries to modulate cell viability during extrusion bioprinting.

Citation
Poologasundarampillai G et al. Real-time imaging and analysis of cell-hydrogel interplay within an extrusion-bioprinting capillary. Bioprinting 2021. https://doi.org/10.1016/j.bprint.2021.e00144 

Imaging modalities play a vital role in the assessment of patients with COVID-19

From the start of the COVID-19 pandemic, imaging modalities have proved to be of enormous importance in the diagnosis and management of patients. 

The importance of imaging modalities in helping to identify lung abnormalities in those infected with COVID-19 became apparent very early in the pandemic. Since that time, a good deal of information has emerged on the radiological manifestations of the virus. However, a multinational consensus statement from the Fleischner Society in April 2020 proposed that imaging was not indicated as a screening tool in asymptomatic patients, that daily chest radiography was not required in stable, intubated patients, and that CT scanning was warranted for patients with functional impairment, hypoxaemia or both after recovery from the virus.

In a review of the current state of knowledge of imaging use in COVID-19, a team from the Department of Radiology, University of Wisconsin, US, produced a comprehensive overview of the clinical situations in which different imaging modalities have been used to help diagnose, and guide the management of, patients infected with COVID-19.

One of the earliest reported uses of imaging, and a major focus of the review, was chest radiography, although, as the authors noted, chest CT can be normal in up to 56% of patients within two days of symptom onset, indicating that a normal CT finding does not reliably exclude the disease. Other early findings can include unilateral or bilateral lung opacities, often with a basilar and strikingly peripheral distribution. Some of the earliest work also revealed the presence of bilateral lower zone consolidation that peaked at 10 to 12 days after symptom onset. A further valuable role for CT imaging is the ability to identify patients with more severe disease: in one study of 189 patients, a cut-off of 23% lung involvement showed 96% sensitivity and specificity for distinguishing critically ill patients. In addition, a meta-analysis of studies determined that the pooled sensitivity of CT for the detection of COVID-19 was 94%, but the specificity was only 34%. In a Cochrane review of thoracic imaging tests to diagnose COVID-19, it was found that, when testing patients with known infection, chest CT was correct in 86% of cases, chest X-rays in 82% of cases, and lung ultrasound in 100% of patients. The use of artificial intelligence systems has also proved to be of value in identifying COVID-19 pneumonia, with one large study in 3777 patients finding a sensitivity of 93% and a specificity of 86%.

Imaging techniques such as CT pulmonary angiography have also been used successfully to identify pulmonary embolism.

In addition, the use of MRI in patients recovering from COVID-19 has helped to identify abnormalities such as lowered ejection fraction, higher left ventricular volumes and pericardial enhancement. Moreover, abdominal CT imaging has revealed colorectal and small-bowel wall thickening, a fluid-filled colon and infarction of the kidney, spleen and liver. Neuroimaging has revealed various abnormalities in patients with COVID-19, including ischaemic and haemorrhagic stroke, encephalomyelitis and widespread white matter hyperintensities. Whether these changes are a direct result of the virus or a secondary consequence of the infection is unclear, although a retrospective study of brain MRI findings revealed a range of abnormalities.

The authors concluded that while the role of imaging in diagnosis and management had greatly increased during the pandemic, they did ponder the question of whether imaging could reduce hospital admissions and wondered how the role of imaging might change with the onset of respiratory illnesses during the winter months.

Citation
Kanne JP et al. COVID-19 imaging: what we know now and what remains unknown. Radiology 2021. https://doi.org/10.1148/radiol.2021204522

CTPA detection of pulmonary embolism guided by D-dimer levels in patients with COVID-19

CT pulmonary angiography (CTPA) is the preferred imaging modality to detect pulmonary embolism (PE) but whether D-dimer levels can guide selection for CTPA is uncertain.

Emerging evidence has indicated a high incidence of thromboembolic events, including pulmonary embolism (PE), in patients with COVID-19. Moreover, computed tomography pulmonary angiography (CTPA) is the current standard-of-care imaging modality for detecting a PE. Nevertheless, the true incidence of PE in patients with COVID-19 remains to be determined and it is therefore unclear which patients should be referred for CTPA for diagnostic confirmation. While it has been suggested that the threshold for CTPA should be lowered and based on grossly elevated D-dimer levels, the overall value of this approach requires further clarification.

Given this uncertainty, researchers from the Department of Radiology, Zuyderland Medical Centre, Geleen, The Netherlands, undertook a meta-analysis of the frequency of PE in patients with COVID-19 to determine whether D-dimer levels serve as a useful guide to the selection of patients for CTPA. Using both MEDLINE and Embase, the researchers sought to identify studies that reported the frequency of PE on CTPA scans in at least ten patients. Furthermore, the team manually searched for relevant articles in the journal Radiology: Cardiothoracic Imaging, which is not indexed in either MEDLINE or Embase. For the identified studies, the researchers extracted data on the country of origin, the location of testing (i.e., emergency department, general ward or intensive care unit), patient inclusion criteria, indications for CTPA (and who interpreted the results of the imaging), together with the location of the PE (i.e., main, lobar, segmental and subsegmental pulmonary arteries). In addition, mean D-dimer levels were extracted, and the study authors were contacted for these data if they were not reported in the published article.

Findings

A total of 71 studies were included in the meta-analysis. The overall frequency of PE across all studies among those with COVID-19 was 32.1% (95% CI 28.5–35.9%). PE was more common in peripheral than in main pulmonary arteries, with pooled frequencies of 65.3% compared with 32.9%, which suggested that local thrombosis was a major factor. Furthermore, the pooled frequency of PE in patients with COVID-19 was lowest in the emergency department, followed by general wards and intensive care units: 17.9%, 23.9% and 48.6%, respectively. Patient selection for CTPA was reported in 55 (77.5%) of the studies, and CTPA interpreters were blinded to clinical information in 15 (21.1%) of studies, although in the majority (76.1%) of studies it was unclear whether interpreters were blinded to the clinical data. In two studies where CTPA was used routinely (without a clinical suspicion of PE), the frequency of PE was 2.1% and 5.7%. However, in two other studies where CTPA was routinely performed within the intensive care unit, again regardless of clinical suspicion, the reported PE frequencies were 47.2% and 60%.

Patients with COVID-19 and PE had significantly higher D-dimer levels than those without a PE, and the cut-off levels for D-dimer used to identify those with a PE varied from 1000 to 4800 mcg/l.

Commenting on their findings, the authors noted that, since the reported incidence of PE was highest within the intensive care setting, the condition is likely to be associated with more severe disease. In addition, most studies recorded a clinically suspected PE as the criterion for CTPA. Furthermore, the D-dimer levels observed were considerably higher than the conventional cut-off value of 500 mcg/l used to screen for venous thromboembolism in the general population. They concluded that a D-dimer level of 1000 mcg/l might serve as an important guide to the selection of patients for CTPA.

Citation
Kwee RM et al. Pulmonary embolism in patients with COVID-19 and value of D-dimer assessment: a meta-analysis. Eur Radiol 2021;1–19.  

Deep learning ultrasound algorithm predicted prognosis of COVID-19 as well as clinicians

A deep learning algorithm for lung ultrasound with the ability to identify patients with COVID-19 at high risk of clinical worsening showed good agreement with the view of clinicians.

Although the diagnostic assessment of patients with COVID-19 is undertaken with a PCR test, diagnostic imaging using computed tomography (CT) has a reported sensitivity of between 61% and 99% and a specificity of between 25% and 33%. However, because CT imaging is not portable, other solutions are required. One such alternative and portable imaging modality is lung ultrasound (LUS). The technique provides real-time imaging, has the benefit of portability and is widely available. Moreover, LUS can be used to identify changes in the physical state of superficial lung tissue and may be of potential value in the assessment of patients with COVID-19. However, LUS is generally restricted to visual inspection and interpretation of imaging artefacts and is thus qualitative and subjective, although quantitative scoring systems have been proposed. In recent years, deep learning (DL) algorithms using automatic scoring and semantic segmentation have been developed to classify each LUS frame.

Whether a deep learning algorithm could be used to evaluate LUS videos and provide a score, as well as semantic segmentation, for each frame that was of prognostic value in patients with COVID-19 was the subject of a study by a team from the Diagnostic and Interventional Ultrasound Unit, Valle del Serchio Hospital, Lucca, Italy. The team are the first to report on the development of a standardised imaging protocol and scoring system utilising a DL algorithm able to evaluate LUS videos and provide, for each frame, a score as well as semantic segmentation. The team then sought to evaluate the prognostic value of this approach by comparing the level of agreement between the output of the DL algorithm and the interpretation of expert clinicians.

All patients were examined using LUS according to a standardised acquisition protocol involving 14 scanning areas. All videos acquired during the scans were independently evaluated by two clinicians, who assigned a score ranging from 0 to 3 to each video. This scoring system has been described previously, such that a score of 0 indicates high reflectivity of the normal aerated lung surface and a score of 3 indicates a pleural line that is highly irregular and cobbled. The acquired videos were also fed into the DL algorithm.

Findings

The team analysed data from 82 patients (43 male) with a mean age of 61.1 years, all of whom had a PCR-confirmed diagnosis of COVID-19. A total of 1488 LUS examinations were performed (some patients were scanned multiple times), generating 314,879 frames. When the level of agreement between the DL system and the clinical experts was compared, the results showed a percentage agreement of 85.96% in the stratification of patients into those at high risk of clinical worsening of COVID-19 and those at low risk. Despite this high level of agreement, there were instances where the DL system misclassified scores; for example, in 14% of cases the DL system misclassified a score of 3 as 2.
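As a simple illustration of the agreement metric (a minimal sketch with hypothetical scores, not the authors' code), percentage agreement is the proportion of videos for which the DL-assigned 0–3 score matches the clinician-assigned score, and a confusion count shows where the misclassifications fall.

```python
# A minimal sketch of how percentage agreement between DL-assigned and
# clinician-assigned LUS video scores (0-3) might be computed, together with
# a confusion count showing where misclassifications occur. Example data are
# hypothetical.
from collections import Counter

def percentage_agreement(dl_scores, clinician_scores):
    """Percentage of videos for which the DL score equals the clinician score."""
    assert len(dl_scores) == len(clinician_scores)
    matches = sum(d == c for d, c in zip(dl_scores, clinician_scores))
    return 100.0 * matches / len(dl_scores)

def confusion(dl_scores, clinician_scores):
    """Counts of (clinician score, DL score) pairs, e.g. (3, 2) = a 3 scored as 2."""
    return Counter(zip(clinician_scores, dl_scores))

# Hypothetical example data for five videos.
clinician = [0, 1, 3, 3, 2]
dl        = [0, 1, 3, 2, 2]
print(percentage_agreement(dl, clinician))   # 80.0
print(confusion(dl, clinician)[(3, 2)])      # 1: one score-3 video misclassified as 2
```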

In a discussion of their findings, the authors stressed that for LUS to be a reliable means of patient evaluation, a standardised protocol is required. They concluded that the results were encouraging and demonstrate the potential value of using DL models for the automated scoring of LUS and stratification of the risk of disease progression in those with COVID-19.

Citation
Mento F et al. Deep learning applied to lung ultrasound videos for scoring COVID-19 patients: a multicenter study. J Acoust Soc Am 2021;149(5):3628–34. https://asa.scitation.org/doi/10.1121/10.0004855

Guideline summary: Vetting (triaging) and cancellation of inappropriate radiology requests

Radiologist review of imaging requests prevents unnecessary exposure to radiation and inappropriate and duplicate examinations, and makes the overall delivery of services both safer and more efficient. Recent guidance from the Royal College of Radiologists (RCR) advises on setting up efficient processes for vetting and communication.

An often under-recognised role of radiologists is that of reviewing imaging requests. Such input can lead to the avoidance of inappropriate requests and unnecessary radiation exposure for patients, as well as the avoidance of duplicate examinations, which increase the workload burden of radiology departments. Moreover, communication with the original referrer as to why a request has been declined provides an opportunity for informative feedback; hence the effectiveness of any vetting procedure relies on the establishment of a robust communication network.

Because radiologists are themselves qualified medical practitioners, they often have a good understanding of the appropriate imaging modalities required for specific conditions and in particular age groups, after a consideration of any prior tests (both radiological and non-radiological). Thus, there is much to be gained through incorporation of radiologists as members of the multi-disciplinary team (MDT), to enable alignment of investigations and preferences for the mode of imaging. A further and relevant benefit of radiologist involvement with MDTs is that it can serve to improve the overall efficiency, cost-effectiveness, safety and delivery of radiology services. 

The role of radiologists is becoming more patient-focused, and this is likely to be expanded in the years to come with the development of rapid access diagnostic centres and one-stop imaging/biopsy/clinical pathways, especially where diagnostics become a first step within many patient pathways. 

Staff involved in vetting requests

The report includes a section discussing the vetting process and who should be involved. While an important aspect of their work, the vetting process undertaken by radiologists is often not fully recognised as a clinical task and, unfortunately, there is no national benchmarking of vetting activity. The report suggests that vetting activity should be benchmarked and that it should support the specific recommendations for radiologists detailed in the Choosing Wisely initiative (https://www.choosingwisely.co.uk/).

The Royal College report suggests that modality-based radiographers, using appropriate protocols and with suitable training, can undertake vetting of requests for computed tomography, magnetic resonance imaging and ultrasound. Such individuals are likely to be much more comfortable cancelling, for example, a duplicate request but, given that radiographers are not medically qualified, they are less likely to refuse a test on clinical grounds. Under such circumstances, the report advises that requests for more specialist or complex imaging are best left to a radiologist or special-interest radiologist, to ensure that the request is appropriate. In addition, where a radiographer has any concerns or is uncertain about whether to decline a test, the request should be forwarded to a radiologist. Thus, good communication within the radiology department is a prerequisite for ensuring that an appropriate protocolling and vetting procedure is introduced.

Technology requirements for vetting

On a practical level, the report recommends that an effective vetting process is carried out using the radiology information system (RIS), which is the most commonly used system in the NHS. While the structure of these systems may differ across the NHS due to the presence of different vendors, the report defines the process that should be available to radiology departments regardless of vendor. Using the RIS, it should be possible to communicate the reason for cancelling a scan; in making such decisions, radiologists should have access to the full local imaging history and should be able to have their vetting workload recognised as an activity.

In an appendix to the document, there is a RIS specification for the vetting and protocolling workflow.

The full document can be found here:

www.rcr.ac.uk/system/files/publication/field_publication_files/bfcr214-vetting-triaging-cancellation-inappropriate-radiology-requests.pdf

Guideline summary: Integrating AI with the radiology reporting workflows

This guidance from the Royal College of Radiologists (RCR) sets out the standards that a department should meet when integrating artificial intelligence into already established systems, producing a safe, seamless system with the patient at the centre.

The fast pace of developments in artificial intelligence (AI) means that the technology will have an important role to play in many clinical specialities, including radiology and will change, hopefully in a positive direction, the way in which patient care is delivered. 

AI platforms and algorithms are designed to work in collaboration with existing technologies and have a wide range of potential uses in radiology. For example, in magnetic resonance imaging (MRI), an AI algorithm can detect multiple sclerosis, strokes, brain bleeds, etc. Within the arena of computed tomography (CT), AI systems are designed to detect skull fractures, brain haemorrhages, infarcts and tumours. AI also has a role in mammography, allowing for the detection of both suspicious lesions and calcification.

Once an image has been captured by the radiographer, the AI will perform a ‘pre-analysis’ of the image and, if an abnormality is detected, the system will query and retrieve a prior similar image from the picture archiving and communication system (PACS) for comparative analytical purposes. A further advantage of using an AI system is ‘computer-assisted triage’, which helps with the prioritisation of reporting worklists once an abnormality has been detected.

Nevertheless, the RCR report emphasises the importance of radiologists acknowledging the limitations of an AI system's report, i.e., its sensitivity and specificity, and what these figures mean in the context of the specific pathology. In other words, the AI system is simply a supportive tool and radiologists should not become overly reliant upon the AI findings or assume that these findings will be 100% accurate all the time.
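As a worked illustration of why the declared figures matter (using hypothetical numbers, not values taken from the guidance), the positive predictive value of an AI flag depends heavily on how common the target pathology is in the scanned population, not just on the algorithm's sensitivity and specificity.

```python
# Hypothetical illustration (not from the RCR guidance): what a declared
# sensitivity and specificity mean in practice depends on the prevalence of
# the pathology in the scanned population.
def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# An algorithm with 90% sensitivity and 90% specificity flagging a finding
# present in 2% of studies: most positive flags are still false positives.
ppv, npv = predictive_values(0.90, 0.90, 0.02)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # PPV ~ 15.5%, NPV ~ 99.8%
```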

The overarching aim of the RCR report is firstly to ensure that any innovations in AI are fully integrated into existing reporting systems and secondly, to define the necessary standards required to enable radiology service providers to facilitate this integration without creating additional burden for staff. 

The report does not make any specific recommendations about which AI system should be purchased, nor does it cover ethical considerations related to its use, although it does discuss AI solutions for workflow and radiology management efficiency. The report is directed more towards defining the parameters within which an AI platform should operate.

Standards

The report begins with a series of standards for the use of AI systems.

  1. AI must be integrated seamlessly with existing radiology information systems (RIS) and PACS without creating an additional burden for radiologists.
  2. The accuracy of the algorithms must be clearly declared to both the radiologist and others involved in patient management.
  3. The AI finding should be communicated to the RIS and PACS through existing and global technical standards.
  4. The department workflow should be sufficiently robust to ensure that the analysis is complete and available on PACS before being viewed and interpreted by a human.

An important element of the report is the necessity to ensure that all instrumentation, e.g., scanners, RIS, PACS and the AI platform, works cooperatively within the radiology department. It is also necessary that the AI platform only begins its analysis once the radiographer has completed the examination and has sent the information to the AI system, i.e., the imaging should be ‘pre-analysed’ before reaching the PACS for display.

General standards for data output

Any AI platform adopted should produce a standard output, which must include the elements listed below (a minimal illustrative sketch of such an output object follows the list):

  • Graphical representation of the region of interest (of the detected abnormalities) or mark-up/pointers using global technical standards (DICOM) so that images can be viewed in the PACS viewers.
  • AI-detected abnormalities should be output as text, e.g., fracture, infarct.
  • A notification that the image analysis has been completed.
  • Some AI alerts may be defined as critical within the system and should be pre-specified by the radiologist.
  • A declaration or disclaimer should be sent out, including a list of the abnormalities that were evaluated by the AI system; this might, for example, include a CT scan that detected a brain haemorrhage. It is also necessary to include the sensitivity and specificity (or true/false positives and negatives) of the applied algorithm for each of the abnormalities evaluated.
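As a minimal, hypothetical sketch of the kind of structured output the standards above describe, the object below groups the required elements together. The field names are illustrative assumptions only; in practice this information would be exchanged through DICOM/HL7 objects agreed with the PACS and RIS vendors, not an ad hoc class.

```python
# Hypothetical sketch of an AI pre-analysis output object covering the elements
# listed above. Field names are illustrative, not part of any real standard.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AIFinding:
    label: str                                 # abnormality as text, e.g. "fracture", "infarct"
    region_of_interest: Optional[str] = None   # reference to a DICOM overlay/markup object
    critical: bool = False                     # pre-specified by the radiologist as a critical alert

@dataclass
class AIAnalysisResult:
    study_uid: str                  # imaging study the analysis refers to
    analysis_complete: bool         # notification that pre-analysis has finished
    findings: List[AIFinding] = field(default_factory=list)
    abnormalities_evaluated: List[str] = field(default_factory=list)  # declaration/disclaimer
    sensitivity: dict = field(default_factory=dict)   # per abnormality, e.g. {"haemorrhage": 0.93}
    specificity: dict = field(default_factory=dict)   # per abnormality, e.g. {"haemorrhage": 0.86}
```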

With the RIS, it will be necessary to incorporate additional data fields which capture AI abnormalities and any alerts.

The report concludes on a positive note saying that: “AI image pre-analysis is likely to have a very positive impact on radiologists’ future working lives if properly integrated into the reporting workflow”.

A copy of the full guidance can be viewed below 

www.rcr.ac.uk/publication/integrating-artificial-intelligence-radiology-reporting-workflows-ris-and-pacs
