
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

Co-administration of influenza and COVID-19 vaccines shown to be effective

17th June 2021

Whether co-administration of a COVID-19 and influenza vaccine leads to immune interference is unclear but possibly relevant for future vaccination programmes.

The international rollout of the COVID-19 vaccination programme is starting to break the link between infection and severe illness and hospitalisation. Whether or not an annual booster COVID-19 vaccination will be required in the future is yet to be determined. Nevertheless, in some countries, it is possible that the COVID-19 vaccination schedule could overlap with the influenza season, with the potential for an overlap in vaccine administration. As the Phase III COVID-19 vaccination trials excluded those who had recently received, or planned to receive, another vaccine, there is a lack of data on the impact of co-vaccination. This led a team from the Novavax Institute, Gaithersburg, USA, and St George’s University Hospital, London, UK, to consider the effect of simultaneous administration of the first dose of NVX-CoV2373 (the Novavax COVID-19 vaccine) and an influenza vaccine, in a subgroup of patients included in the Phase III efficacy trial of NVX-CoV2373. Subgroup patients were required to be in good health and to have not received an influenza vaccination, or any other live vaccine, within the previous 4 weeks. These individuals received a concomitant dose of influenza vaccine with either their first dose of NVX-CoV2373 or placebo. Although the main study was observer-blind, the influenza vaccine was administered in an open-label manner. Reactogenicity was evaluated using an electronic diary for 7 days post-vaccination and the team assessed the antibody titres to both influenza and COVID-19 after 21 days.

Although the main trial recruited over 15,000 participants, only 431 were included in the co-vaccination substudy. A total of 217 participants, with a mean age of 39 years (43.3% female, 75.1% of White ethnicity), received the seasonal influenza vaccine together with NVX-CoV2373 and 214 received the influenza vaccine together with placebo. Reactogenicity was more common in the co-vaccinated group compared with NVX-CoV2373 alone: injection-site tenderness (70.1% vs 57.6%), injection-site pain (39.7% vs 29.3%), fatigue (27.7% vs 19.4%) and muscle pain (28.3% vs 21.4%). Although the influenza vaccine response was satisfactory, COVID-19 vaccine efficacy was 87.5% in the co-vaccinated subgroup compared with 89.8% in the main study group. The rates of adverse effects were low and balanced between those given NVX-CoV2373, influenza vaccine or both.

Commenting on these findings, the authors noted how this was the first direct evidence that co-vaccination still resulted in acceptable vaccine efficacy. While there was an increase in the reported incidence of local reactogenicity in the co-vaccinated group, symptoms were generally mild in severity. They concluded that the study had generated no early safety concerns over co-vaccination.

Toback S et al. Safety, immunogenicity and efficacy of a COVID-19 vaccine (NVX-CoV2373) co-administered with seasonal influenza vaccines. medRxiv 2021

Saliva COVID-19 testing more accurate than NPS with an additional process step

16th June 2021

Saliva testing for COVID-19 is not routinely used but can be more accurate than the preferred method if samples are processed correctly.

The use of an optimal sample type and an easy collection and processing method is a primary requirement for the detection of COVID-19. Although various testing media such as bronchoalveolar lavage, sputum, saliva, faeces, blood and nasopharyngeal swabs (NPS) have been used to detect the presence of COVID-19, NPS have become the preferred method. Although the collection of saliva samples is less intrusive than NPS, the presence of mucus or even blood within saliva samples renders them more difficult to process in a laboratory. A further problem with using saliva samples is the apparent inconsistency in the sensitivity of detection, with some studies showing saliva to be more sensitive than NPS whereas, in other studies, saliva samples were less sensitive. In trying to improve upon the current sample preparation method, a team from the Department of Pathology and Medicine, Medical College, Georgia, US, developed an additional processing step using a homogeniser. The study was performed in two stages. In the first phase, using matched NPS and saliva samples (i.e., paired samples) from community and healthcare settings, the team analysed the pairs using conventional PCR methods. In the second phase, again using matched sample pairs, the saliva samples were processed with an additional homogeniser step prior to RNA extraction for PCR testing. In addition, 85 saliva samples were independently tested using both methods.

In the first phase, from 240 samples analysed, 28.3% (68/240) were found to be positive from both saliva and NPS. However, the detection rate for COVID-19 was significantly higher in NPS samples compared with saliva (89.7% vs 50%, p < 0.001). In the second phase, with the additional homogeniser step, the detection rate for saliva samples was significantly higher than NPS samples (97.8% vs 78.9%, p < 0.001). When testing the 85 saliva samples with both methods, the detection rate was 100% using the homogeniser protocol but only 36.7% using the original method.

In a discussion of their findings, the authors noted how the homogeniser step, by reducing the viscosity of saliva samples, made both sample handling and extraction much easier. They concluded that much of the variation in sensitivity found in the literature for saliva testing was likely due to the method of sample preparation. Despite the fact that the additional step would increase the laboratory workload, its introduction proved to be a more sensitive method of detection for COVID-19.

Sahajpal N et al. Clinical Validation of a Sensitive Test for Saliva collected in Healthcare and Community Settings with Pooling Utility for Severe Acute Respiratory Syndrome Coronavirus 2 Mass Surveillance. J Mol Diagn 2021

Study shows physicians’ reluctance to use machine-learning for prostate cancer treatment planning

15th June 2021

A study shows that a machine-learning generated treatment plan for patients with prostate cancer, while accurate, was less likely to be used by physicians in practice.

Advances in machine-learning (ML) algorithms in medicine have demonstrated that such systems can be as accurate as humans. However, few systems have been used in routine clinical practice; ML systems are often tested in parallel with physicians, with the actions suggested by the system not acted upon in practice. Fully utilising ML systems in routine clinical care requires a shift from their current adjunctive support role to being considered as the primary option. In trying to assess the real-world value of an ML algorithm, a team from the Princess Margaret Cancer Centre, Ontario, Canada, decided to explore the value of ML-generated curative-intent radiation therapy (RT) treatment planning for patients with prostate cancer. The team’s overall aim was to evaluate the integration of the ML system as a standard of care, and they undertook a two-stage study comprising an initial validation phase followed by clinical deployment. For the validation phase, the team included data from 50 patients to assess the ML performance retrospectively. The researchers produced ML-generated RT plans and asked reviewers to compare these plans, in a blinded fashion, with the plans actually used for the patients. In the subsequent deployment phase, again with 50 patients, physician-generated and ML-generated plans were prospectively compared, with the treating physician blinded to the source of each plan.

The ML system proved to be much faster at generating plans than the equivalent human-driven process (median 47 vs 118 hours, p < 0.01). Overall, ML-generated plans were deemed clinically acceptable for treatment in 89% of cases across both phases (92% during the validation phase and 86% during the deployment phase). In only 10 cases was the ML-generated method deemed not applicable, because the plans required consultation with the treating physician, thus unblinding the review process. In addition, 72% of ML-generated RT plans were selected over human-generated RT plans in head-to-head comparison. However, between the validation (simulation) phase and the deployment phase, the proportion of ML-generated plans used by the treating physician actually fell from 83% to 61% (p = 0.02).

The authors were unable to fully account for these differences, suggesting that retrospective or simulated studies cannot fully recapitulate the factors influencing clinical decision-making when patient care is at stake. They concluded that further prospective deployment studies are required to validate the impact of ML in real-world clinical settings and to fully quantify the value of such methods.

McIntosh C et al. Clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer. Nat Med 2021

No subsequent COVID-19 infections detected after mass-screening at indoor event

The mass-screening of individuals attending an indoor event without social distancing, could pave the way for the reactivation of cultural events.

Mass gatherings are potentially super-spreading events for viruses and have been cancelled in most countries to reduce transmission of COVID-19. A major problem during a mass gathering event is the difficulty of identifying those who are infectious. Moreover, given that PCR testing is laboratory-based and therefore has a long turnaround time, mass testing of attendees at any such event becomes impossible. However, the use of point-of-care antigen-detecting rapid diagnostic testing (Ag-RDT) could provide a workable solution. Since this tool had not been tested under controlled conditions, a team from Barcelona, Spain, performed a randomised trial to test the effectiveness of a prevention strategy during a live indoor music event. The assumption of the study was that point-of-care mass-screening for infected individuals, combined with regular preventative measures, would reduce the risk of transmission of COVID-19. For the present study, adults (aged 18 years and over) were recruited via social media and those willing to participate were tested with nasopharyngeal swab samples, using the Ag-RDT, approximately 12 hours before the music event. Those with a negative test were then randomised to either attendance or a return to normal life (i.e., the control group). In addition, a transcription-mediated amplification (TMA) test was also performed and reported to participants’ smartphones after 24–48 hours. During the event, participants had their temperature checked at entry to the venue and were given an N95 mask which had to be worn throughout the event but could be removed in the bar area for drinking. The primary outcome was the difference in incidence of PCR-confirmed COVID-19 infections between the control and intervention (i.e., event attendee) groups, 8 days after the event.

A total of 960 individuals who expressed an interest in attending the event tested negative and were included in the final analysis: 495 were allocated to the intervention group and 465 to the control group. For the group as a whole, the mean age was 33.6 years (82% male). Despite a negative Ag-RDT test at baseline, 28 individuals had a positive TMA result, although subsequent PCR testing confirmed only 2 positive results (both in the control group). Interestingly, none of the music event attendees had a positive PCR test on day 8; only 2 in the control group did, one of whom had been diagnosed with COVID-19 four days after randomisation.

In their discussion, the authors noted that even though participants at the event were allowed to sing and dance (albeit wearing a mask), even without social distancing, none developed COVID-19. Their study was the first to provide evidence on the safety of indoor events, provided that some mitigation strategies (i.e., mask wearing and mass-screening) were deployed. They concluded that their findings should pave the way for reactivation of cultural activities.

Revollo B et al. Same-day SARS-CoV-2 antigen test screening in an indoor mass-gathering live music event: a randomised controlled trial. Lancet 2021

Greater fruit intake associated with lower risk of diabetes

14th June 2021

Eating more fruit reduces the risk of developing diabetes but little is known about how fruits confer this advantage.

Type 2 diabetes affects a huge number of individuals with a 2017 review estimating that globally, up to 451 million adults had the condition and that this figure was set to rise to 693 million by 2045. There is evidence that consumption of whole fruits but not juice, reduces the risk of developing diabetes. Nevertheless, what is less clear is the relationship between fruit intake and measures of insulin resistance and beta-cell dysfunction. Using data from the AusDiab cohort, researchers from the School of Medical and Health Sciences, Perth, Australia, set out to examine the relationship between intake of different fruits and measures of glucose tolerance, insulin sensitivity and the incidence of diabetes after several years follow-up. The AusDiab cohort is a national population-based survey of diabetes prevalence and associated risk factors in Australian adults. Participants were recruited in 2000 and followed-up in 2004/05 and 2011/12. The exposure of interest was total fruit intake, the type of fruit eaten and the amount of fruit juice. The primary study outcomes included measures of fasting plasma glucose (FPG), 2-hour post-load glucose (PLG) and HOMA2, which is a measure of insulin sensitivity (or insulin resistance). The team collected baseline demographic data as well as levels of physical activity and smoking status.

The cohort included 7674 individuals with a mean age of 54 years (45% male). The median level of fruit intake was 162g/day, with the most common fruits being apples (23% of total), bananas (20%) and oranges/other citrus fruits (18%). All other fruits contributed less than 8% of the total. Total fruit intake was significantly and inversely associated with HOMA2 values but not with either PLG or FPG. When comparing those with the highest versus the lowest total intake of whole fruits, and after adjustment for demographic factors, the odds of having diabetes after 5 years were 36% lower (odds ratio, OR = 0.64, 95% CI 0.44–0.92) among the 4674 individuals for whom data were available at this time point. However, there were no significant associations for any particular fruits or with consumption of fruit juice.

In their discussion, the authors suggested that the potential benefits of greater consumption of fruit were related to the presence of soluble fibres that were metabolised by the gut microbiome and the release of short-chain fatty acids which are known to modulate glucose metabolism. It was also possible that the reduced risk could be attributed to the presence of flavonoids within fruits which have anti-diabetic properties. The authors concluded that eating whole fruits appeared to preserve insulin sensitivity and mitigate type 2 diabetes.

Bondonno NP et al. Associations between fruit intake and risk of diabetes in the AusDiab cohort. J Clin Endocrinol Metab 2021

CGM associated with better control in type 2 diabetics receiving basal insulin

Whether continuous glucose monitoring improves outcomes for type 2 diabetics compared with self-testing is uncertain.

The monitoring of blood glucose is paramount to the safe and effective management of all diabetic patients. Typically, insulin regimes can be basal only (i.e., long-acting agents used once or twice daily) or a combination of basal and prandial, i.e., rapid-acting agents used to control meal-induced glucose spikes. Moreover, assessment of blood glucose levels is achieved through the use of either testing strips or real-time continuous glucose monitoring (CGM). However, in practice, self-testing has been shown to be under-utilised and, while CGM has been shown to improve diabetic control in type 2 diabetics using a combined insulin regime, little is known about its effectiveness in patients with less intensive insulin regimes. Therefore, a team of researchers from the International Diabetes Centre, Minneapolis, US, performed a randomised controlled trial to determine the effectiveness of CGM in primary care adults with type 2 diabetes using only basal insulin, compared with the use of traditional blood glucose monitoring (BGM). Included patients had a baseline HbA1c level of 7.8% to 11.5%, self-reported BGM of at least 3 times per week and possession of a smartphone compatible with the CGM device for uploading data. The primary outcome measure was the HbA1c level after 8 months and key secondary outcomes were CGM-measured time in the target glucose range (70–180mg/dl) and the time with glucose levels above 250mg/dl.

A total of 175 participants with a mean age of 57 years (50% women) and a mean HbA1c level of 9.1% were randomised in a 2:1 fashion to CGM or BGM. After 8 months, the mean HbA1c had fallen to 8.0% in the CGM group and to 8.4% in the BGM group (p = 0.02). In the CGM group, the mean percentage of time in the target glucose range was 59% compared with 43% in the BGM group (p < 0.001). Similarly, there was a significantly lower proportion of time with glucose levels above 250mg/dl (11% vs 27%, CGM vs BGM, p < 0.01).

In discussing their findings, the authors noted that the greater improvements seen in HbA1c in the CGM group were due to an increased period of time for which glucose levels remained within the target range. Nevertheless, a limitation recognised by the authors was the use of diabetic specialists, which is not standard practice in primary care, and this may have limited the generalisability of their findings. Despite this, they concluded that the use of CGM resulted in superior diabetic control compared with self-monitoring.

Martens T et al. Effect of Continuous Glucose Monitoring on Glycemic Control in Patients with Type 2 Diabetes Treated with Basal Insulin: A Randomized Clinical Trial. JAMA 2021

MicroRNAs can distinguish between myocarditis and myocardial infarction

11th June 2021

Myocarditis can mimic myocardial infarction but a unique biomarker provides clinicians with an opportunity to distinguish the two conditions.

Acute myocarditis (aMC) has many different causes but the prevalence is unclear because the condition has similar clinical symptoms to an acute myocardial infarction (aMI). Although the diagnosis of myocarditis can be confirmed with cardiac magnetic resonance imaging, this technique is not always available. However, one approach to resolve the diagnosis involves the use of microRNAs (miRNAs), which are small, non-coding RNAs that play an important role in gene expression. Several miRNAs have been identified in the infarcted heart and this led a team from the Vascular Pathophysiology Area, Madrid, to try and identify a unique miRNA which could be used to distinguish between myocarditis and myocardial infarction. The team focused on circulating T cells, in particular T helper 17 (Th17) cells, which were confirmed as being a characteristic of myocardial injury in the acute phase of myocarditis. They performed a miRNA microarray analysis and quantitative polymerase chain reaction (qPCR) assays in Th17 cells after experimentally inducing myocarditis and myocardial infarction in mice to identify unique biomarkers.

The researchers identified a miRNA, mmu-miR-721, produced by Th17 cells in mice only in response to either autoimmune or viral myocarditis, and which was absent in those with aMI. Using four patient cohorts with myocarditis, they subsequently identified a human homologue of mmu-miR-721, termed hsa-miR-Chr8:96. The researchers found that plasma levels of hsa-miR-Chr8:96 were considerably higher among myocarditis patients compared with both those with a myocardial infarction and healthy controls. The area under the receiver-operating characteristic curve for hsa-miR-Chr8:96 was 0.927 (i.e., 92.7%) for distinguishing between aMC and aMI, and this diagnostic value was retained even after adjusting for age, ejection fraction and serum troponin levels.

Although the authors accepted that more work is needed to validate this biomarker in other cardiac disorders such as dilated cardiomyopathy, their preliminary findings suggest that raised plasma levels of hsa-miR-Chr8:96 are unique to those with myocarditis and have sufficient discriminatory power to distinguish it from myocardial infarction.

Blanco-Dominguez R et al. A Novel Circulating MicroRNA for the Detection of Acute Myocarditis. N Engl J Med 2021;384:2014-27

Cosentyx receives FDA approval for use in children and adolescents

Manufacturer Novartis has been granted approval for Cosentyx (secukinumab) use in children from 6 years of age with moderate-to-severe psoriasis.

Psoriasis is a chronic, inflammatory condition that affects 1% of children and adolescents in the US. Moreover, due to the visible nature of the condition, psoriasis can have a negative impact on children’s quality of life.

The interleukin-17A (IL-17A) inhibitor, secukinumab (brand name Cosentyx), can now be used for the treatment of moderate-to-severe psoriasis in children from the age of six who are candidates for systemic therapy or phototherapy because of the severity of their psoriasis. The drug is supported by more than 14 years of clinical experience, including 5-year long-term clinical data. The approved dosing is 75 or 150mg, depending on the child’s weight (i.e., 75mg for those weighing less than 50kg and 150mg for those weighing 50kg or more), and the drug is administered by subcutaneous injection every four weeks after an initial loading regime. A further advantage is that, after suitable counselling, the dose can be given by an adult carer, hence avoiding the need to visit a healthcare professional.

The approval of Cosentyx was based on the results of two Phase III trials that were undertaken in children aged between 6 and 18 years. The first trial was a 52-week, randomised, double-blind, placebo-controlled study in 162 children with severe plaque psoriasis. The study had a co-primary endpoint: the proportion achieving a psoriasis area severity index (PASI) 75 score (i.e., a 75% improvement in disease severity) and an investigator’s global assessment of either “clear” (no psoriasis) or “almost clear” at week 12.

Among children <50kg, after 12 weeks, 55% vs 10% (Cosentyx vs placebo) had achieved a PASI 75. Similarly in those weighing >50kg, the corresponding PASI 75 values were 86% vs 19% (Cosentyx vs placebo). For children weighing <50kg, the proportion achieving a score of clear or almost clear was 32% vs 5% (Cosentyx vs placebo). Similarly, among children weighing > 50kg, the corresponding values were 81% vs 5%.

The second trial was designed to assess safety although the press release contains no data from this study and at present, neither study has been published.

Discussing the new approval, Randy Beranek, President and CEO of the National Psoriasis Foundation, said “Having expanded treatment options for this patient population is a step in the right direction to help reduce the burden of plaque psoriasis“.

FDA grants aducanumab approval for Alzheimer’s disease

10th June 2021

The US Food and Drug Administration (FDA) has granted accelerated approval for the monoclonal antibody, aducanumab, a first-in-class drug, for the treatment of Alzheimer’s disease.

Alzheimer’s disease is an irreversible, progressive brain disorder that slowly destroys memory and thinking skills and ultimately, the ability to carry out simple tasks. The precise cause of the disease is still not fully clear but a defining feature in the brain of sufferers is an accumulation of amyloid beta plaques and neurofibrillary, or tau, tangles which result in loss of neurons and their connections. Aducanumab (Aduhelm) works by targeting the aggregated soluble and insoluble amyloid beta plaques.

The efficacy of Aduhelm has been evaluated in three separate studies with a total of 3078 patients, which are described in the manufacturer’s prescribing information leaflet. The dosage is 10mg/kg, administered over one hour every 4 weeks, and the drug is available in two strengths, 170mg and 300mg.

In clinical studies, the effect of Aduhelm on amyloid plaques was assessed in the trials using positron emission tomography (PET) and resulted in significant reductions in plaques after 26 and 78 weeks of treatment. Writing on the FDA site, Dr Patrizia Cavazzoni, Director, FDA Center for Drug Evaluation and Research noted that “the data included in the applicant’s submission were highly complex and left residual uncertainties regarding clinical benefit.” Nonetheless, she also added that “the Agency concluded that the benefits of Aduhelm for patients with Alzheimer’s disease outweighed the risks of the therapy.”

Additionally, the FDA is requiring the manufacturer, Biogen, to conduct a post-approval clinical trial to verify the drug’s clinical benefit. If the drug does not work as intended, then the FDA can take steps to remove it from the market.

Monitoring adalimumab levels valuable for assessing remission in children with Crohn’s disease

9th June 2021

Adalimumab is effective for children with Crohn’s disease but little is known about the impact of therapeutic monitoring on clinical outcomes.

The clinical symptoms of Crohn’s disease (CD) are similar in adults and children, although there is evidence that cases of paediatric CD are on the rise, with one study estimating that the highest incidence, at 23 per 100,000 person-years, occurred in Europe. Endoscopic evidence of mucosal healing is a valuable therapeutic goal that decreases the risk of disease relapse, although little is known about the association between mucosal healing and therapeutic levels of biological treatments such as adalimumab. This prompted a team from the Department of Paediatrics, Samsung Medical Centre, Korea, to examine the relationship between therapeutic drug monitoring of adalimumab and mucosal healing and clinical remission in paediatric patients with CD. The team prospectively recruited paediatric patients with CD receiving adalimumab maintenance therapy who underwent routine endoscopic evaluation of mucosal healing and therapeutic drug monitoring. Monitoring assessments were made at 4 months and then at years 1, 2 and 3.

In total, 31 children with a mean age of 14.8 years (74% male) were included in the analysis. After 1 year of treatment, 26 (83.9%) had achieved clinical remission and 17 (54.8%) had complete mucosal healing. The mean adalimumab trough levels were higher in patients who had achieved remission compared with those with active disease (7.6 mcg/ml vs 5.1 mcg/ml, remission vs active disease). Similarly, trough levels of adalimumab were significantly higher in those who achieved mucosal healing after 1 year (14.2 mcg/ml vs 7.8 mcg/ml, mucosal healing vs non-healed, p = 0.03). Although only 23 children were evaluated after 3 years, adalimumab trough levels remained above 10 mcg/ml and a similar proportion of children maintained mucosal healing (64.3%) and clinical remission (92.9%). Using receiver operating characteristic curves, the authors calculated that the optimal cut-off adalimumab trough level for achieving mucosal healing was 8.18 mcg/ml.

In discussing their findings, the authors commented that the results demonstrated that mucosal healing rates increased when adalimumab was used over the longer term and that the drug maintained its efficacy. They concluded that there was merit in using therapeutic drug monitoring to guide proactive optimisation of drug levels to achieve the goal of mucosal healing.

Kim MJ et al. Therapeutic Drug Monitoring of Adalimumab During Long-term Follow-up in Paediatric Patients with Crohn Disease. JPGN 2021;72:870-6.