
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

Non-invasive tests show similar prognostic value to histology for adverse outcomes in NAFLD

9th June 2023

The use of non-invasive tests provides similar prognostic accuracy compared to histology for adverse outcomes in patients with non-alcoholic fatty liver disease (NAFLD), according to a new study by a UK and European research group.

Published in The Lancet Gastroenterology & Hepatology, the study used an individual participant data meta-analysis approach to compare the prognostic performance of liver histology for adverse clinical outcomes in adult patients with NAFLD with that of three non-invasive methods.

The three comparative tests were liver stiffness measurement by vibration-controlled transient elastography (LSM-VCTE), the fibrosis-4 index (FIB-4) and the NAFLD fibrosis score (NFS). The prognostic performance of the index tests was compared using the time-dependent area under the curve (tAUC). The team set the primary outcome as a composite endpoint of all-cause mortality, hepatocellular carcinoma, liver transplantation or cirrhosis complications.
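The FIB-4 index referenced here is derived from routine blood results using a widely published formula. The sketch below is illustrative only; the function name and example values are ours, not taken from the study:

```python
import math

def fib4(age_years: float, ast_u_l: float, alt_u_l: float, platelets_10e9_l: float) -> float:
    """FIB-4 = (age [years] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))."""
    return (age_years * ast_u_l) / (platelets_10e9_l * math.sqrt(alt_u_l))

# Hypothetical example: a 54-year-old (the cohort's mean age) with
# AST 40 U/L, ALT 50 U/L and a platelet count of 220 x 10^9/L
print(round(fib4(54, 40, 50, 220), 2))  # 1.39
```

Commonly cited cut-offs are roughly 1.3 (low risk) and 2.67 (higher risk of advanced fibrosis), although exact thresholds vary between guidelines.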

Non-invasive tests performance

A total of 65 eligible studies, comprising 2,518 patients with biopsy-proven NAFLD and a mean age of 54 years (44.7% female), were analysed. Participants were followed for a median of 57 months and the composite endpoint occurred in 5.8% of patients.

At five years, the tAUC was 0.72 (95% CI 0.62 – 0.81) for histology, 0.76 (0.70 – 0.83) for LSM-VCTE, 0.74 (0.64 – 0.82) for FIB-4 and 0.70 (0.63 – 0.80) for NFS. However, pairwise differences were not significant.

Also at five years, there were similar sensitivities and specificities for histologically diagnosed cirrhosis and the three non-invasive markers. For example, histology had a cumulative sensitivity of 33.3% and a specificity of 90.5% and LSM-VCTE a sensitivity of 29.4% and a specificity of 92%.

The authors concluded that, given that the non-invasive tests performed as well as histologically assessed fibrosis in predicting clinical outcomes in patients with NAFLD, the tests could be considered as alternatives to liver biopsy in some cases.

NAFLD in context

The global prevalence of NAFLD is estimated at 24%, and the condition carries a high risk of morbidity and mortality, largely from extra-hepatic cancer and cirrhosis.

While it is recognised that more severe fibrosis – that is, stages F3 and F4 – is associated with increased risks of liver-related complications and death, fibrosis staging can only be evaluated with a liver biopsy.

Although several non-invasive biomarkers exist and show good diagnostic accuracy, the prognostic value of these non-invasive tests in comparison to liver histology has not previously been assessed.

SGLT2 inhibitor use associated with lower risk of cancer

SGLT2 inhibitor drug use in patients with diabetes appears to reduce cancer risk, according to a retrospective analysis by Taiwanese researchers.

Epidemiological evidence suggests that diabetes increases the risk of cancer. It is thought the combination of hyper-insulinaemia, chronic inflammation and hyperglycaemia could increase the growth of tumours. Consequently, it may be possible to reduce this risk with anti-diabetic treatment. A recent meta-analysis of randomised clinical trials suggests that SGLT2 inhibitor drugs could lower cancer risk compared to placebo. However, the extent to which these drugs might reduce the risk of cancer in practice is less clear.

The current study, published in the Journal of Diabetes and its Complications, retrospectively compared cancer development between SGLT2 inhibitor users and a matched group of patients not prescribed these drugs. The primary outcome was cancer development and the analysis adjusted for several potential confounders.

SGLT2 inhibitor use and cancer risk

A cohort of 325,990 SGLT2 inhibitor users with a mean age of 58.6 years (42.2% female) and 325,989 non-users was identified.

SGLT2 inhibitor users had a significantly lower cancer risk (adjusted hazard ratio, aHR = 0.79, 95% CI 0.76 – 0.83) than non-users. The risk of cancer was also higher among males (aHR = 1.35, 95% CI 1.30 – 1.41) and in patients aged 50-64 and older than 65 years.

Researchers also noticed that this risk reduction was dependent on the duration of SGLT2 inhibitor use. Short-term use (60-140 days) was actually linked to a higher cancer risk (aHR = 1.30, 95% CI 1.21 – 1.39).

In addition, while cancer risk was generally lower, there was a significant increased risk of pancreatic cancer (aHR = 1.51, 95% CI 1.22 – 1.87) which was consistent with the findings of a recent case study.

Ribociclib with endocrine therapy improves survival in early-stage breast cancer

The combination of ribociclib and standard endocrine therapy improves invasive disease-free survival rates more than endocrine therapy alone in patients with early-stage, hormone receptor-positive, HER2-negative (HR+/HER2-) breast cancer, according to recent data presented at ASCO 2023.

The standard treatment for HR+/HER2- advanced or metastatic breast cancer involves the use of endocrine therapies such as aromatase inhibitors, selective oestrogen receptor (ER) modulators, and selective ER down-regulators. However, a problem with this approach is the development of treatment resistance, which therefore requires additional therapeutic strategies.

Ribociclib is a cyclin-dependent kinase 4/6 inhibitor, approved in combination with endocrine therapy for treatment of HR+/HER2- advanced or metastatic breast cancer in both pre- and post-menopausal women.

In a 2019 study, use of ribociclib and endocrine therapy significantly improved progression-free survival and had manageable toxicity in both pre- or peri-menopausal and postmenopausal women with HR+/HER2- metastatic or advanced breast cancer. However, whether this combination would benefit women with early stage breast cancer remains unclear.

The current NATALEE trial – a phase 3 multi-centre, randomised, open-label trial – is designed to evaluate the efficacy and safety of ribociclib with endocrine therapy as adjuvant treatment in patients with HR+/HER2- early breast cancer.

Ribociclib and invasive disease-free survival

In an abstract presented at the American Society of Clinical Oncology 2023, women with early stage breast cancer given ribociclib were found to have a statistically significant and clinically meaningful improvement in invasive disease-free survival (iDFS) compared to standard endocrine therapy alone.

For the trial, both pre- and post-menopausal women were randomised 1:1 to ribociclib (RIB) and endocrine therapy (ET) or ET alone for up to three years. RIB was given at a dose of 400 mg daily (three weeks on and one week off) and ET comprised letrozole 2.5 mg daily or anastrozole 1 mg daily for at least five years.

In total, 5,101 participants were randomised to either RIB and ET (2,549) or ET alone and followed for a median of 34 months. The combination of RIB and ET showed a significantly longer iDFS than ET alone (hazard ratio, HR = 0.75, 95% CI 0.62 – 0.91, p = 0.014). In addition, the three-year iDFS rates were 90.4% vs 87.1%, respectively.

The abstract also reported how this iDFS benefit was generally consistent across stratification factors and other subgroups. Moreover, while not reported, the authors indicated that the secondary endpoints of overall survival, recurrence-free survival and distant disease–free survival consistently favoured RIB. In addition, at the dosage used (400 mg), RIB had a favourable safety profile with no new signals.

Use of colchicine associated with reduced risk of hip and knee replacement

8th June 2023

In an exploratory analysis of low-dose colchicine use to reduce the occurrence of cardiovascular events, researchers have found that the drug lowers the incidence of hip and knee replacements for osteoarthritis (OA).

Published in the Annals of Internal Medicine, researchers examined the effect of colchicine on the incidence of hip and knee replacements in participants within the LoDoCo2 trial. This randomised, double-blind, placebo-controlled trial was designed to explore the effects of colchicine on the risk of cardiovascular events in patients with a recent myocardial infarction. Patients with coronary artery disease received either colchicine 0.5 mg daily or matching placebo.

In the exploratory analysis of data from LoDoCo2, the primary outcome of interest was the time to the first knee or hip replacement.

Effect of colchicine on knee and hip replacements

A total of 5,478 participants with a mean age of 66 years (15.3% female), were randomised to either colchicine (2,762) or placebo and followed for a median of 28.6 months.

Overall, the use of colchicine reduced the risk of knee or hip replacement by 31% (hazard ratio, HR = 0.69, 95% CI 0.51 – 0.95).
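The percentage figures quoted alongside hazard ratios in these reports follow from simple arithmetic: a hazard ratio below 1 implies a proportional risk reduction, and one above 1 an increase. A generic illustration of that conversion (not the study's own analysis code):

```python
# Convert a hazard ratio (HR) into the percentage risk change quoted in
# the text. Negative values indicate a reduction, positive an increase.
def percent_risk_change(hazard_ratio: float) -> float:
    """Relative risk change implied by a hazard ratio, in percent."""
    return (hazard_ratio - 1) * 100

print(round(percent_risk_change(0.69)))  # -31 -> the 31% reduction above
print(round(percent_risk_change(1.30)))  # 30 -> a 30% increase
```

Strictly, a hazard ratio describes instantaneous event rates rather than cumulative risk, but this shorthand is how the figures in the text are derived.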

During a sensitivity analysis, this finding remained significant when excluding patients with gout (HR = 0.68, 95% CI 0.49 – 0.94) and those with a knee or hip replacement within the first three (HR = 0.61) or six (HR = 0.58) months after randomisation.

Whilst the trial did not specifically collect data on the incidence of OA, or any information on joint pain or physical functioning, colchicine clearly had some effect on the course of the disease. The authors called for further studies to investigate colchicine therapy in slowing disease progression in OA.

Joint replacements in context

OA is a condition associated with cartilage destruction, subchondral bone remodelling and inflammation of the synovial membrane, which is thought to be due to an increased release of pro-inflammatory cytokines. In fact, canakinumab, an inhibitor of the pro-inflammatory cytokine interleukin-1β, has been shown to reduce the incidence of both knee and hip replacement in patients with OA.

Colchicine works through tubulin disruption, an action that down-regulates multiple inflammatory pathways, suggesting that the drug may be of value as an anti-inflammatory agent.

Continued smoking after cancer diagnosis increases risk of adverse CVD event

Ongoing smoking following a cancer diagnosis elevates the risk of adverse cardiovascular disease (CVD) events, according to the findings of a study by Korean researchers.

Emerging data indicates how continued smoking following a cancer diagnosis increases the risk of cardiovascular disease mortality.

In fact, nearly 20% of cancer survivors continue to smoke. However, the differential effect on adverse cardiovascular outcomes, of either quitting smoking, cutting down or continuing to smoke is less clear.

The current study, published in the European Heart Journal, assessed the effect of changes to smoking habits on adverse cardiovascular outcomes.

Smoking status was assessed two years before and three years after a cancer diagnosis. Participants were categorised as non-smokers; quitters; initiators and relapsers; and continued smokers.

The primary outcome was a composite of CVD events, comprising hospitalisation for myocardial infarction or stroke, or CVD death.

Smoking status and adverse cardiovascular outcomes

Among 309,095 cancer survivors with a median age of 59 years (51.8% women), 80.9% were non-smokers, 10.1% quit, 7.5% initiated or relapsed to smoking and 7.5% continued to smoke.

During a median follow-up of 5.5 years, 10,255 new CVD events occurred. Using non-smokers as the reference point, the adjusted hazard ratio (aHR) for a CVD event among quitters was 20% higher (aHR = 1.20, 95% CI 1.12 – 1.28). But among those who continued to smoke, the risk was 86% higher (aHR = 1.86, 95% CI 1.74 – 1.98).

There were clear benefits for those who quit smoking. For example, the CVD event risk was significantly lower among those who quit compared to participants continuing to smoke (aHR = 0.64, 95% CI 0.59 – 0.70).

These findings were consistent across both sexes as well as when classifying participants according to their primary cancers. Among those who continued to smoke, cutting down had no effect on their risk of a CVD event (HR = 0.99, 95% CI 0.80 – 1.22).

The authors suggested that continued smoking after a cancer diagnosis and its association with CVD events highlighted the urgent need for initiatives to promote smoking cessation and prevent smoking initiation and/or relapse among patients with cancer.

Diagnostic spirometry for COPD on the rise

Diagnostic spirometry is increasingly used to confirm the presence of chronic obstructive pulmonary disease (COPD) but there are still existing barriers to more widespread use, according to a recent analysis.

Published in the journal NPJ Primary Care Respiratory Medicine, Swedish researchers examined whether the proportion of patients with diagnostic spirometry had increased over time. The team originally explored spirometry use in 2005 but re-assessed the level of use following the introduction of national guidelines in 2014. In the current study, they also set out to determine any factors associated with omitted or incorrectly interpreted spirometry.  

Using data from medical reviews and a questionnaire completed by primary and secondary care patients diagnosed with COPD between 2004 and 2010, the researchers compared the findings with those from a cohort diagnosed between 2000 and 2003.

Changes in use of diagnostic spirometry

Among 703 patients with a COPD diagnosis between 2004 and 2010, 88% had diagnostic spirometry, compared with 59% (p < 0.001) in the previous cohort. Furthermore, the correct interpretation of spirometry results also increased between the two periods (75% vs 82%; p = 0.010).

In further analysis, it became clear that factors associated with not having diagnostic spirometry were: current smoking (odds ratio, OR = 2.21, 95% CI 1.36 – 3.60), low educational level (OR = 1.81, 95% CI 1.09 – 3.02) and being managed in primary care (OR = 2.28, 95% CI 1.02 – 5.14). The authors speculated that the lower use of spirometry in current smokers was largely because physicians probably felt the diagnosis was more likely and hence did not require confirmation.

While greater use of diagnostic spirometry was encouraging, the authors suggested that there was still a need for continuous medical educational activities to increase diagnostic accuracy.

Spirometry in context

The use of diagnostic spirometry has been advocated as a means to identify COPD in those with airflow obstruction and respiratory symptoms. However, spirometry is under-used in practice, with a real-world study finding that data from the technique was only used in 43.5% of nearly 60,000 COPD patients.

In fact, not using diagnostic spirometry potentially means that patients could be either under- or over-diagnosed with the condition. For example, it has been suggested that approximately 70% of COPD worldwide may be under-diagnosed and 30-60% of patients over-diagnosed. 

An inadequate assessment with diagnostic spirometry has important implications for patient management. For example, a late COPD diagnosis can result in a higher exacerbation rate, increased comorbidities and higher costs compared with an early diagnosis.

Should clinicians be sceptical about the reported findings in oncology randomised clinical trials?

A recent analysis suggests changes to primary endpoints occur frequently in randomised clinical trials in oncology, representing a risk of suboptimal – or even potentially harmful – patient care, together with outcome reporting bias. Rod Tucker investigates what this means for clinicians.

Evidence-based clinical practice is a fundamental paradigm of modern medicine and of huge importance in areas such as oncology where treatments have the potential to be curative. Consequently, there is an expectation that the data derived from randomised clinical trials (RCTs) – which reflect the highest level of scientific evidence – are complete, accurate and unbiased.

Equally important in RCTs is the primary endpoint. This represents the most important outcome and serves to assess the main objective of a trial. When designing a randomised trial, a crucial principle is that the endpoints, especially the primary endpoint, are set in advance. Failure to do so can introduce bias but, more worryingly, creates opportunities for manipulation.

A lack of transparency

The importance of setting the primary outcome prior to commencing a study and not deviating from the original protocol, was first highlighted in 1990 by Jay Siegel, a physician and research scientist working for the FDA in the US. Siegel discussed how the published description of a study could differ from the original protocol – for instance, changes to the primary endpoint, sample size or statistical tests.

Thus, in a double-blind RCT, if researchers discovered, once the masked allocation was revealed, that the results didn’t suit their needs, it might be possible to ‘improve’ the findings through manipulation of the data. Siegel suggested that this might happen by reporting on fewer patients or even focusing on the findings from a particular subgroup. In the absence of transparency, he felt that both the pre-publication reviewers and those reading the final published article would not be aware of any retrospective protocol modifications and would thus be unable to judge the reliability of the final results.

This lack of transparency in clinical trial reporting, exemplified by inadequate access to trial protocols and selective reporting in the final published results, had been suspected for many years, but there was a lack of proof that the practice occurred.

This suspicion was confirmed in a 2004 analysis by researchers from Oxford. In an analysis of 102 trials, the researchers uncovered that 50% of efficacy and 65% of harm outcomes per trial were incompletely reported. In addition, when the published articles were compared to the original protocol, 62% of trials had at least one primary outcome that was changed, introduced, or omitted.

Later work simply re-affirmed this finding. In a 2015 systematic review of registered and published outcomes in randomised trials, researchers found that discrepancies between registered and published outcomes of clinical trials were common. Furthermore, this occurred regardless of either the body funding the study or the journal in which the articles were published.

Frequency of primary endpoint changes

But did the use of selective reporting influence trial outcomes? In other words, how often did researchers alter the original primary outcomes to ‘fit’ the available data and consequently only publish positive findings?

This was the question considered in a recent study published in JAMA Network Open by a multi-disciplinary team of US researchers. Focusing on RCTs in oncology, the team examined the frequency with which changes to the primary endpoint were made (i.e. deviations from the original protocol) and, more importantly, if these changes were reported by the authors of a published study.

In addition, the team went one step further and also explored whether there was a relationship between changes to the primary endpoint and the trial outcomes. They examined possible changes using three different methods: the history of tracked changes on ClinicalTrials.gov; self-reported changes noted in the article; and changes reported within the protocol, including all available protocol documents.

With data from the inception of ClinicalTrials.gov to February 2020, researchers uncovered 755 trials, of which 19.2% (145 trials) had a primary endpoint change detectable by at least one of the three methods. Somewhat concerning was that, among these 145 trials, 70.3% failed to disclose the change in the primary endpoint within the published manuscript. In further analysis, the researchers found that changes to the primary outcome were significantly associated with trial positivity (odds ratio, OR = 1.86, 95% CI 1.25 – 2.82, p = 0.003). While the study did not provide ‘proof’ of selective reporting, the findings did suggest that researchers had changed the outcome of interest to, as Siegel suggested 30 years earlier, ‘improve’ the results.

There are, of course, perfectly legitimate reasons for making changes to a primary endpoint in a study. For example, a high rate of discontinuation or slow accrual of suitable patients would make it difficult to fully assess the impact of a particular intervention. It is also possible that, as the trial continues, emerging evidence from other studies may suggest a more appropriate endpoint. Nevertheless, it is incumbent on trialists to communicate such changes. Based on the current evidence, this does not appear to be a common practice.

Could RCT reporting standards be improved?

Clinical trialists have, for many years, had access to the Consolidated Standards of Reporting Trials (CONSORT) guidance, which was first introduced in 1996. This provided a checklist and flow diagram that authors should use for reporting a randomised clinical trial. Updated in 2010, the statement was further revised in 2022, providing 17 outcome-specific items that should be addressed in all published clinical trial reports, designed to increase transparency and hopefully to minimise the risk of selective non-reporting of trial results.

Although revising primary endpoints after commencement of a clinical trial is not unreasonable, especially in light of new and relevant scientific knowledge, the practice should remain uncommon. A further difficulty is that many high-impact journals, while endorsing CONSORT, fail to ensure compliance with the guidance, as revealed in a recent analysis, with many rejecting correction letters documenting study shortcomings.

Whether reporting of changes to primary endpoints or other outcomes will become mandatory in the near future remains uncertain. With nearly a fifth of trials changing their primary endpoint – and most failing to disclose it – it seems likely that selective reporting will continue to plague trials. At the present time, full transparency remains an ambition rather than a reality. Clinicians therefore have every right to remain somewhat sceptical about the reported findings in oncology randomised clinical trials.

Could supplementing with vitamin D reduce the risk of long Covid?

A recent study showing that patients with long Covid had reduced vitamin D levels, raises the possibility that supplementing with the vitamin to ensure adequate levels may protect against this post-infection sequela. Rod Tucker considers the evidence.

A recent observational study published in The Journal of Clinical Endocrinology & Metabolism (JCEM), found that among patients initially hospitalised with Covid-19, those who developed long Covid had significantly lower vitamin D levels than a group of matched controls without the condition six months later.

This suggests that supplementing with the vitamin could help mitigate long Covid. Nevertheless, for supplementation to prevent long Covid (also referred to as post-Covid condition), vitamin D would also need to play a role in protecting against infection with the virus in the first place.

With more than three years having passed since the start of the pandemic, is there now convincing evidence that supplementing with vitamin D reduces the risk of infection with Covid-19 virus and therefore the risk of long Covid?

Vitamin D and Covid-19

The role of vitamin D in protecting against viral infections has been known for some time. In 2013, a systematic review of 11 placebo-controlled trials found that vitamin D had a protective effect against respiratory tract infections and these findings were confirmed in an update review from 2021.

Following the emergence of the Covid-19 virus, there was renewed interest in these findings, given how Covid-19 was perceived as a respiratory pathogen. Additionally, further support for the potential role of vitamin D came from observational studies demonstrating an inverse relationship between serum 25-hydroxyvitamin D concentrations and the incidence or severity of Covid-19 and how low serum vitamin D levels represented an independent risk factor for both Covid-19 infection and hospitalisation.

Given the relationship between low serum vitamin D levels and Covid-19, could supplementing with the vitamin reduce the risk of infection? The answer, it seems, is only maybe. The available evidence is somewhat mixed, though one study sounded a note of caution, suggesting that supplementing with the vitamin was potentially harmful.

This caution arose from an Italian study, undertaken in Lombardy during the early part of the pandemic, among those hospitalised with Covid-19. It observed that in patients previously taking vitamin D supplements, there was a trend towards higher mortality. As this was an observational study, the evidence was potentially unreliable. The real proof could only be derived from a randomised trial.

Fortunately, one such trial in a group of front-line healthcare workers with suboptimal vitamin D levels, found a significantly lower risk of infection among those supplementing with vitamin D. In contrast, however, a test-and-treat study in patients with suboptimal vitamin D levels, concluded that treatment with the vitamin was not associated with a reduction in the risk of Covid-19 infection.

Other work showed how giving patients hospitalised with Covid-19 a vitamin D boost had no effect on their length of hospital stay, but the use of vitamin D did reduce the risk of death and intensive care unit (ICU) admission. More recently, a Spanish study comparing cholecalciferol and calcifediol supplementation observed that both were associated with a lower risk of infection, less severe infection and a lower risk of death, but only where serum vitamin D levels exceeded 30 ng/ml.

A protective role against long Covid

Although not definitive, it seems that vitamin D may well offer some protection against infection with Covid-19 and the subsequent adverse health outcomes such as ICU admission. But does vitamin D also exert a protective role against the development of long Covid and would supplementation help, particularly those with low serum levels?

Unfortunately, the answer is far from clear, but the available data does not seem to hold much promise. For instance, in one analysis of those with long Covid, characterised predominantly by fatigue and reduced exercise tolerance, the authors concluded that these symptoms were independent of vitamin D levels. Other work has also failed to identify any relationship between vitamin D deficiency and post-Covid symptoms.

Part of the problem is that the risk factors for developing long Covid are less well defined than for infection. Long Covid is associated with a myriad of both physical and mental symptoms that negatively impact on quality of life. It has even been suggested that many long Covid symptoms are not directly due to the virus itself, but that the Covid virus is able to reactivate the Epstein-Barr virus.

While supplementing with vitamin D may well help to reduce the risk of developing a Covid-19 infection and the subsequent Covid-related adverse outcomes, the role of the vitamin in long Covid is more uncertain. Although the recent JCEM study highlighted that low vitamin D levels were associated with long Covid, there is an urgent need for randomised trials to explore the possible value of supplementing with the vitamin as a means of attenuating this debilitating post-infection sequela.

Are anti-diabetic drugs the silver bullet for the obesity epidemic?

5th June 2023

Anti-diabetic drugs lead to weight loss, providing a much-needed impetus in the fight against the rising global obesity epidemic, but are these drugs the ultimate solution? Rod Tucker investigates.

According to recent data released by Boehringer Ingelheim and Zealand Pharma, their novel glucagon/GLP-1 receptor dual agonist, BI 456906, designed as an anti-diabetic medicine, gave rise to a 14.9% weight loss in people who were obese or overweight, compared with placebo.

Though not yet commercially available, BI 456906 is likely to join a long list of anti-diabetic treatments being used in the fight against obesity. Such innovations are urgently needed given the inexorable rise in global obesity levels.

For example, a recent report from the World Obesity Federation described how in 2020, an estimated 2.6 billion people globally had a body mass index (BMI) greater than 25 and were therefore classed as overweight. This figure is projected to rise to four billion by 2035.
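BMI itself is simply weight in kilograms divided by height in metres squared. A minimal sketch applying the standard WHO categories (the function names and example values are ours, for illustration only):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def who_category(value: float) -> str:
    """Map a BMI value to the standard WHO weight category."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

# Hypothetical example: 80 kg at 1.75 m tall
score = bmi(80, 1.75)
print(round(score, 1), who_category(score))  # 26.1 overweight
```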

Obesity therefore represents a major public health concern, especially given how it is associated with as many as 18 co-morbidities including cardio-metabolic disorders and several types of cancer.

The pharmacological management of obesity has always been challenging, with the currently available anti-obesity medications often delivering insufficient efficacy. Part of the problem has been unravelling the complex hormonal milieu that exists in obese individuals and which hormones to target with drugs.

Despite this, an incidental finding in the late 1980s paved the way for the current paradigm in obesity management, yet it was to take many more years before researchers fully appreciated the implications of what they had discovered.

Anti-diabetic role of GLP-1

In 1998, it was already known that glucagon-like peptide 1 (GLP-1), a hormone that caused the release of insulin from the pancreas and suppressed glucagon release, also produced an anti-diabetic effect through lowering blood sugar. But when researchers gave an intravenous infusion of GLP-1 to healthy young men, it not only lowered blood glucose but enhanced satiety and fullness, reducing energy intake by up to 12% compared to saline.

At the time, researchers failed to appreciate the importance of these results, and it took more than 20 years for the weight-lowering effect of GLP-1 agonists to be understood. This effect came to prominence in a 2017 study of the GLP-1 agonist semaglutide in those with type 2 diabetes. The drug led to weight losses of up to five kilograms, prompting a further study – this time in overweight and obese patients without diabetes.

The results showed that a weekly injection of semaglutide in people with a BMI greater than 30 led to a mean reduction in body weight of 14.9%, compared with only 2.4% with placebo. As an added bonus, the drug also improved cardiometabolic risk factors.

But GLP-1 was not the only hormonal target in obesity. Glucose-dependent insulinotropic polypeptide (GIP) is a gut hormone responsible for increasing insulin secretion after the intake of oral glucose. The drug tirzepatide, for example, which is a dual GLP-1-GIP agonist, is able to reduce body weight by more than 20%.

Another somewhat counter-intuitive target is the glucagon receptor. Under normal circumstances, stimulation of this receptor increases the release of glucose, but work in 2009 demonstrated how a glucagon-GLP-1 co-agonist reduced body weight in diet-induced obese mice. Scientists are now taking this innovation a step further with triple receptor agonist drugs, most recently LY3437943, which has been shown to decrease body weight.

Obesity management

With anti-diabetic drugs now incorporated into obesity management guidelines, have we at last found the silver bullet to stem the rising tide of obesity? Probably not.

It’s widely acknowledged that these new anti-obesity drugs are only one part of a comprehensive approach to obesity management, alongside diet and exercise. Indeed, NICE recommended use of semaglutide alongside a reduced-calorie diet and increased physical activity as a therapeutic option for weight loss in March 2023. Furthermore, once stopped, any weight lost with these drugs is quickly regained, largely because the decline in energy expenditure favouring weight regain persists long after the period of weight loss.

Ultimately, perhaps weight loss per se should not be seen as the most relevant metric. Patients who lose weight with anti-diabetic drugs also need to adopt a healthy diet and accept increased physical activity as a life-long norm.

Although some degree of weight regain is inevitable once treatment has stopped, it is recognised that the adoption of healthy lifestyle habits reduces mortality, irrespective of body mass index.

Ropinirole delays time to disease progression in amyotrophic lateral sclerosis

2nd June 2023

Ropinirole use in amyotrophic lateral sclerosis (ALS) significantly delays the time to the first disease progression, according to a recent, but small, randomised placebo-controlled trial with an open-label extension phase.

ALS is a rare neurological disease affecting motor neurons (MNs) in the brain and spinal cord that control voluntary muscle movement. It is a progressive disease characterised by muscle atrophy and weakness caused by the selective vulnerability of MNs. In Europe, only one drug, riluzole, is licensed for the treatment of ALS. Ropinirole is a dopamine D2 receptor agonist approved for use in the treatment of Parkinson’s disease. Recently, however, drug-based screening studies have revealed how ropinirole may also be effective in ALS.

In the current study, published in the journal Cell Stem Cell, Japanese researchers conducted a phase 1/2a, randomised, double-blind, placebo-controlled trial, followed by an open-label extension, of ropinirole in patients with ALS. A total of 20 patients with sporadic ALS received either ropinirole or placebo (3:1) for 24 weeks in the double-blind phase of the study, in which safety, tolerability and the therapeutic efficacy were assessed. This was followed by a four- to 24-week open-label extension study in which patients originally assigned to placebo switched to ropinirole.

For the study, a number of parameters were assessed, with the primary outcome based on adverse events. In total, there were 11 secondary outcomes, including functional outcomes such as the revised ALS functional rating scale (ALSFRS-R) score, which assessed patients’ disability; survival; and time to the first disease progression event.

Effect of ropinirole on functional outcomes

In the 24-week double-blind part of the trial, 13 patients received ropinirole and seven a placebo. There was no significant difference in the overall incidence of adverse events (92.3% vs 85.7%, ropinirole vs placebo). In addition, there was no significant effect of treatment on ALSFRS-R scores (mean between-group difference, MBGD = 1.46, 95% CI -3.15 to 6.07).

However, beyond the initial 24-week period, there was a persistent increase in the between-group difference in ALSFRS-R scores (MBGD = 9.86, 95% CI 4.07 – 15.66, p < 0.001). Differences in survival favouring ropinirole were only apparent during the open-label extension phase. However, across the whole trial period, this difference was significant (median difference = 9.0, 95% CI 1 – 12).

One of the most impressive findings was how the ropinirole group had a longer period of time (27.9 weeks) before their first disease progression event (p = 0.008).

The authors estimated the effect size of ropinirole on the ALSFRS-R score over 48 weeks to be 1.46 to 9.86. This translated into a 21-60% slower rate of functional decline, which was considered to be clinically meaningful. Nevertheless, they recognised that the small sample size – of 18 completing the 24-week double-blind phase and only eight completing the extension phase – limited the generalisability of their findings.
