Take a look at a selection of our recent media coverage:
7th July 2023
Cases of melanoma in situ are on the rise, but why is this the case and does it pose a greater risk of death among those diagnosed with the cancer? Clinical writer Rod Tucker considers the evidence.
Globally, there were an estimated 325,000 diagnoses of cutaneous melanoma – or simply melanoma – in 2020, with a corresponding 57,000 deaths. Based on current trends, it is estimated that there could be a 68% increase in melanoma-related deaths by 2040, although there are hopes that a melanoma vaccine used in the advanced stages of the disease might reduce these deaths.
Melanoma is now the 17th most common cancer worldwide, with cases steadily rising over the last 40 years. For example, the average annual number of melanoma cases in men increased from 837 between 1981 and 1985 to 6,963 between 2016 and 2018 – a more than eight-fold increase.
While invasive melanoma cases have increased substantially over the last 30 years, there has been an even greater rise in the incidence of melanoma in situ, also known as stage 0 melanoma, which is essentially a benign lesion. The magnitude of this increase was revealed in a 2022 comparative study of invasive and in situ melanoma incidence in three predominantly white populations in Australia, the US and Scotland.
The team considered the average annual percentage change (AAPC) for the two types of melanoma and the results were somewhat surprising. While there was a significant increase in the AAPC for invasive melanoma in all three areas between 1982 and 2018, the increase was far larger for melanoma in situ. For example, in white US males, the AAPC for invasive melanoma was 2.9, whereas for melanoma in situ it was 8.7. A similar pattern was seen across all three populations.
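As a rough illustration of what these figures imply, and assuming the average rates applied across the whole 1982 to 2018 period, an AAPC of 2.9% compounds to approximately a 2.8-fold increase over 36 years (1.029^36 ≈ 2.8), whereas an AAPC of 8.7% compounds to roughly a 20-fold increase (1.087^36 ≈ 20), a striking difference in scale between the two diagnoses.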
The purported culprit for the rise in melanoma cases is increased sun exposure, largely because of its UV radiation component. But does this really account for the observed increase in incidence?
If greater exposure to sunlight is believed to be responsible for the higher levels of melanoma, what does the research actually show?
In a meta-analysis of risk factors for melanoma focusing on sun – and therefore UV radiation – exposure, the authors extracted data from 57 studies and provided relative risk (RR) estimates for the different types of sun exposure. For total sun exposure, the risk of melanoma increased by 34% (RR = 1.34) and by 61% for intermittent exposure (RR = 1.61). In addition, there was a much stronger risk in those with a history of sunburn (RR = 2.03).
Yet, even in the worst-case scenario, exposure to sunlight leads to no more than a two-fold increase in the risk of melanoma, much less than the eight-fold increase alluded to earlier. So what else could account for this discrepancy?
One possible reason for the higher incidence of melanoma in situ is that it is merely an artefact of increased skin cancer screening. This, it seems, is a more plausible explanation and there is some evidence to support it.
One study from 2012 suggested that today’s dermatopathologists are more inclined to diagnose melanoma than their predecessors. The study looked at biopsy specimens diagnosed as dysplastic nevi with severe atypia from 20 years earlier. Interestingly, researchers observed a general trend towards the reclassification of severely atypical melanocytic tumours as melanoma by all study participants. As the diagnosis of cutaneous melanocytic lesions relies on a pathologist’s visual assessment of biopsy material, it is ultimately somewhat subjective.
This point was clearly illustrated in a study from 2017 which considered pathologists’ diagnoses of melanocytic skin lesions. When a pathologist was asked to rate the same skin biopsies on two occasions, eight months apart, they demonstrated good reproducibility. However, agreement between different pathologists (interobserver reproducibility) was much lower and, as the authors concluded, ‘diagnoses spanning moderately dysplastic nevi to early stage invasive melanoma were neither reproducible nor accurate.’
If the rise in overall melanoma diagnoses merely reflects a greater detection of melanoma in situ, is this still a cause for concern? To put it another way, to what extent do these benign lesions undergo malignant transformation and thus increase the risk of death?
This was the question addressed in a recent study published in JAMA Dermatology. The researchers examined the mortality risk among those diagnosed with a melanoma in situ. Following 137,872 patients with a first and only diagnosis of melanoma in situ, the researchers observed that 15 years after their diagnosis, the melanoma-specific survival was 98.4%.
Perhaps more interesting was the fact that the 15-year relative survival compared to the general population was 112.4%. In other words, virtually no one diagnosed with a melanoma in situ died from the disease within 15 years and, as a group, these patients actually appeared to outlive the matched general population.
But what if a melanoma in situ becomes invasive? Using a subgroup of 156,251 patients, the researchers observed that the 15-year cumulative incidence of a second primary invasive melanoma was 8.9%, and 14.1% for a second primary melanoma in situ. Nonetheless, this had little impact on either 15-year melanoma-specific or relative survival, which remained virtually unchanged at 98.2% and 126.7% respectively. Even among those who developed a second melanoma in situ, there was only a small reduction in both 15-year melanoma-specific and relative survival.
Based on these findings, questions have been raised as to whether it is time to stop skin cancer screening. In fact, this was the subject of a recent study by Australian researchers who tried to estimate the possible risk of overdiagnosis of melanoma by comparing the subsequent melanoma incidence and biopsy rates among people who either did or did not undergo skin screening.
When analysing all melanomas arising within the first year following recruitment, those screened had a 59% higher risk of a melanoma in situ but no increased risk of invasive melanoma. The authors calculated that the absolute risk of developing a melanoma after five years would be 1.94% in those who underwent screening compared to 1.45% in those who were not screened. This, they said, equates to a number needed to screen to detect one excess melanoma of about 206.
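To put that figure in context, it follows directly from the absolute risk difference: using the rounded risks above, one excess melanoma is detected for roughly every 1 ÷ (1.94% − 1.45%) = 1 ÷ 0.49% ≈ 204 people screened, consistent with the authors’ estimate of about 206 based on the unrounded data.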
Although this was a theoretical study, real-world data also suggest that failure to screen patients does not increase the risk of missed diagnoses. For example, the interruption of screening due to Covid-19 did not lead to melanomas with more unfavourable primary tumour characteristics.
While it might seem prudent to continue screening only for high-risk patients, this is by no means an easy task given that melanoma risk is multifactorial, reflecting both genetic and personal attributes.
With the development of new AI-based technology, it is anticipated that skin cancer screening could become both faster and possibly more targeted. Nevertheless, medico-legal concerns over a missed diagnosis are unlikely to recede in the near future, prompting an increased level of screening.
Perhaps it is time to raise the diagnostic bar for biopsies and avoid labelling essentially benign lesions such as melanoma in situ as cancer, especially given the devastating impact of a cancer diagnosis – even if unwarranted – on a patient’s wellbeing.
6th July 2023
Premature ovarian insufficiency (POI) in the majority of women is not due to autosomal dominant variants in genes, according to a recent analysis.
In a study published in the journal Nature Medicine, researchers set out to systematically evaluate the penetrance of purported pathogenic gene variants using exome sequence data in women with POI.
They considered 67 genes used in the Genomics England diagnostic gene panel for POI and identified a further 38 genes from the wider literature.
The team used exome sequence data in 104,733 women from the UK Biobank, 2,231 of whom (1.14%) reported natural menopause under the age of 40 years.
The analysis showed limited evidence to support any previously reported autosomal dominant effect. In fact, for nearly all heterozygous effects on previously reported POI genes, researchers were able to rule out even modest penetrance. For instance, 99.9% of all protein-truncating variants were present in reproductively healthy women.
Taken together, these results suggest that in the vast majority of women, POI is not actually caused by autosomal dominant variants either in genes previously reported or currently evaluated in clinical diagnostic panels.
In other words, while specific genetic variants are found in women who experience premature menopause, it is unlikely that these variants are the underlying cause, since many are also found in women who experience menopause at a normal age.
POI affects an estimated 1% of the general population and results in the cessation of ovarian function, and with it menstruation, before the age of 40 years. In recent years, it has become clear that POI is likely to have a genetic basis and, although several candidate genes have been identified, it appears to be a genetically heterogeneous condition.
Colchicine is a drug traditionally used for an acute attack of gout, but its most recent FDA approval has seen it repurposed for the management of atherosclerotic cardiovascular disease. Clinical writer Rod Tucker considers the evidence and what this means for CVD management.
Most clinicians will be familiar with the use of the anti-inflammatory agent colchicine as a treatment for acute attacks of gout, which is surprising given the lack of good quality evidence for the drug. But recent events have put the drug on the map for a different purpose.
In late June 2023, the US Food and Drug Administration approved colchicine 0.5 mg for use in patients with cardiovascular disease to reduce adverse cardiac events. But how did a relatively inexpensive and widely used drug suddenly assume an important role in the management of atherosclerotic cardiovascular disease?
The prevailing wisdom is that atherosclerosis is due to the accumulation of cholesterol within the intima of arteries and therefore necessitates lipid-lowering therapy. An alternative cause, first mooted in 1999, has, until recently, been largely ignored. However, emerging evidence now implies that inflammation, rather than hypercholesterolaemia, is a more important driver of atherosclerosis, hence the rationale for the use of anti-inflammatory agents such as colchicine.
Recognition that inflammation has a significant role in the development of atherosclerosis grew following the publication of the CANTOS study with canakinumab, which targets the pro-inflammatory cytokine interleukin-1β. In the trial, canakinumab significantly reduced the primary efficacy endpoint of nonfatal myocardial infarction, nonfatal stroke or cardiovascular death compared with placebo.
While CANTOS clearly showed how reducing a single inflammatory marker lowered the risk of adverse cardiac events, earlier research had strongly implicated neutrophils in atherosclerosis.
The possible role of neutrophils in heart disease has been recognised for some time. In 1989, researchers identified an enhanced neutrophil function in patients with ischaemic heart disease although just where neutrophils sat in the pathophysiology of atherosclerosis remained unclear.
It was evident from a study in 1994 that inflammation was present at the immediate site of an atherosclerotic plaque rupture or erosion, leading to speculation that inflammatory changes had a pivotal role in destabilising the fibrous cap of an atherosclerotic plaque, enhancing the risk of coronary thrombosis.
The link between inflammation and neutrophils finally became much clearer in 2002, when it was discovered that neutrophil infiltration was actively associated with acute coronary events. Acknowledging the importance of neutrophils in cardiovascular disease, researchers then wondered whether a drug that could inhibit neutrophil function might be advantageous to patients with cardiovascular disease.
Colchicine works by blocking the assembly and polymerisation of microtubules. These microtubules have numerous roles within cells including maintenance of cell shape, intracellular trafficking, cytokine and chemokine secretion, cell migration and the regulation of ion channels and cell division. But one important consequence of preventing the formation of microtubules is interference with neutrophil adhesion and recruitment to inflamed tissue.
It therefore seemed possible that a drug such as colchicine might prove invaluable in patients with atherosclerotic cardiovascular disease. Whether this theoretical effect would benefit patients in practice remained to be seen.
The road to the current approval of colchicine in cardiovascular disease was a long one, and the earliest attempts were disappointing.
In a 1992 study, scientists explored the value of the drug at preventing restenosis in patients following angioplasty, although colchicine proved to be no more effective than placebo. Fast forward to 2013, and a study found that a daily dose of colchicine 0.5 mg added to statin therapy appeared to be effective for the secondary prevention of cardiovascular events in patients with stable coronary disease.
Over the next seven years, more positive findings rolled in. For example, the secondary preventative value of colchicine was replicated in a 2019 study. Additionally, colchicine reduced adverse outcomes in patients with any evidence of coronary disease and in those following either a recent (six to 24 months) or a prior (two to longer than seven years) acute coronary syndrome.
Assimilating the results from available studies, a 2021 meta-analysis of randomised trials with low-dose colchicine (0.5 mg), concluded that the drug lowered the risk of MACE, myocardial infarction, stroke and the need for coronary revascularisation in a broad spectrum of patients with coronary disease.
Given that atherosclerotic cardiovascular disease is largely assumed to be a direct consequence of elevated cholesterol, how important is the presence of inflammation?
A recent analysis, published in The Lancet, directly addressed this question. Researchers turned to three major statin trials in patients with, or at high risk of, atherosclerotic disease to analyse the relative importance of inflammation and hypercholesterolaemia. The findings were very clear: inflammation, rather than elevated LDL cholesterol, was the stronger predictor of future risk for both cardiovascular events and death.
While the mainstay of cardiovascular disease management over the past 20 years has been predicated on the notion that hypercholesterolaemia is a major cause, recent data does indeed suggest that inflammation is actually a more relevant prognostic marker.
With cardiovascular diseases still the leading cause of global deaths, the approval of colchicine is recognition of the need for a paradigm shift in the care of patients with the disease, and this will hopefully make a greater impact on overall mortality.
5th July 2023
A cyclophosphamide-based regime significantly reduces the incidence of graft-versus-host disease (GVHD) after stem-cell transplant compared to standard prophylaxis, according to the findings of a recent phase 3 randomised trial.
Published in the New England Journal of Medicine, the researchers randomly assigned adults with haematologic cancers undergoing allogeneic haematopoietic stem-cell transplantation to receive either cyclophosphamide-tacrolimus-mycophenolate mofetil (experimental prophylaxis) or tacrolimus-methotrexate (standard prophylaxis).
The primary endpoint was set as GVHD-free, relapse-free survival at one year, assessed in a time-to-event analysis. Events were defined as grade III or IV acute GVHD, chronic GVHD warranting systemic immunosuppression, disease relapse or progression and death from any cause.
This comes after a phase 2 trial confirmed that using tacrolimus with mycophenolate mofetil and post-transplantation cyclophosphamide was the most promising intervention for GVHD prophylaxis.
In the phase 3 trial, a total of 431 patients with a mean age of 64.3 years (60.3% male) were included, of whom 214 were assigned to the experimental regime and followed for a median of 12 months.
GVHD-free, relapse-free survival was significantly better in the experimental prophylaxis group (hazard ratio for GVHD, relapse or death, HR = 0.64, 95% CI 0.49 – 0.83, p = 0.001). In fact, after 12 months, the adjusted GVHD-free, relapse-free survival was 52.7% with the cyclophosphamide regime but only 34.9% with standard prophylaxis.
In addition, while there was less severe acute or chronic GVHD and a higher rate of immunosuppression-free survival at 12 months, there were no substantial differences between the two groups in overall survival, disease-free survival, relapse or transplantation-related death.
Combining a calcineurin inhibitor such as tacrolimus with methotrexate is standard therapy to prevent GVHD following marrow transplantation for leukaemia. Despite this, 30-50% of patients undergoing allogeneic haematopoietic-cell transplantation still develop acute GVHD.
Along with the latest phase 3 trial findings outlined above, emerging data suggests high-dose post-transplantation cyclophosphamide can safely and effectively limit GVHD.
Amlitelimab has met its primary endpoint of an improvement in disease severity of atopic dermatitis in a recent phase 2b dose-ranging trial, its manufacturer Sanofi has announced.
Amlitelimab provided statistically significant improvements in signs and symptoms of moderate-to-severe atopic dermatitis in adults whose disease could not be adequately controlled with topical medications or where topical medications are not a recommended treatment approach.
Amlitelimab is a non-depleting human IgG4 anti-OX40L monoclonal antibody. The OX40–OX40L interaction appears to play a central role in the pathogenesis of atopic dermatitis, and by binding to OX40L the drug blocks its interaction with OX40, restoring immune homeostasis between pro-inflammatory and anti-inflammatory T-cells. Consequently, the drug offers a potential first-in-class novel antibody therapy for patients with moderate to severe atopic dermatitis.
Amlitelimab (formerly KY1005) was found in a pharmacokinetic study to reduce skin redness, indicating its potential as a novel pharmacological treatment in immune-mediated disorders. Earlier work revealed how amlitelimab leads to a significant reduction of interleukin-13 levels with a corresponding reduction in dermatitis disease severity. Furthermore, the drug also improves disease severity through targeting interleukin-22.
The latest information released by Sanofi relates to STREAM-AD – a phase 2b, double-blind, five-arm study assessing amlitelimab in adult participants with moderate to severe atopic dermatitis. While the company did not provide specific details, it did report that the trial had enrolled 390 people from several countries. The primary endpoint was the percentage change in the Eczema Area and Severity Index score from baseline at 16 weeks.
The detailed efficacy and safety results from the trial are to be presented in a future scientific forum and, as amlitelimab is currently under clinical investigation, the safety and efficacy have not yet been evaluated by regulatory authorities.
Dr Naimish Patel, head of global development, immunology and inflammation at Sanofi, said: ‘While we have made significant strides in the treatment of atopic dermatitis, there are patients who are still in need of new options.
‘We believe that the results from this Phase 2b study with amlitelimab support our perspective that targeting OX40-Ligand has the potential to provide a first and best-in-class treatment option that addresses type 2 and non-type 2 inflammation to meet the individual needs of people living with atopic dermatitis and other chronic inflammatory diseases.
‘We look forward to advancing into a larger Phase 3 clinical development programme and continuing to drive momentum in our immunology pipeline to deliver first or best-in-class treatments.’
Using opioid analgesics for patients with acute low back or neck pain offers no significant pain relief advantage compared to placebo, according to a recent randomised trial.
The OPAL trial, recently published in The Lancet, investigated the efficacy and safety of a short course of an opioid analgesic for acute low back and neck pain. Patients with 12 weeks or less of low back or neck pain (or both), and of at least moderate pain severity, received guideline-recommended care plus either oral oxycodone (up to 20 mg daily) or identical placebo for no longer than six weeks. The primary endpoint was pain intensity at six weeks measured on a 0-10 scale.
A total of 347 participants with a mean age of 44.7 years (51% male) were randomly assigned to one of the two groups, with 174 given the opioid oxycodone.
When assessed after six weeks, the mean pain score was 2.78 in the opioid group compared to 2.25 in the placebo group (adjusted mean difference = 0.53, 95% CI -0.00 to 1.07, p = 0.051), a difference that was not statistically significant and, if anything, favoured placebo. In fact, more patients in the opioid group had ongoing pain at weeks 26 and 52 than in the placebo group.
There were no significant differences in pain relief, and a similar proportion of participants in the two groups experienced at least one adverse event (p = 0.30).
The researchers concluded that there was no good evidence that opioids should be prescribed for people with acute non-specific low back or neck pain.
Clinical guidelines suggest that opioid drugs should only be used for acute low back pain in carefully selected patients and for a short duration. Despite this recommendation, an Australian study revealed that, among 6,393 patients with a diagnosed lumbar spine condition, 69.6% received opioids. For general acute pain, IV paracetamol is known to provide similar relief to NSAIDs and opiates.
3rd July 2023
Desmopressin may improve the outcomes for patients who experience a spontaneous intracerebral haemorrhage (ICH) while taking antiplatelet drug therapy, according to the results of a small feasibility study.
Among patients taking antiplatelet therapy, the cumulative incidence of an ICH at one year sits at 22.5%. Following an ICH, many patients are left dependent on others and, currently, there are no proven effective drug treatments.
However, desmopressin could help ICH patients, especially given that the drug reduces bleeding and transfusion requirements in people with platelet dysfunction, or with a history of recent antiplatelet drug administration, undergoing cardiac surgery.
In a study published in The Lancet Neurology, a team of UK researchers set out to examine the feasibility of randomising patients on antiplatelet drug therapy with spontaneous ICH to desmopressin or placebo to reduce the effect of their antiplatelet drug.
The team undertook a phase 2, randomised, placebo-controlled, feasibility trial in adult patients taking antiplatelet drugs who experienced an ICH with stroke symptom onset within 24 hours of randomisation. Individuals received a single dose of intravenous desmopressin 20 μg or matching placebo.
As this was a feasibility trial, the primary outcome was measured as the number of eligible patients randomised and the proportion of eligible patients approached. In addition, several secondary outcomes were examined 90 days after randomisation, including assessments of disability, functional independence, cognition and quality of life.
Of 1,380 patients screened, 13% were potentially eligible and 32% of these were approached. Overall, 54 (31% of those eligible) consented and were randomised equally between desmopressin and placebo.
At day 90, there was no significant difference in functional status based on the full modified Rankin Scale score (odds ratio, OR = 0.74, 95% CI 0.29 – 1.93) or for death and dependency (OR = 0.49, 95% CI 0.15 – 1.61).
Serious adverse events occurred in a similar proportion of patients (44% vs 48%, desmopressin vs placebo), which included expansion of the haemorrhagic stroke and pneumonia.
Based on these findings, the researchers concluded that a definitive trial is now required to determine whether desmopressin improves outcomes in patients with an ICH who are taking antiplatelet therapy.
The incidence of type 1 diabetes in children and adolescents significantly increased during the Covid-19 pandemic compared to pre-pandemic levels, according to a recent meta-analysis.
Suggestions of an association between infection with Covid-19 and a new diagnosis of type 1 and type 2 diabetes emerged early in the pandemic. However, the causal mechanisms responsible are unclear. Moreover, understanding the nature of any relationship between diabetes and infection with Covid-19 is complicated by several factors, including the seasonality of diagnoses and evidence of an estimated 3.4% annual increase in the incidence of the condition.
In trying to untangle the potential association between the rise in cases of type 1 diabetes and infection with Covid-19, a team of Canadian researchers, writing in JAMA Network Open, compared the incidence rates of paediatric diabetes during and before the Covid-19 pandemic.
The team undertook a systematic review and meta-analysis, searching medical databases using subject headings and text terms related to Covid-19, diabetes and diabetic ketoacidosis (DKA). Studies were included in the analysis if they reported differences in incident diabetes cases during, compared with before, the pandemic among individuals under 19 years of age.
Researchers set the primary outcome as the change in the incidence rate of paediatric diabetes from before to during the pandemic. The secondary outcome was the change in the incidence rate of DKA among youths with new-onset diabetes during the pandemic.
In total, 42 studies with 102,984 incident diabetes cases were included in the analysis.
The type 1 diabetes incidence rate was 14% higher during the first year of the pandemic compared with the pre-pandemic period (incidence rate ratio, IRR = 1.14, 95% CI 1.08 – 1.21). Moreover, the rate increased further during months 13 to 24 of the pandemic compared to the pre-pandemic level (IRR = 1.27, 95% CI 1.18 – 1.37). There was also a higher incidence of DKA compared to before the pandemic (IRR = 1.26, 95% CI 1.17 – 1.36).
The underlying mechanisms responsible for this observed increase are unclear and require further investigation.
29th June 2023
AEF0117 is a novel signalling-specific inhibitor of the cannabinoid type 1 receptor which, in a double-blind, placebo-controlled randomised trial, reduced the positive subjective effects of cannabis in patients with cannabis use disorder (CUD).
Although most people using cannabis do not have problems related to its use, between 10% and 30% of users report symptoms consistent with CUD. Most of the effects of cannabis linked to the psychoactive component tetrahydrocannabinol (THC) are mediated via interaction with the type 1 cannabinoid receptor, which has become a promising drug target.
Currently, there are no effective treatments for CUD, but AEF0117 appears to selectively inhibit a subset of intracellular effects resulting from THC binding without modifying behaviour. Moreover, AEF0117 potently inhibits the effects of THC without producing any psychoactive effects.
Writing in the journal Nature Medicine, researchers initially found that AEF0117 decreased cannabinoid self-administration and THC-related behavioural impairment without producing significant adverse effects. Based on these initial phase 1 findings, they went on to test the drug in patients with CUD.
In a randomised, double-blind, placebo-controlled, crossover phase 2a trial, researchers randomised volunteers with CUD to two ascending-dose cohorts (0.06 mg and 1 mg), with the drug given once daily.
The primary outcome was the effect of AEF0117 on the positive subjective effects of cannabis, measured using a visual analogue scale. The results showed that the drug significantly reduced these positive subjective effects by 19% with the 0.06 mg dose and by 38% with the 1 mg dose, compared to placebo (p < 0.04).
In addition, the 1 mg dose of AEF0117 also reduced cannabis self-administration (p < 0.05). The drug appeared to be well tolerated and did not precipitate cannabis withdrawal.
A study by German researchers has shown that ultrahigh resolution coronary CT angiography can enable clinicians to diagnose coronary artery disease in high-risk patients prior to transcatheter aortic valve replacement.
Patients with severe aortic stenosis who are suitable for transcatheter aortic valve replacement (TAVR) often have co-existing coronary artery disease (CAD). Moreover, guidelines recommend that patients suitable for TAVR undergo an assessment for CAD. Although coronary CT angiography can be used to assess whether a patient has CAD, it often overestimates the extent of disease in high-risk patients. With the introduction of ultrahigh resolution coronary CT angiography, it may be possible to identify CAD in these high-risk patients.
Writing in the journal Radiology, the German team set out to determine the diagnostic accuracy of the new ultrahigh resolution coronary CT angiography for the detection of CAD compared with the reference standard of invasive coronary angiography (ICA).
The team examined participants with severe aortic valve stenosis who had a clinical indication for transcatheter aortic valve replacement. All participants underwent ultrahigh-resolution photon-counting CT angiography and invasive coronary angiography, with the latter being part of their clinical care. Image quality was assessed on a 5-point Likert scale, where 1 was excellent and 5 non-diagnostic.
In addition, clinicians, who were blind to the ICA findings, assessed for the presence of CAD, defined by a 50% or greater stenosis. The ultrahigh resolution CT and ICA findings were compared using area under the receiver operating characteristic curve (AUC).
A total of 68 patients (mean age 81 years, 52.9% female) with severe aortic valve stenosis and an indication for TAVR were included in the comparative analysis. The median image quality score was 1.5 with most segments rated as either good or excellent quality.
The ultrahigh resolution CT AUC for the detection of CAD was 0.93 (95% CI 0.86 – 0.99) at the participant level, 0.94 (95% CI 0.91 – 0.98) at the vessel level and 0.92 (95% CI 0.87 – 0.97) at the segment level. This gave rise to a 96% sensitivity, an 84% specificity and 88% accuracy for the detection of coronary artery disease.
Although these results seem impressive, the researchers called for confirmatory research with more subjects to improve the generalisability of their findings, as well as larger trials with patient-related end points to determine the potential clinical benefits of photon-counting CT.