15th December 2021
This is the conclusion from a study by a team from the Department of Internal Medicine, The Ohio State University Wexner Medical Center, Ohio, United States.
In patients with a viral infection, the inflammatory response leads to the production of several volatile compounds which can serve as biomarkers for the disease.1 These volatile compounds can be identified with electronic detection devices (so-called ‘e-noses’) which contain an array of sensors, each optimised for a different chemical range. For example, an e-nose may make use of nanoparticle sensors containing organic molecules with functional groups that react with volatile organic compounds in a breath sample, leading to a volume change such as swelling or shrinking which can subsequently be detected by a sensor.2
But rather than relying on a multiplex array for the detection of volatile compounds in breath, the US team developed a unique, single, selective chemo-sensor. With the current breathalyser, once breath enters the device the sensor interacts with the gas molecules, producing an electrical signal. The sensor itself is composed of a catalytically active semiconductor material based on tungsten trioxide, which specifically targets nitric oxide and ammonia in breath. In fact, the sensor was developed in 2008 and was shown to detect minute levels of nitric oxide gas even in the presence of interfering volatile organic compounds.3
For their study, the team recruited participants with a positive PCR test for COVID-19 who were admitted to an intensive care unit and receiving mechanical ventilation. Breath samples were collected on days 1, 3, 7 and 10 of the study or until patients no longer required mechanical ventilation.4
A total of 46 patients were included, 23 of whom were COVID-19 positive; the remainder were all negative and served as a control group. Among the COVID-19 positive cohort, the median age was 61 years (61% male). Using the breathalyser, the authors identified three typical analytical patterns, which they termed the NO pattern, the NH3/O2 pattern and the omega pattern; the last of these was specific to patients with COVID-19 and arose from the interaction between nitric oxide, ammonia and oxygen in the exhaled samples. The omega pattern was detectable within 72 hours of the onset of respiratory failure and was identified on day 1 in 14 of those with confirmed COVID-19 but only 4 of those in the control group, a difference that was statistically significant (p < 0.0001).
Interestingly, the authors also reported that as patients’ clinical symptoms resolved, the omega pattern disappeared, with patients generally transitioning to the NO pattern over the course of their illness.
The authors determined that the omega pattern had a sensitivity of 88% and a specificity of 83% for COVID-19 on day 1 and concluded that future studies are needed to determine which other diseases or infections might benefit from their technology.
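For readers less familiar with these metrics, sensitivity and specificity fall directly out of a 2×2 confusion matrix. The sketch below uses hypothetical counts chosen only to reproduce the quoted 88% and 83% figures; they are not the study’s raw data.

```python
# Illustrative only: hypothetical confusion-matrix counts, not the study's data.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate: detected disease / all diseased
    specificity = tn / (tn + fp)   # true-negative rate: cleared healthy / all healthy
    return sensitivity, specificity

# Hypothetical breath-test cohort yielding the reported 88% / 83% figures
sens, spec = sensitivity_specificity(tp=22, fn=3, tn=20, fp=4)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.88, specificity=0.83
```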
This was the finding of a study by a team from Barts Cancer Institute, London, UK. Docetaxel is recommended as a treatment option for men with metastatic hormone-sensitive prostate cancer (mHSPC) as it improves overall survival (OS) and time to disease progression.1 The drug is also effective in metastatic castration-resistant prostate cancer (mCRPC).2
However, not all patients respond to chemotherapy with docetaxel and some become treatment-resistant. The team from Barts wondered whether it was possible to identify in advance those patients who would become treatment-resistant. They turned their attention to circulating tumour cells (CTCs), a subset of cells in the blood which serve as a metastatic seed for cancer as it spreads. The researchers collected blood samples from patients with both mHSPC and mCRPC and used the Parsortix system to separate the CTCs from the blood sample.3
A total of 56 men with advanced prostate cancer were included and a total of 205 samples were obtained from 44 mHSPC and 12 mCRPC patients. CTCs were detected in 61% of samples, and 75% of patients with progressive disease (PD) had a positive score prior to docetaxel chemotherapy.
Analysis of pre-chemotherapy CTCs revealed a significant inverse correlation of CTC parameters with OS and progression-free survival (PFS). A positive CTC score and, in particular, the presence of several subtypes of CTC (e.g., cytokeratin-positive, CK+) had the most significant correlation with overall survival. For instance, in mCRPC patients, the correlation of CTC score with OS was -0.85 (p = 0.0095), as was that between a high total CTC number and OS (-0.69, p = 0.031). In addition, the number of CK+ CTCs was significantly correlated with OS in both mHSPC and mCRPC (-0.46, p = 0.0013).
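Inverse correlations like those quoted above are typically rank (Spearman) correlations between CTC counts and survival times. A minimal pure-Python sketch of the calculation, using hypothetical data rather than the study’s:

```python
def rank(values):
    """1-based average ranks; tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: higher CTC counts paired with shorter survival
ctc_counts = [0, 1, 2, 5, 9]
os_months = [30, 24, 18, 12, 6]
print(round(spearman(ctc_counts, os_months), 2))  # -1.0: perfectly inverse
```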
Kaplan-Meier analysis of pre-chemotherapy CTCs showed that both a positive CTC score and the presence of more than one CK+ CTC significantly predicted poor OS across both disease states (p = 0.011), as well as OS (p = 0.0018) and progression-free survival (p = 0.024) in mCRPC alone. In contrast, a count of more than 5 CTCs was the best predictor of poor OS in mHSPC (p = 0.019).
In conclusion, the researchers wrote that ‘These findings provide a promising potential solution to predicting and monitoring DOC resistance using a non-invasive and easily repeatable system.’
The use of whole genome sequencing was found to enable a definite or probable genetic diagnosis in 31% of patients with suspected mitochondrial disorders, according to a study published in the BMJ.1 Mitochondria are the fundamental energy-producing system of the cell, and disorders of the mitochondrial system can affect several organ systems with high energy demands, including the brain, nerves and muscles.2 In fact, such disorders are estimated to affect around 1 in 4300 adults.3 Mitochondria have their own DNA (mtDNA), although the majority of proteins which make up the structure of mitochondria are encoded in the nucleus (nDNA). A mitochondrial disorder can be defined as any disorder which affects the structure or function of the mitochondria and can arise from mutations in either mtDNA or the nuclear genome.
Current methods for diagnosing mitochondrial diseases rely on biochemical screening of blood, urine or cerebrospinal fluid, followed by next generation sequencing of mtDNA and nDNA.4 While sequencing the protein coding regions of all genes (or exome sequencing) can be effective at identifying mitochondrial genetic diagnoses, this approach fails to identify 40% of cases.5
For the present study, a team from the Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK, set out to determine whether whole genome sequencing could define the molecular basis of suspected mitochondrial disorders. Using the 100,000 Genomes Project, the team recruited patients referred for further testing with a suspected multi-system, progressive disorder involving the central nervous system, the neuromuscular system or both. DNA was extracted from a blood sample and sequenced.
A total of 345 individuals with a median age of 25 (54% female) from 319 different families were included and had their whole genome sequenced. The researchers identified a definite or probable genetic diagnosis in 31% (98/319) of families. A definite genetic diagnosis was made in 28% of families, which included 14 diagnoses (4% of the 319) that provided only a partial explanation of the clinical symptoms. In total, 95 different genes were implicated and, interestingly, of the 104 families given a diagnosis, only 38% of diagnoses involved genes known to cause a primary mitochondrial disease (6 mtDNA and 30 nDNA). The remaining 63% had a genetic diagnosis based on non-mitochondrial genes.
The authors commented that the use of whole genome sequencing as a first-line approach to genetic testing has not been fully explored and estimated that around 90% of mitochondrial disorders could be detected using this approach. They suggested that whole genome sequencing is a useful diagnostic test for those with mitochondrial diseases and should be offered early in the diagnostic patient pathway.
Globally, colorectal cancer (CRC) is the third most common cancer in men and the second most common in women, with over 1.8 million cases diagnosed in 2018.1 CRC is diagnosed by pathologists after examination of a full-scale, digitised whole slide image (WSI). However, a major diagnostic challenge is the complexity of these WSIs; in particular, their very large size (>10,000 x 10,000 pixels), complex shapes, textures and histological changes after staining all make the diagnosis both complex and time-consuming. In addition, a recognised international shortage of pathologists, combined with increasing levels of cancer, necessitates the introduction of supportive systems to help manage the workload. One increasingly used form of support is artificial intelligence, based, for instance, on deep learning (DL). In fact, DL systems have already helped in the classification of CRC types,2 detection of tumour cell type3 and even prediction of survival outcomes.4
Using a supervised learning (SL) recognition system for CRC, a team from the Department of Biomedical Engineering, Hunan, China, developed an AI system which was highly accurate for the diagnosis of CRC.5 Nevertheless, an accepted limitation of their system was that it was built using patches of scans previously labelled by pathologists; in practice, scans are often not comprehensively labelled, which reduces the utility of the system in a real-world setting. In an effort to refine their system, the researchers investigated the value of semi-supervised learning, which does not rely as heavily on labelled data.
For their current study,6 the team used 13,111 WSIs collected from 8803 patients to train and test the semi-supervised learning (SSL) model. It was evaluated by comparison with an SL model for patch level recognition, with patient level data and with experienced pathologists. In trying to further demonstrate the value of their SSL model, the team used samples of lung cancer and lymphoma and used the area under the receiver operating characteristic curve (AUC) to assess the performance of their model.
When comparing the SSL and SL model for patch-level recognition, the SSL model was superior, with the corresponding AUC values being 0.91 vs 0.79 (SSL vs SL, p = 0.017). Again, with patient-level data, the SSL model was superior with AUCs of 0.97 vs 0.81 (SSL vs SL, p = 0.002). Finally, the SSL model was comparable to that of six experienced pathologists, with AUCs of 0.97 vs 0.96 (SSL vs pathologists). Using both lung and lymphoma samples the SSL model was able to achieve similar performance to the SL model.
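The AUC figures above summarise how well each model’s scores separate positive from negative cases: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch of that pairwise-comparison definition, with hypothetical scores rather than the study’s data:

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a positive case outranks a negative one
    (ties count as half). Equivalent to the Mann-Whitney U statistic."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for cancerous vs benign patches
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.4, 0.3, 0.2]
print(auc(pos, neg))  # 0.9375: 15 of 16 positive/negative pairs ranked correctly
```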
In discussing these findings, the authors commented on how their SSL model was able to outperform the SL model at patch-level recognition, even when there was only a small amount of labelled data. They added that the patient level data suggested that the SSL model might not require as much labelled data as the SL model.
In their conclusion, they felt that future work needs to focus on annotation and more effective use of unlabelled data to improve the efficiency of AI systems.
A rare disease is defined as one affecting fewer than 1 in 2000 people in Europe1 and while much progress in recent years has enabled the identification of the genetic basis of many rare diseases, the underlying cause remains to be determined in at least half of these diseases. In an attempt to better understand the genetic basis for rare diseases, the 100,000 Genomes Project was established in the UK in 2017.2
The 100,000 Genomes Project trialists are a group of UK researchers from various locations who have performed a pilot study to investigate the value of whole genome sequencing in patients with an undiagnosed rare disease.3 The investigators included patients whose rare disease remained undiagnosed after receiving usual care in the National Health Service. For the genomic sequencing, the researchers investigated both coding and non-coding regions of DNA to identify ‘novel diagnostic variants’ in genes that had not been previously described in the literature.
The pilot enrolled 4660 participants, comprising 2183 probands (i.e., the first individual in a family to undergo genetic testing) with a median age at recruitment of 35 years, of whom 52% were male, and 2477 family members. A wide range of rare diseases was included, involving neurological, ophthalmological and dermatological conditions as well as intellectual disability and metabolic disorders.
Based on whole genome sequencing, the researchers were able to make a genetic diagnosis in 25% (535/2183) of probands. Of these, 60% (322/535) were made on the basis of coding SNVs (single nucleotide variants) or indels (insertions and deletions) in the applied panels, while a further 26% (141/535) were established based on coding SNVs or indels affecting well-established disease genes outside of the applied testing panels. Furthermore, 10% of the probands had a variant of unknown significance in genes determined by geneticists at the recruiting site. The clinical categories with the highest diagnostic yield were neurological or developmental disorders (653), ophthalmological disorders (321) and renal or urinary tract disorders (175).
The researchers identified that 35% of genetic diseases were considered to have a monogenic cause and 11% a more complex cause. A further important finding was that 25% of genetic diagnoses (i.e., 134/535) were reported by clinicians to be of immediate clinical actionability and of value to patient care, for example through eligibility for a gene-replacement trial, whereas only 0.2% were described as being of no benefit.
The authors concluded that their study supported the case for genomic sequencing in the diagnosis of certain, specific rare diseases and hoped that these findings would enable other healthcare systems to consider genome sequencing for patients with rare diseases.
Cognitive decline is a key feature of neurodegenerative diseases and is often the first symptom experienced by patients, developing slowly over time. However, biomarkers that are elevated at the “pre-cognitive” decline stage could theoretically serve as a means of identifying those at risk of future decline.
The use of biomarkers for the early detection of disease has advanced in recent years, for example in Alzheimer’s disease,1 and one area of research has focused on microRNAs: small (19–22 nucleotide), non-coding RNA molecules which regulate gene expression and protein homeostasis by binding to a target mRNA.2 A further advantage of microRNAs is their stability in cell-free environments; they have also been implicated in the cause of Alzheimer’s disease3 and cognitive disturbances.4
For the present study,5 the German team used a maze test in both young and older mice as a model for age-associated memory decline. The results showed that the older mice displayed behaviour indicative of cognitive decline. When analysing blood samples, it became clear that elevated levels of three microRNAs were present only in the older mice. Whether these markers were also present in humans was initially unclear, but using blood samples from adults with mild cognitive impairment, the researchers found a significant elevation of the same three-microRNA signature, which was absent in matched healthy controls.
In an attempt to strengthen the association between the three-microRNA signature and cognitive decline, the German team used blood samples from patients in a study examining subjective cognitive decline. Interestingly, they found that patients with initial mild cognitive impairment who later developed Alzheimer’s disease, had a significantly higher expression of the three-microRNA signature. Further confirmation of the importance of the microRNA signature came from an analysis of cerebral spinal fluid (CSF) of those with cognitive impairment which also demonstrated significantly higher levels, confirming that the biomarker was present in both blood and in CSF.
Given this elevation of the three-microRNA signature, the team wondered whether inhibiting the three microRNAs might reduce the degree of cognitive decline. Returning to the mice, the team injected an inhibitory mix into both older and younger animals and found that the older mice displayed an improved ability to escape the maze, comparable to the younger animals. Finally, using a mouse model of Alzheimer’s disease, the team also demonstrated that the inhibitory mix appeared to ameliorate memory impairment in these animals.
Although these were preliminary findings, the authors concluded that the screening approach ‘could improve the early detection of individuals at risk for pathological cognitive decline and increase the chance for efficient therapeutic intervention’.
The National Institute for Health and Care Excellence (NICE) has given the green light to crizanlizumab for the management of sickle cell crises in people with sickle cell disease.
The term sickle cell disease1 describes a group of inherited red blood cell disorders that affect haemoglobin and for which, according to the World Health Organization, approximately 5% of the world’s population carries trait genes.2 Sickle cell disease affects around 1 in 500 African American children and 1 in 36,000 Hispanic American children and is characterised by a change in the shape of red blood cells, which become more ‘sickle-like’, reducing the flexibility of the cells.3 These sickle-like red blood cells can lead to recurrent and unpredictable blockage of small blood vessels,4 producing ischaemic pain, referred to as vaso-occlusion (VOC) or sickle cell crises. Activated and adherent leukocytes are the likely drivers of VOC in collecting venules, and this process appears to be initiated by a transmembrane protein, P-selectin.5 Studies have shown that blockade of P-selectin appears to improve blood flow,6 and thus reduce the risk of VOC and sickle cell-related pain crises.
The monoclonal antibody crizanlizumab binds to P-selectin, thereby blocking its action. The approval by NICE was based on data from the SUSTAIN trial.7 This double-blind, randomised, placebo-controlled, Phase II trial assigned 198 patients to low-dose crizanlizumab (2.5 mg/kg body weight), high-dose crizanlizumab (5.0 mg/kg) or placebo, administered intravenously 14 times over a period of 52 weeks. The primary outcome was the annual rate of sickle cell-related pain crises with high-dose crizanlizumab versus placebo, defined for the study as acute episodes of pain caused by a VOC that resulted in a visit to a medical facility and treatment with pain relief medication. The results showed that the median rate of crises per year was 1.63 with high-dose crizanlizumab versus 2.98 with placebo (p = 0.01). In addition, the median time to the first crisis was significantly longer with high-dose crizanlizumab than with placebo (4.07 vs 1.38 months, p = 0.001), as was the median time to the second crisis (10.32 vs 5.09 months, p = 0.02). The overall incidence of serious adverse events was comparable across the three arms.
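As a back-of-the-envelope check (our calculation, not a figure reported by the trial), the difference in median annual crisis rates corresponds to roughly a 45% relative reduction:

```python
# Relative reduction in the median annual crisis rate, using the SUSTAIN
# figures quoted above: 1.63 crises/year with high-dose crizanlizumab
# versus 2.98 with placebo.
rate_drug, rate_placebo = 1.63, 2.98
relative_reduction = (rate_placebo - rate_drug) / rate_placebo
print(f"{relative_reduction:.0%}")  # 45% lower median annual crisis rate
```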
NICE recognised that a limitation of the trial was the absence of longer-term data on crizanlizumab, for example on mortality or on those who did not seek medical advice for VOCs. There were also no data on the prolonged treatment benefit or on what happens when patients stop taking crizanlizumab.
While the appraisal document8 concluded that “crizanlizumab is not recommended for routine use in the NHS”, the drug could be used where specific criteria are met. Thus, guidance notes that “crizanlizumab is recommended as an option for preventing recurrent sickle cell crises (vaso-occlusive crises) in people aged 16 or over with sickle cell disease only if the conditions in the managed access agreement are followed.”
Provision of interferon before transplantation to patients with acute myeloid leukaemia (AML) who were either not in remission or treatment-resistant led to a reduction in the rate of disease relapse at six months, which was sustained for at least 12 months after the transplant.
This was the finding from a clinical trial conducted by a team from the Division of Haematology-Oncology, University of Michigan, US.1 Among patients with AML, the most potent therapy is haematopoietic stem cell transplantation (HSCT). Nevertheless, HSCT is deemed to be more toxic than both chemo- and immunotherapy and is thus considered an option only for those in whom the estimated survival time and quality of life exceed those expected with other treatments.2 The rationale for HSCT after chemo- or radiotherapy is that the donor T cells, either alone or possibly in combination with other immune cells, help to eliminate any residual leukaemia cells in the recipient; this response is known as the graft-versus-leukaemia effect.3 However, despite the expectation that HSCT is curative, relapse is common, with up to 50% of patients experiencing a relapse, and 2-year survival rates are below 20%.4
For the present study, the US team conducted a Phase I/II trial to evaluate the safety and efficacy of a subcutaneous, long-acting formulation of type 1 interferon in reducing relapse among high-risk AML patients (i.e., those not in remission) receiving HSCT. The interferon was administered the day before HSCT, followed by three further injections every 14 days. In the first part of the trial, the team determined the maximum tolerated dose of interferon, which was 180 micrograms. For the second part of the trial, the primary efficacy endpoint was the cumulative incidence of relapse at six months post-HSCT. The team also considered overall survival (OS) and leukaemia-free survival (LFS) as secondary outcomes.
A total of 36 patients with a median age of 60 years (39% female) were included in the full study, although data for 31 were reported in the Phase II analysis. The cumulative incidence of relapse was 39% (95% CI 24–58%), which was sustained at 1 year. OS in Phase II was 55% at 6 months and 33% at 2 years. In addition, LFS was 48% at 6 months and 28% at 2 years, and there were no differences in either OS or LFS by age or donor type. The authors also reported no apparent safety concerns from using interferon.
Commenting on their findings, the authors noted that, with prior studies indicating a relapse incidence of approximately 60%, interferon appeared to reduce relapse in high-risk patients with AML after HSCT by around 20 percentage points. Furthermore, OS also appeared to be improved at 33% compared with previously reported figures of between 14 and 26%, although this would require confirmation in further studies. They concluded that their data would require validation in a prospective randomised trial.
Leukaemia can be either acute (i.e., fast-growing) or chronic (slow-growing), and there are two main subgroups of acute leukaemia: acute lymphoblastic leukaemia (ALL), frequently diagnosed in children and young adults, and acute myeloid leukaemia (AML), which is the most common type in adults.1
Treatments for AML have increased in recent years, although the cancer has a number of phenotypes, and both primary and secondary drug resistance is a problem for many patients.2 One form of treatment for promyelocytic leukaemia, a subtype of AML, is trivalent arsenic (arsenic trioxide, ATO), an apoptosis-inducing agent.3 However, while ATO appears to be effective, a potential difficulty is that the intracellular concentration of the ion within cancerous cells has been found to vary between different forms of leukaemia, making it a less reliable treatment in practice.4
In previous work, a team from China had shown that one common feature of different leukaemia cell lines is over-expression of a cell surface receptor termed human transferrin receptor 1 (CD71), which facilitates the supply of iron into cells upon binding iron-loaded ferritin. The ferritin molecule itself is spherical and can serve as a nanocarrier, delivering encapsulated iron oxide particles into peripheral tumours as a treatment for cancer.5 For the latest study, the Chinese team inserted trivalent arsenic into the ferritin carrier and tested whether this arsenic–ferritin complex was able to deliver the ion into leukaemia cells.6
In their study, the team used animal models to confirm that CD71 was preferentially over-expressed on the surface of cells from different forms of leukaemia (i.e., AML, ALL and chronic myeloid leukaemia) in comparison with normal lymphocytes. Secondly, they explored whether the trivalent arsenic could be easily released from the ferritin complex once bound and endocytosed by leukaemia cells. Using HL60 cells, the half-maximal inhibitory concentration of the arsenic–ferritin complex was 4.9-fold lower than that of conventional arsenic trioxide. Once this had been confirmed, the team measured the concentration of arsenic within cells derived from 167 patients with different leukaemias and showed that the arsenic–ferritin complex released the trivalent ion, leading to its accumulation within leukaemia cells.
In their conclusion, the authors noted that the results demonstrated how CD71 was a suitable target because it was preferentially over-expressed in all the forms of leukaemia examined, even at different stages of the disease.
They suggested that the arsenic–ferritin complex has the potential to become a useful therapy that requires further clinical evaluation in different forms of leukaemia.
A personalised approach to therapy improves progression-free survival (PFS) in patients with advanced and aggressive haematological cancers. This was the conclusion of a small study by researchers from the Department of Medicine, Division of Haematology, University of Vienna, Austria.
The purpose of precision medicine is to ensure that treatment is individualised and, in recent years, molecular profiling has enabled the gathering of treatment-related information to guide patient management. In the cancerous process, a normal cell undergoes a series of genomic changes leading to the production of aberrant proteins which serve as a target for molecular agents. In fact, evidence suggests that matching therapy to a genetic aberration is associated with a higher overall response rate1 and this treatment matching has already been successfully employed in some haematological cancers such as the BRAF inhibitor, vemurafenib, in hairy cell leukaemia.2
For the present study,3 the Austrian researchers took a slightly different approach and used a technique termed ‘image-based single-cell functional precision medicine’ (scFPM). This involves removing cells from a biopsy sample and then analysing the response of target proteins produced by these cells to a range of different anti-cancer drugs, thereby helping to guide treatment-related decisions.
Patients included had aggressive haematological cancers and had either received at least two standard courses of treatment prior to entry or had no standard therapy options. The primary outcome of interest was the progression-free survival (PFS) ratio, defined as PFS on scFPM-guided therapy divided by PFS on the previous treatment, with a ratio greater than 1.3 considered beneficial.
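In code form, the endpoint calculation is trivial. The sketch below uses hypothetical PFS durations (chosen to illustrate the study’s reported median ratio of 3.4, not taken from its data):

```python
# Sketch of the trial's primary endpoint: the PFS ratio, i.e. PFS on
# scFPM-guided therapy divided by PFS on the patient's previous line of
# treatment, with a ratio > 1.3 counted as a benefit. Values hypothetical.

def pfs_ratio(pfs_guided_days, pfs_previous_days, threshold=1.3):
    """Return (ratio, benefit) for one patient."""
    ratio = pfs_guided_days / pfs_previous_days
    return ratio, ratio > threshold

ratio, benefit = pfs_ratio(pfs_guided_days=340, pfs_previous_days=100)
print(ratio, benefit)  # 3.4 True
```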
A total of 143 patients were enrolled, of whom 56 received treatment based on the results of scFPM and 20 received physician-directed treatment and hence served as a control. In the cohort of 56 patients, the median age was 64 years (63% male) and the median follow-up time for all patients was 718 days. Haematological cancers included acute myeloid leukaemia (25%), aggressive B-cell non-Hodgkin lymphoma (46%) and T-cell non-Hodgkin lymphoma (28%) all of which were aggressive and without a standard treatment option.
Thirty (54%) of the 56 patients met the primary endpoint of a PFS ratio > 1.3, with a median PFS ratio of 3.4. In other words, patients treated with scFPM-guided therapy experienced a more than three-fold longer PFS compared with their previous treatment. In addition, 13 (23%) of patients were progression-free after 12 months on scFPM-guided therapy, compared with only three on their previous treatment, and the objective response rate on scFPM-guided therapy was 55%.
In discussing their findings, the authors noted that scFPM-guided therapy can be easily incorporated into the clinical workflow and was of benefit to patients with late-stage blood cancer.
They concluded that this initial study has paved the way for prospective randomised trials comparing scFPM-guided therapy with comprehensive genomic profiling as a physician’s choice.