Take a look at a selection of our recent media coverage:
14th August 2023
Use of machine learning has enabled scientists to accurately predict four subtypes of Parkinson’s disease based on images of patient-derived stem cells.
Parkinson’s disease is a neurodegenerative condition that affects both movement and cognition. Symptoms and progression vary based on the underlying disease subtype, although it has not been possible to accurately differentiate between these subtypes.
This may well change in the near future, as a team based at the Francis Crick Institute and UCL Queen Square Institute of Neurology, together with the technology company Faculty AI, has shown that machine learning can accurately predict subtypes of Parkinson’s disease using images of patient-derived stem cells.
The work, which was published in the journal Nature Machine Intelligence, generated a machine learning-based model that could simultaneously predict the presence of Parkinson’s disease and its primary mechanistic subtype in human neurons.
In the study, researchers generated stem cells from patients’ own cells and chemically created four different subtypes of Parkinson’s disease: two involving pathways leading to a toxic build-up of the protein α-synuclein and two involving pathways leading to defunct mitochondria. Together, this created a ‘human’ model of the disease.
Next, the researchers imaged these disease models and ‘trained’ the machine learning algorithm to recognise each subtype, after which it was able to predict the particular subtype present.
The machine learning model enabled researchers to accurately distinguish a disease state from a healthy control state.
Classifiers based on quantitative cellular profiles achieved an accuracy of 82%. In contrast, image-based deep neural networks could predict the control state and the four distinct disease subtypes with an accuracy of 95%.
The machine learning-trained classifiers achieved a high level of accuracy across all subtypes using the organellar features of the mitochondria, with an additional contribution from the lysosomes, confirming the biological importance of these pathways in Parkinson’s disease.
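The feature-based arm of this kind of pipeline can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example using scikit-learn: a random forest trained on invented quantitative cellular profiles for a five-class problem (control plus four subtypes), with feature importances standing in for the organellar contributions reported. The feature names and data are placeholders, not the study’s actual pipeline.

```python
# Minimal sketch, with placeholder data, of a classifier trained on
# quantitative cellular profiles (e.g. per-cell mitochondrial and lysosomal
# measurements) to separate control from four disease subtypes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feature_names = ["mito_intensity", "mito_count", "lyso_intensity", "lyso_area"]
X = rng.normal(size=(500, len(feature_names)))   # stand-in cell profiles
y = rng.integers(0, 5, size=500)                 # 0 = control, 1-4 = subtypes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f}")

# Fitted feature importances indicate which organellar features drive the
# classification, analogous to the mitochondrial/lysosomal signal reported.
clf.fit(X, y)
for name, importance in zip(feature_names, clf.feature_importances_):
    print(f"{name}: {importance:.3f}")
```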
James Evans, a PhD student at the Francis Crick Institute and UCL, and co-first author with Karishma D’Sa and Gurvir Virdi, said: ‘Now that we use more advanced imaging techniques, we generate vast quantities of data, much of which is discarded when we manually select a few features of interest.
‘Using AI in this study enabled us to evaluate a larger number of cell features, and assess the importance of these features in discerning disease subtype. Using deep learning, we were able to extract much more information from our images than with conventional image analysis. We now hope to expand this approach to understand how these cellular mechanisms contribute to other subtypes of Parkinson’s.’
Sonia Gandhi, assistant research director and group leader of the Neurodegeneration Biology Laboratory at the Francis Crick Institute, who was also involved in the study, said: ‘We don’t currently have treatments which make a huge difference in the progression of Parkinson’s disease. Using a model of the patient’s own neurons, and combining this with large numbers of images, we generated an algorithm to classify certain subtypes – a powerful approach that could open the door to identifying disease subtypes in life.
‘Taking this one step further, our platform would allow us to first test drugs in stem cell models, and predict whether a patient’s brain cells would be likely to respond to a drug, before enrolling into clinical trials. The hope is that one day this could lead to fundamental changes in how we deliver personalised medicine.’
10th February 2023
Predicting frequent emergency care use by patients with chronic health conditions with machine learning models offers no additional benefit over existing modelling approaches, according to the findings of a retrospective analysis by Canadian researchers.
While not universally accepted, a frequent emergency care user is commonly defined as a patient with at least three visits per year. Such individuals often have complex health needs that are not met through primary care provision; consequently, their condition deteriorates, leading to an emergency department (ED) visit. Although frequent emergency care users (FECU) represent only a small proportion of the overall population seen in an ED, they account for a disproportionately large number of visits.

To date, logistic regression models have been used for analysing frequent users in emergency departments. However, machine learning models, which can incorporate large amounts of both clinical and non-clinical data, have the potential to help identify FECU individuals. Such models have already been used, for example, to predict the need for hospitalisation at the time of triage for children with an asthma exacerbation. Nevertheless, no studies have explored the use of machine learning models – and how these compare with logistic regression – to predict frequent emergency care use in adults with chronic conditions.
In the present study, the Canadian team retrospectively examined the performance of four machine learning models against a logistic regression model for the prediction of frequent ED use in adults with a range of chronic conditions. They identified two cohorts: patients with at least three visits per year and those with at least five. The models used a number of predictor variables, including age, gender and residential area, and focused on chronic diseases such as coronary artery disease, mental disorders, epilepsy and chronic non-cancer pain. The models predicted frequent ED use as a binary outcome, i.e., frequent user or not, and were compared in terms of the area under the curve (AUC), sensitivity and specificity.
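The comparison at the heart of the study can be sketched in a few lines of scikit-learn. The example below is illustrative only: it pits a logistic regression baseline against one machine learning model on a synthetic, imbalanced binary outcome and reports AUC, sensitivity and specificity; the real study used administrative health records and four machine learning models.

```python
# Illustrative comparison, on synthetic stand-in data, of a logistic
# regression baseline against a machine learning model for a binary
# "frequent user" outcome, scored by AUC, sensitivity and specificity.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, weights=[0.9],
                           random_state=0)  # ~10% positives, mimicking cohort 1
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, proba >= 0.5).ravel()
    print(f"{name}: AUC={roc_auc_score(y_te, proba):.3f}, "
          f"sensitivity={tp / (tp + fn):.2f}, specificity={tn / (tn + fp):.2f}")
```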
Frequent emergency care user model predictions
The analysis identified 451,775 ED users, of whom 9.5% had at least three visits per year and 3% at least five.
The AUC for the logistic regression model for frequent users with 3 visits/year was 0.748, giving a sensitivity of 60% and a specificity of 78%. Two of the machine learning models gave a similar AUC (0.749 and 0.744) whereas the random forest model was much worse (AUC = 0.538). For prediction of frequent users with 5 visits/year, the model performance was broadly similar, i.e., machine learning-based models were no better.
Overall, the authors noted that none of the machine learning models outperformed the logistic regression model and that the most important predictor variable was the number of visits in the previous year. They felt that access to more variables could have helped to refine the predictive accuracy of the machine learning models. Nevertheless, they emphasised the need for future work to consider complex non-linear interactions, since in such cases machine learning models were likely to be superior to existing ones.
Citation
Chiu YM et al. Machine learning to improve frequent emergency department use prediction: a retrospective cohort study. Sci Rep 2023.
23rd January 2023
A gradient boosting (XGBoost) model, which made use of both clinical and demographic factors, was able to identify the most important factors associated with metformin failure in patients with type 2 diabetes, according to work by a team of US researchers.
It has been estimated that some 415 million people globally are currently living with diabetes, and the World Health Organization suggests that more than 95% of those with diabetes have type 2 disease. One of the most widely used anti-diabetic agents is metformin, and the drug is suggested as a first-line treatment, either alone or in combination, for those with type 2 disease. Nevertheless, some evidence highlights that monotherapy with metformin is associated with treatment failure. In one study, for instance, the proportion of patients able to achieve an HbA1c below 7% in the first year ranged from 19% to 86% of those started on metformin.

Understanding the factors linked to a response to metformin can therefore help to personalise medicine and allow for an early adjustment of therapy. However, determining which specific factors are relevant to metformin treatment failure from an examination of a patient’s electronic health record (EHR) is challenging.

In an attempt to identify relevant predictors held within the patient’s EHR, the US researchers made use of a machine learning model and turned to a patient cohort with at least one abnormal diabetic result (e.g., elevated fasting glucose or HbA1c) that led to the initiation of metformin treatment. For the purposes of the study, the team defined treatment failure as either an inability to achieve a target HbA1c of < 7% within 18 months of initiation or the addition of other pharmacological agents during the same time frame. Numerous EHR factors were assimilated into the model, including demographics (age, gender, ethnicity), lifestyle factors (smoking status) and body mass index, as well as laboratory findings such as lipid profiles, blood pressure and liver function tests. The predictive value of the model was assessed using the C-index, and individual predictors were assessed using Shapley Additive Explanations (SHAP) values, for which higher values indicate a more important contribution to the model.
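For readers unfamiliar with this modelling approach, the sketch below shows its general shape: an XGBoost classifier fitted to synthetic stand-in data, with per-feature SHAP values used to rank predictor importance. The feature names, data and outcome rule are invented for illustration and are not the study’s actual EHR variables or results.

```python
# Minimal sketch, on synthetic data, of gradient boosting plus SHAP:
# a classifier for treatment failure, with mean |SHAP| per feature used to
# rank predictor importance, mirroring how the study ranked its factors.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "baseline_hba1c": rng.normal(8.0, 1.5, n),   # illustrative predictors
    "age": rng.normal(57, 12, n),
    "bmi": rng.normal(31, 6, n),
})
# Synthetic outcome: failure risk driven mainly by baseline HbA1c.
p = 1 / (1 + np.exp(-(X["baseline_hba1c"] - 8.0)))
y = (rng.random(n) < p).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

# Larger mean |SHAP| values indicate a bigger contribution to the model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
for name, val in zip(X.columns, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: mean |SHAP| = {val:.3f}")
```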
Model predictors and metformin failure
The study included 22,047 patients with a mean age of 57 years (48% female) who were started on metformin. Using the target of an HbA1c of below 7%, the overall metformin failure rate was 33% and the median time to failure was 3.9 months.
When the XGBoost model included baseline values of HbA1c, age, gender and ethnicity, it had a high discrimination performance for the risk of metformin failure (C-index = 0.73, 95% CI 0.72 – 0.74). A total of 15 different influential factors were identified that impacted on metformin failure, the most important of which was the baseline HbA1c value (SHAP value = 0.76). In contrast, factors such as age (SHAP = 0.016) and body mass index (SHAP = 0.041) were less important. Nonetheless, incorporation of all 15 factors did improve the model’s performance (C-index = 0.745, 95% CI 0.73 – 0.75, p < 0.0001).
The authors concluded that although baseline HbA1c was the most important factor in metformin failure, adding other important and readily available variables to the model improved its performance. They suggested that it was therefore possible to easily identify patients most at risk of metformin failure who would benefit from closer monitoring and earlier treatment modification.
Citation
Bielinski SJ et al. Predictors of Metformin Failure: Repurposing Electronic Health Record Data to Identify High-Risk Patients. J Clin Endocrinol Metab 2023.
24th October 2022
Machine learning diffusion tensor imaging models have the potential to screen for brain changes associated with the presence of sleep apnoea.
Two machine learning diffusion tensor imaging models were able to successfully distinguish between healthy controls and patients with obstructive sleep apnoea, according to a study by US researchers from California.
Obstructive sleep apnoea (OSA) is an extremely common condition with a 2019 study estimating that globally, 936 million adults aged 30-69 years have mild to severe disease.
OSA leads to a significant impairment in quality of life related to physical functioning, as well as causing daytime sleepiness, decreased learning skills and neuro-cognitive deficits that include impaired episodic memory, executive function, attention and visuospatial cognitive functions.
The gold standard diagnostic test for OSA is overnight polysomnography, which requires a dedicated sleep laboratory and trained staff to interpret the results. In contrast, magnetic resonance imaging (MRI) and, in particular, diffusion tensor MRI have already been used to reveal that global brain mean diffusivity values are significantly reduced in OSA compared with controls.
The increased use of machine learning models with various imaging modalities, led the US researchers to wonder if a machine learning diffusion tensor imaging model might be able to detect the brain changes associated with OSA. After all, this approach had already been successfully used to identify major depressive disorder.
For the present study, the team focused on two types of machine learning model: a support vector machine (SVM) and a random forest (RF), used to assess mean diffusivity maps from brain MRI scans. Both models were trained, cross-validated within the training dataset and compared for their ability to accurately identify OSA.
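The two-model setup can be sketched as follows, assuming mean diffusivity maps have been flattened into one feature vector per participant. The data here are random placeholders and the cohort size, kernel choice and cross-validation scheme are assumptions for illustration, not the study’s actual configuration.

```python
# Hedged sketch of the two-model comparison: an SVM and a random forest
# trained on (here, synthetic) mean diffusivity features and compared by
# cross-validated accuracy and AUC within the training dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 2000))   # stand-in for voxelwise mean diffusivity
y = rng.integers(0, 2, size=88)   # 1 = OSA, 0 = control (placeholder labels)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: accuracy={acc.mean():.2f}, AUC={auc.mean():.2f}")
```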
The researchers recruited 59 treatment-naive patients (mean age 50.2 years, 61% male) who had recently been diagnosed with at least moderately severe OSA through overnight polysomnography. In addition, 18 OSA patients and 29 controls who were not included in the training set were used to assess the predictive accuracy of the models.
The cross-validation process showed that the accuracy of the SVM model was 0.73 whereas that of the RF model was 0.77, i.e., both models showed similar fitting accuracy for OSA and control data. Similarly, the area under the receiver operating characteristic curve was 0.85 for the RF model and 0.84 for the SVM model.
The authors concluded that both the RF and SVM models were comparable for the diagnosis of OSA and suggested that either could be used as a screening tool for OSA in patients where diffusion tensor imaging data was available.
Citation
Pang B et al. Machine learning approach for obstructive sleep apnea screening using brain diffusion tensor imaging. J Sleep Res 2022.