
Hospital Healthcare Europe

Press Releases

Take a look at a selection of our recent media coverage:

Machine learning model detects COVID-19 after 3 days of self-reported symptoms

4th August 2021

Using self-reported symptoms, a machine learning model was able to predict the early stages of COVID-19 infection after only three days.

The timely detection of COVID-19 infection through PCR testing is vital to containing the spread of the virus. However, while PCR testing has become the most widely used analytical technique for detecting the virus, the result is highly dependent on the timing of sample collection, the type of specimen and the quality of the sample. An alternative means of identifying infected individuals is to screen for a combination of symptoms and ensure that only those with an appropriate symptom profile are tested. This approach was used in an Italian study of nearly 3000 subjects, which, with the aid of a short diagnostic scale, was able to correctly identify the symptoms associated with infection.

The same methodology underpins the COVID-19 Symptom Study App, a longitudinal, self-reported study of the symptom profile of patients with COVID-19. Using machine learning, the study has developed models that identify the main symptoms of infection and their correlation with outcomes. Nevertheless, current models are not conducive to the early detection of infection. This prompted the COVID-19 Symptom Study team to create a machine learning model that captured self-reported symptoms for only the first three days of illness and used this information to predict an individual’s likelihood of being COVID-19 positive.

The team used three different machine learning models to analyse self-reported symptoms. The first was based on the NHS algorithm, which treats the presence of cough, fever or loss of smell between days 1 and 3 as potentially representative of COVID-19 infection. The second, a logistic regression model, is based on a previously validated algorithm incorporating loss of smell, persistent cough, fatigue and skipped meals, which has been found to correlate well with COVID-19 infection. For the third, referred to as a hierarchical Gaussian process model, the team combined 18 self-reported symptoms with co-morbidities and demographic data. All three models were compared in terms of sensitivity, specificity and area under the receiver operating characteristic curve (AUC), and evaluated using a training set of symptoms self-reported between April and October 2020 and a test set of symptoms self-reported between October and November 2020.
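As a rough illustration of this kind of comparison, the sketch below builds an NHS-style symptom rule and a logistic regression on synthetic self-reported symptom data and scores both by AUC on a held-out set. The feature names, synthetic data and chronological split are assumptions for illustration only, not the study's code or data.

# Minimal sketch (not the study's code) of comparing rule-based and
# regression-based symptom classifiers by AUC on a held-out set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical binary symptom matrix: one row per participant, days 1-3 pooled.
symptoms = ["cough", "fever", "loss_of_smell", "fatigue", "skipped_meals"]
n = 5000
X = rng.integers(0, 2, size=(n, len(symptoms)))
# Synthetic outcome, loosely driven by loss of smell and cough.
logit = -2.0 + 2.5 * X[:, 2] + 1.0 * X[:, 0] + 0.5 * X[:, 1]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# A chronological split stands in for the April-October training set and
# the October-November test set described above.
X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]

# Model 1: NHS-style rule -- flag anyone reporting cough, fever or loss of smell.
nhs_score = (X_test[:, [0, 1, 2]].sum(axis=1) > 0).astype(float)

# Model 2: logistic regression on the four previously validated symptoms.
lr = LogisticRegression().fit(X_train[:, [2, 0, 3, 4]], y_train)
lr_score = lr.predict_proba(X_test[:, [2, 0, 3, 4]])[:, 1]

# Model 3 in the study is a hierarchical Gaussian process over 18 symptoms plus
# co-morbidities and demographics; its predictions would be scored the same way.
for name, score in [("NHS rule", nhs_score), ("logistic regression", lr_score)]:
    print(f"{name}: AUC = {roc_auc_score(y_test, score):.2f}")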

Findings
There were data from 182,991 participants in the training set and 15,049 in the test set, with a similar symptom distribution. The predictive power of the three models differed: the hierarchical Gaussian process model showed the highest predictive value (AUC = 0.80, 95% CI 0.80–0.81) using three days of symptoms, compared with the logistic regression model (AUC = 0.74) and the NHS model (AUC = 0.67). For the prediction of COVID-19 infection, the hierarchical Gaussian process model had a sensitivity of 73% and a specificity of 72%, a sensitivity higher than that of either the logistic regression model (sensitivity 59%, specificity 76%) or the NHS model (sensitivity 60%, specificity 75%). Interestingly, the key symptoms predictive of early COVID-19 were loss of smell, chest pain, persistent cough, abdominal pain, feet blisters, eye soreness and unusual pain.

The authors concluded that the hierarchical Gaussian process model was successfully able to predict the early signs of infection and could be used to enable referral for testing and self-isolation when these symptoms were present.

Citation
Canas LS et al. Early detection of COVID-19 in the UK using self-reported symptoms: a large-scale, prospective, epidemiological surveillance study. Lancet Digit Health 2021

Machine learning model predictive of mortality in sepsis

26th July 2021

In patients with sepsis, the use of a machine learning algorithm identified six variables that were predictive of 7- and 30-day mortality.

Sepsis can be defined as a life-threatening organ dysfunction caused by a dysregulated host response to infection. It is responsible for around 11 million deaths each year, which amounts to approximately 20% of all global deaths. It is therefore crucial that clinicians have a comprehensive understanding of the clinical factors that can help with the early identification of patients for whom a poor outcome is likely, particularly since early use of crystalloid therapy reduces mortality, as does prompt administration of antibiotics. Though several scoring systems for sepsis are available, these are based on the assessment of vital signs, which can sometimes be normal upon admission to an emergency department. While machine learning has been shown to have some predictive power for mortality, none of the variables currently used in these models reflects the symptoms at first presentation.

This led a team from the Department of Medical Sciences, Örebro University, Sweden, to use machine learning to identify the variables predictive of 7- and 30-day mortality in sepsis patients, based on the clinical presentation at an emergency department. They employed a retrospective design and included patients aged 18 years and older admitted to hospital with suspected sepsis. The team input previously identified variables (e.g., abnormal temperature, acute altered mental status) into the machine learning algorithm. The sensitivity and specificity of the predictive models generated by the machine learning algorithm were calculated from the area under the receiver operating characteristic curve (AUC).
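For illustration, a minimal sketch of this model-building step is shown below: binary presentation variables are fed to a generic classifier and scored by cross-validated AUC. The variable names, synthetic data and choice of classifier are assumptions, since the summary does not specify the study's algorithm or coding of variables.

# Illustrative sketch only: candidate presentation variables scored by
# cross-validated AUC with a generic scikit-learn classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical binary variables recorded at emergency-department arrival.
variables = ["abnormal_temperature", "acute_altered_mental_status",
             "low_oxygen_saturation", "chills", "arrival_by_ems",
             "breathing_difficulty"]
n = 445  # cohort size reported in the findings below
X = rng.integers(0, 2, size=(n, len(variables)))
# Synthetic 7-day mortality label, loosely weighted toward altered mental status.
risk = -2.0 + 1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.5 * X[:, 0]
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

# Cross-validated AUC for the candidate-variable model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.2f}")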

Findings
A total of 445 patients with sepsis and a median age of 73 years (52.6% male) were included in the retrospective analysis. Overall, 234 (49.7%) had severe sepsis, and 63 patients died within 7 days of admission and 98 within 30 days. The accuracy of the 7-day predictive model was maximal after the inclusion of only six variables: fever, abnormal verbal response, low oxygen saturation, arrival by emergency services, abnormal behaviour/level of consciousness and chills. Using these variables, the model achieved a sensitivity of 0.84 (95% CI 0.78–0.89) and a specificity of 0.67 (95% CI 0.64–0.70). For the prediction of 30-day mortality, again only six variables were significant: abnormal verbal response, fever, chills, arrival by emergency services, low oxygen saturation and breathing difficulties. This model gave a sensitivity of 0.87 (95% CI 0.81–0.93) and a specificity of 0.64 (95% CI 0.61–0.67).
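As a worked illustration of how such sensitivity and specificity figures arise from a classification threshold, the counts below are hypothetical, chosen only to roughly reproduce the reported 7-day values in a cohort of 445 patients with 63 deaths; they are not the study's data.

# Hypothetical confusion-matrix counts illustrating the reported metrics.
tp, fn = 53, 10    # 7-day deaths correctly and incorrectly classified
fp, tn = 126, 256  # survivors incorrectly and correctly classified

sensitivity = tp / (tp + fn)  # proportion of deaths the model flags
specificity = tn / (tn + fp)  # proportion of survivors correctly cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")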

In discussing their findings, the authors highlighted how their results revealed the importance of using a clinical symptom complex representative of what an emergency department clinician would be likely to encounter in practice. They also suggested that the 7-day model might be of more use in practice, since it would help emergency care staff gauge the likely short-term outcome for patients. They concluded that, given how the clinical presentation of sepsis can often be non-specific, a machine learning algorithm based on symptoms and observations would be most helpful to staff, and that future work should focus on validating the method in other cohorts.

Citation
Karlsson A et al. Predicting mortality among septic patients presenting to the emergency department – a cross-sectional analysis using machine learning. BMC Emerg Med 2021

Study shows physicians’ reluctance to use machine-learning for prostate cancer treatment planning

15th June 2021

A study shows that a machine-learning generated treatment plan for patients with prostate cancer, while accurate, was less likely to be used by physicians in practice.

Advances in machine-learning (ML) algorithms in medicine have demonstrated that such systems can be as accurate as humans. However, few systems have been used in routine clinical practice; ML systems are often tested in parallel with physicians, with the actions suggested by the system not acted upon in practice. Fully utilising ML systems in routine clinical care requires a shift from their current adjunctive support role to being considered as the primary option. In trying to assess the real-world value of an ML algorithm, a team from the Princess Margaret Cancer Centre, Ontario, Canada, explored the value of ML-generated curative-intent radiation therapy (RT) treatment planning for patients with prostate cancer. The team’s overall aim was to evaluate the integration of the ML system as a standard of care, in a two-stage study comprising an initial validation (feasibility) phase followed by clinical deployment. For the validation phase, the team included data from 50 patients to assess ML performance retrospectively: reviewers were asked to compare ML-generated RT plans, in a blinded fashion, with the plans actually used for each patient. In the subsequent deployment phase, again with 50 patients, physician-generated and ML-generated plans were prospectively compared, with the treating physician blinded to the source of each plan.

Findings
The ML system proved to be much faster at generating plans than the equivalent human-driven process (median 47 vs 118 hours, p < 0.01). Overall, ML-generated plans were deemed clinically acceptable for treatment in 89% of cases across both phases (92% during the validation phase and 86% during the deployment phase). In only 10 cases was the ML-generated plan deemed not applicable, because it required consultation with the treating physician, thus unblinding the review process. In addition, 72% of ML-generated RT plans were selected over human-generated RT plans in a head-to-head comparison. However, between the simulated validation phase and the deployment phase, the proportion of ML-generated plans used by the treating physician fell from 83% to 61% (p = 0.02).
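For context, a two-proportion z-test of the kind that could underlie the reported drop from 83% to 61% is sketched below. The group sizes and counts are assumptions chosen for illustration (roughly the 50-patient phases minus unblinded cases) and are not taken from the paper.

# Hedged sketch: two-sided z-test for a difference between two proportions.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions (pooled variance)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal survival function.
    return z, 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Assumed counts: ~83% of 46 simulated cases vs ~61% of 44 deployed cases.
z, p = two_proportion_z(38, 46, 27, 44)
print(f"z = {z:.2f}, p = {p:.3f}")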

The authors were unable to fully account for these differences, suggesting that retrospective or simulated studies cannot fully recapitulate the factors influencing clinical decision-making when patient care is at stake. They concluded that further prospective deployment studies are required to validate the impact of ML in real-world clinical settings and to fully quantify the value of such methods.

Citation
McIntosh C et al. Clinical integration of machine learning for curative-intent radiation treatment of patients with prostate cancer. Nat Med 2021