Artificial intelligence (AI) is revolutionising histopathology by enabling quicker, more consistent analysis of digitised tissue slides to aid cancer diagnosis. To keep pace with the surge in AI research within oncology, Dr Heba Sailem and her team at King’s College London have created an innovative dashboard – HistoPathExplorer – that allows users to generate customised searches, compare AI tools and explore emerging trends to enhance patient care.
Artificial intelligence (AI) in histopathology enables the automated analysis of digitised tissue slides, helping to detect microscopic patterns in tumour images, quantify features and support diagnostic decisions with greater speed and consistency.
These tools can assist in tasks such as tumour classification, biomarker detection and grading, offering scalable solutions to help pathologists and oncologists in routine practice.
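To make this concrete, the minimal sketch below (purely illustrative, and not the authors' pipeline or a clinical tool) shows how a convolutional neural network could score a single tissue patch tiled from a digitised whole-slide image; the network head here is randomly initialised, whereas in practice it would be trained on expert-annotated patches.

```python
# Illustrative sketch only: scoring one hypothetical tissue patch with a CNN.
# In a real pipeline the whole-slide image is tiled into many patches and the
# network is trained on expert-annotated examples; here the weights are random.
import torch
from torchvision import models

patch = torch.rand(1, 3, 224, 224)  # stand-in for a 224x224 RGB tissue patch

model = models.resnet18(weights=None)                 # backbone, untrained for this sketch
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # two classes: normal vs tumour
model.eval()

with torch.no_grad():
    probs = torch.softmax(model(patch), dim=1)        # class probabilities for the patch

print(f"P(tumour) = {probs[0, 1].item():.2f}")        # meaningless until trained; shown for shape only
```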
As AI capabilities advance rapidly, oncologists and pathologists face growing difficulty keeping pace with developments in their specialities: new models and studies emerge constantly, making it hard to stay up to date.
The sheer volume of publications and the technical jargon add to the difficulty. Even when relevant studies are identified, comparing them to generate meaningful evidence is difficult because of differences in methods and reporting standards.
Dataset challenges
Another key issue is identifying relevant datasets for training AI algorithms. Many models are developed using datasets that may not reflect real-world patient diversity, limiting their generalisability and raising issues of bias.
Inconsistent reporting of performance metrics and a lack of external validation further complicate interpretation. For example, some studies report only the area under the receiver operating characteristic (ROC) curve – a measure of the model’s ability to distinguish between positive and negative cases – without reporting sensitivity and specificity.
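To illustrate why a single summary number is not enough, the sketch below (synthetic data, not drawn from any reviewed study) computes AUC together with sensitivity and specificity at a chosen operating threshold using scikit-learn; the threshold and score distribution are assumptions made purely for the example.

```python
# Synthetic example: AUC summarises ranking ability across all thresholds,
# but sensitivity and specificity describe behaviour at the threshold actually
# used in practice, so both views are needed to judge clinical utility.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                # hypothetical ground-truth labels
y_score = np.clip(0.35 * y_true + rng.normal(0.4, 0.2, 1000), 0, 1)   # hypothetical model scores

auc = roc_auc_score(y_true, y_score)                                  # threshold-free summary

threshold = 0.5                                                       # assumed operating point
y_pred = (y_score >= threshold).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                                          # true-positive rate at this threshold
specificity = tn / (tn + fp)                                          # true-negative rate at this threshold

print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```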
Even when several performance metrics are reported, they can vary from dataset to dataset, raising the question of how best to compare and assess models in a healthcare setting. Another major challenge is the ‘black box’ nature of many AI systems and the lack of transparency in how they make decisions, which leads to hesitation in clinical adoption.
There is also a notable lack of AI research into rare and less common cancers. Most AI models are trained on large datasets derived from more prevalent cancer types.
We analysed over 1,500 studies and found that 47% focused on breast, lung or colorectal cancers – cancer types with higher data availability. This creates a gap in innovation for cancers with fewer cases, limited annotated data or complex histological subtypes.
Moreover, certain types of clinical data, such as long-term follow-up or detailed treatment response information, are often less available, which limits the development of AI tools beyond diagnosis. As a result, 75% of the studies we reviewed focused primarily on cancer detection and subtyping, with far fewer addressing prognosis or treatment planning – equally critical areas for improving patient outcomes.
Defining a quality index for evaluating AI methodologies
Many of the studies we reviewed omitted important details: some reported only a single performance metric or gave limited information about the AI architecture used. Moreover, as we develop AI models ourselves, we regularly benchmark our methods against available code and datasets, which also supports the reproducibility of the work. This led us to define an index that helps oncologists and engineers assess the completeness of the methods and the clinical applicability of AI tools described in published studies.
The index has five features, assessing whether a study:
- reports at least three performance metrics for a comprehensive evaluation
- includes benchmarking against other models
- provides access to implementation details, such as code and data, for reproducibility
- uses external validation to ensure generalisability
- clearly describes the methodology, pre-processing steps and model architecture.
This structured approach enables users to quickly assess the robustness and clinical readiness of AI models, supporting more informed and confident decision-making.
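As one possible reading of how such an index could be scored – an equal-weight count of the five criteria, which is our assumption for illustration rather than the dashboard’s exact implementation – a study might be rated as follows:

```python
# Hypothetical scoring sketch: one point per criterion, giving a 0-5 quality index.
# The equal weighting is an assumption for illustration, not the published scheme.
from dataclasses import dataclass

@dataclass
class StudyReport:
    reports_three_metrics: bool    # at least three performance metrics reported
    benchmarks_other_models: bool  # compared against other models
    shares_code_and_data: bool     # implementation details available for reproducibility
    external_validation: bool      # validated on an independent external cohort
    clear_methodology: bool        # pre-processing and architecture clearly described

def quality_index(study: StudyReport) -> int:
    """Count how many of the five reporting criteria the study satisfies."""
    return sum([
        study.reports_three_metrics,
        study.benchmarks_other_models,
        study.shares_code_and_data,
        study.external_validation,
        study.clear_methodology,
    ])

example = StudyReport(True, True, False, False, True)
print(quality_index(example))  # -> 3 out of 5
```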
Developing HistoPathExplorer
My group is working at the forefront of AI development in histopathology to support clinicians in making faster and more accurate patient diagnoses. In recent years, the number of papers published on AI for digital pathology has grown rapidly, reaching an average of one paper per day, making it difficult to determine which AI methods are best to exploit and which clinical areas have unmet needs.
To this end, we developed HistoPathExplorer to accelerate AI research in histopathology and its translation to the clinic. This online dashboard curates data from more than 1,500 articles and was designed to address four key goals:
- To enable users to instantly identify, evaluate and compare relevant studies and deep learning approaches that represent the current state of the art across various pathological applications
- To help uncover the factors that contribute to enhanced AI performance, such as dataset characteristics, annotation quality and model architecture
- To offer a platform for gaining a deeper understanding of both the challenges and opportunities that exist in improving these tools for clinical translation, ranging from generalisability issues to regulatory and workflow integration
- To support decision-makers by facilitating the rapid synthesis of evidence, helping inform clinical policies and guidelines.
By providing in-depth details of AI studies, the HistoPathExplorer dashboard empowers clinicians, researchers and policy stakeholders to make informed decisions about adopting and implementing AI in cancer diagnostics.
Translating AI research into clinical practice
HistoPathExplorer bridges the gap between academic AI research and clinical practice by making complex models more accessible and interpretable to oncologists and pathologists. The platform helps users identify relevant datasets, including those from different countries, so that AI tools can be evaluated fairly across diverse populations.
The dashboard also assists decision-makers in assessing the reliability, applicability and evidence behind AI tools by offering transparent benchmarking and a structured quality index. This enables faster synthesis of findings and more informed decisions around implementation and translation. Additionally, by highlighting where most AI efforts are concentrated, the platform reveals underexplored areas, guiding researchers toward unmet clinical needs and supporting the definition of standards for reproducibility, reporting and validation.
HistoPathExplorer was designed with these multidisciplinary needs in mind. By providing an accessible, interactive platform that showcases a wide range of published AI models and methodologies in histopathology, it creates a shared space for discussion and learning across disciplines.
The platform enables clinicians to explore how different AI tools perform across diagnostic tasks, with clear explanations of methodologies and quality indicators that make the information more interpretable for non-technical users. For AI researchers, it offers insights into unmet clinical needs and real-world challenges, fostering the development of more targeted and usable models. Pathologists can better assess how well models align with diagnostic workflows and where human expertise is still critical.
Additionally, by identifying publicly available data from different countries, HistoPathExplorer encourages global dialogue, enabling researchers and clinicians to learn from diverse datasets and enhance model generalisability. This cross-border collaboration can bridge gaps between engineers and clinicians, fostering the co-design of AI tools better suited to clinical environments. Ultimately, we believe that this collaborative effort will improve diagnostic accuracy and patient outcomes, accelerating AI adoption in oncology.
Driving the next wave of progress with HistoPathExplorer
Ultimately, HistoPathExplorer provides a central, user-friendly resource that enables clinicians, researchers and policymakers to navigate the fast-moving AI landscape in oncology and contribute to its safe and effective clinical translation.
To further develop HistoPathExplorer, we aim to allow users to explore, compare and apply AI models on publicly available histopathology data. This extension will enable users to visualise model outputs, understand spatial tissue features and link findings to clinical variables without requiring programming skills. This approach could significantly advance AI deployment in histopathology and have a transformative impact on the field.
Author
Heba Sailem MSc PhD
Senior lecturer in biomedical AI and data science, School of Cancer and Pharmaceutical Sciences, King’s College London, UK