Artificial intelligence is a complex phenomenon. It will impact the way medical research is conducted, how biomedical data are used, and how healthcare professions and organisations are regulated.
In 2021, the European Hospital and Healthcare Federation (HOPE) published a position paper on artificial intelligence (AI). Here, the organisation’s chief executive Pascal Garel provides an update on this and outlines recommendations on how to ensure that the application of AI in healthcare benefits patients and consumers alike.
What would HOPE envisage as the essential components of a European-wide operational definition of artificial intelligence?
A Europe-wide operational definition cannot be implemented, because what is perceived as ‘being for the common good’ in one sector might be ethically unacceptable in another. A rigid technical definition risks excluding less complex AI-based systems from the legal framework, while an insufficiently clear one could invite divergent legal interpretations at national level, thereby defeating its purpose.
That is why HOPE is advocating a health sector-specific approach to AI. Personal health data are a particularly sensitive category: their leakage or misuse could lead to severe consequences and negative health outcomes. Members of vulnerable groups are especially powerless when it comes to refuting AI-enabled results or administrative decisions about entitlements. Safeguarding fundamental rights, protecting data and privacy, and ensuring the safety and security of individuals who contribute or use data are therefore essential.
While a risk-based approach, as outlined in the European Commission’s AI Act, has its merits, particularly in sectors where AI deployment is straightforward and the risks of abuse are minor, a health context demands greater nuance. Extra care is needed to prevent seemingly ‘low risk’ AI systems from inadvertently harming individuals, whether by revealing their identities or by drawing conclusions about them based on biased data. Fitness and wellness applications, for example, are commercial products whose standards and purposes differ from those of actual medical devices used in healthcare environments.
The uses outlined in the AI Act, as well as in the European Health Data Space (EHDS), are deliberately broad in order to exploit the market potential of AI solutions as much as possible. These uses must be balanced with ethical and human rights considerations to build support for trustworthy AI.
What are the relevant EU stakeholder groups?
Regarding the definition of AI in health, the health community should be provided with ample opportunities to co-shape AI policies. The debate is not only about technology, but also about the future of health systems and how healthcare is provided. This includes Member States’ ability to protect vulnerable populations from the ‘AI frenzy’ of Big Tech firms eager to reap profits from securing vast amounts of personal health data. It is about the impact of AI on everybody’s lives.
On the other hand, AI holds great potential to improve healthcare provision and health research. It could even ‘re-humanise’ healthcare if (co-)developed and deployed in an ethical, transparent, and inclusive way. It could, for example, take care of routine health administration and documentation tasks and provide crucial decision support in diagnosis, treatment and follow-up. Many medical disciplines could gain from AI support and researchers could uncover links that were hitherto impossible to detect. Discussions should include patients, healthcare professionals, consumers, researchers, public health experts, and representatives of vulnerable groups and human rights groups.
Should the definition of AI include the range of healthcare settings to which it is applicable and of possible benefit?
Specifying the healthcare settings that stand to benefit from AI would help close any loopholes where the agreed rules do not apply. These settings need not be part of the definition of AI as such but could be outlined in a healthcare-specific protocol within the legislation.
It is difficult to capture the diversity of healthcare provision today as many Member States are experimenting with new models of care to lessen the pressure felt by the hospital sector. The AI legal framework should nonetheless reflect this multiplicity. This includes public, private and other categories of hospitals – whether focusing on providing general or specialised services – nursing homes and other long-term care facilities, outpatient facilities, ‘virtual wards’ and at-home care provision.
Should a legal framework for integration be developed only after specific arenas for which there are clear benefits are defined?
Artificial intelligence is already in wide use, and new solutions come to market every week, so a legal framework should be devised without delay. This framework needs to balance fostering AI innovation and international collaboration with mindfulness of the consequences that could arise from increased reliance on AI, including physical or mental harm and violations of fundamental rights. AI systems could be adapted to uses other than their declared purposes, they could be deliberately or unknowingly misused, and their results could be biased for various reasons.
Restricting AI implementation to agreed, socially and ethically acceptable uses would build confidence. As it stands, however, the proposed legal framework (the AI Act coupled with the AI-relevant provisions of the European Health Data Space) is rather confusing and does not adequately convey what trustworthy AI in healthcare will look like in practice, as the potential uses remain indistinct.
What are the potential and real barriers to EU-wide adoption of a working definition of AI?
AI serves many different purposes in different fields, and dependence on it will only increase as Europe seeks to reap the benefits of harvesting data across sectors. This diversity is one of the biggest real barriers: different sectors have their own AI needs and have developed their own terminologies, which complicates devising a common working definition. Even so, a broad and somewhat vague overall definition of AI – if supported by more precise sectoral delineations – is still better than a very narrow one. The latter could exclude certain categories of systems, or encourage developers to maintain that their systems do not qualify as AI even though they contain very similar features.
Creating a legislative framework is not only about terminology, but also about understanding the broader environment in which AI operates. Like any other technology, AI depends on infrastructure, the availability of quality data, and financial and human resources. In healthcare, developing – and properly communicating – a clear strategic vision for AI would alleviate common fears and help move it from ‘science fiction’ to a recognised tool for health systems to improve quality of care.
Another key barrier is that the European AI framework intersects and overlaps with many other existing or proposed EU laws and initiatives covering mandatory responsibilities for manufacturers and users of digital technologies and data. These include sectoral product legislation (e.g., the Machinery Directive and the General Product Safety Directive) and legislation dealing with data liability and safety (e.g., the Data Governance Act and the Open Data Directive). Proper implementation of the GDPR must not be hampered by the development of an AI-specific legal architecture, and healthcare-relevant EU legislation must also be adapted accordingly.
Building a robust and future-oriented cybersecurity legal framework will be especially important for the development and protection of a rights-based, human-centric European approach to artificial intelligence.