Helse Bergen, exit report: The use of artificial intelligence (AI) in the follow-up of vulnerable patients

The use of artificial intelligence (AI) makes it possible to identify which patients are at risk of rapid readmission to hospital. Use of such a tool could enable the health service to provide better cross-functional follow-up, in the hope of sparing the patient (and society) from unnecessary hospital admissions. In the regulatory sandbox, the Norwegian Data Protection Authority and Helse Bergen have explored what such a tool should look like in order to comply with the data protection regulations. 

Summary

Helse Bergen wishes to use artificial intelligence (AI) to establish an automated warning system for patients with a high probability of readmission. The warning system is intended to help clinicians identify patients who require additional follow-up to avoid being readmitted to hospital after a short period of time. The objective of the sandbox project was to clarify the legal position regarding the use of AI and to explore how patients’ rights may be protected.
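
As a purely illustrative aside, the sketch below shows how such a warning system could work in principle: a classifier trained on historical admission data estimates each patient's probability of readmission, and patients above an alert threshold are flagged for clinician review. All features, data and the threshold are hypothetical; the report does not describe Helse Bergen's actual model.

```python
# Minimal sketch of a readmission-risk warning, assuming a binary
# classifier trained on historical admission data. The features (age,
# prior admissions, length of stay), the 30-day readmission label and
# the alert threshold are all hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Hypothetical historical data: age, number of prior admissions,
# length of stay (days); label = readmitted within 30 days (1/0).
X_train = rng.normal(loc=[70, 2, 5], scale=[10, 1.5, 3], size=(500, 3))
y_train = rng.integers(0, 2, size=500)

model = LogisticRegression().fit(X_train, y_train)

def flag_for_follow_up(patient_features, threshold=0.7):
    """Return (flag, risk): flag is True if the predicted readmission
    probability exceeds the alert threshold, signalling that a
    clinician should review the follow-up plan. The output is
    decision support only; the assessment remains with the clinician."""
    risk = model.predict_proba([patient_features])[0, 1]
    return risk >= threshold, risk

needs_review, risk = flag_for_follow_up([82, 4, 12])
print(f"Flag for follow-up: {needs_review} (risk estimate {risk:.2f})")
```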

Findings in brief

  • Legality: Helse Bergen has a legal basis for the development and use of AI as a decision-support tool as part of its provision of healthcare services to patients under the EU’s General Data Protection Regulation (GDPR) and a supplementary legal basis pursuant to Norway’s healthcare legislation.
  • Transparency: Patients should receive general information that patient data are being used to develop AI tools. In the application phase, an entry should be made in each patient's medical records explaining the result in a manner that is intelligible for both clinicians and patients.
  • Fairness: Skewed or unrepresentative data underlying an algorithm can lead to discrimination and prevent patients from being treated equitably. To avoid this, routine quality controls of the algorithm should be performed (see the sketch after this list). The warning system is a decision-support solution. To ensure human control of the decision-making process, uniform guidelines must be drawn up for how healthcare personnel should interpret and apply the results of AI tools used in patient care.
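
As a hedged illustration of the kind of routine quality control the fairness point calls for, the sketch below compares the model's sensitivity (recall) across patient subgroups; a persistent gap between groups may indicate that skewed underlying data is disadvantaging one of them. The group attribute and the evaluation data are hypothetical, not drawn from the report.

```python
# Minimal sketch of a periodic fairness check, assuming a held-out
# evaluation set with a (hypothetical) group attribute such as age
# band or sex. A large, persistent recall gap between groups can
# reveal bias introduced by skewed training data.
import numpy as np
from sklearn.metrics import recall_score

def recall_by_group(y_true, y_pred, groups):
    """Return recall (sensitivity) per subgroup so drift or bias in
    the algorithm can be spotted during routine quality controls."""
    return {
        g: recall_score(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }

# Hypothetical evaluation data: true labels, model predictions,
# and a group attribute for each patient.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])

for group, recall in recall_by_group(y_true, y_pred, groups).items():
    print(f"group {group}: recall = {recall:.2f}")
```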

Going forward

This exit report helps to illustrate relevant data privacy considerations and risks associated with the use of AI in the clinical care of patients. The project was limited to AI used for clinical decision support. Fully autonomous AI systems, operating without human intervention, will require further study.

The project has concluded that, with regard to documenting its efficacy and utility, the same quality standards must be demanded of AI as of other equipment, programmes or procedures that provide decision support in a clinical setting. Meeting the GDPR’s requirements relating to information, consent and access to one’s own health data lays the foundation for the development and application of new, AI-based clinical tools.

What is the sandbox?

In the sandbox, participants and the Norwegian Data Protection Authority jointly explore issues relating to the protection of personal data in order to help ensure the service or product in question complies with the regulations and effectively safeguards individuals’ data privacy.

The Norwegian Data Protection Authority offers guidance in dialogue with the participants. The conclusions drawn from the projects do not constitute binding decisions or prior approval. Participants are at liberty to decide whether to follow the advice they are given.

The sandbox is a useful method for exploring issues where there are few legal precedents, and we hope the conclusions and assessments in this report will be of assistance to others addressing similar issues.