Helse Bergen, exit report: The use of artificial intelligence (AI) in the follow-up of vulnerable patients

How can we ensure that the algorithm provides a fair result?

When discussing fairness in this sandbox project, our starting point has been three main principles for responsible AI: it must be lawful, ethical and robust. 

These main principles are based on the “Ethics Guidelines for Trustworthy AI”, prepared by an expert group appointed by the European Commission. The same principles are also reflected in the Norwegian government’s National Strategy for Artificial Intelligence from 2020.

A good starting point for upholding these principles is the performance of a Data Protection Impact Assessment (DPIA). A DPIA is a process intended to describe the processing of personal data and assess whether or not it is necessary and proportionate. It also helps to manage the risks that such processing poses to the individual's rights and freedoms, by assessing those risks and determining risk-reducing measures.

Data Protection Impact Assessment (DPIA)

If it is probable that a type of processing of personal data will entail a high risk to people’s rights and freedoms, the controller must assess the planned processing activity’s impact on privacy. The Norwegian Data Protection Authority has drawn up a list of processing activities that always trigger the need to perform a DPIA. The list is available on the Data Protection Authority's website. Among other things, it states that processing personal data by means of innovative technologies, such as AI, and processing special categories of personal data (such as health data) to train algorithms, both require a DPIA to be performed.

In its DPIA, Helse Bergen identified several different risk factors. Two of these in particular indicated a risk that the solution may not adequately fulfil the requirement for fairness:

  1. False negative and false positive results

    The risk that the algorithm predicts so-called false negative or false positive results. This means that someone who should have been given additional follow-up does not receive it, or that a person who does not need additional follow-up is offered it. In the first instance, the patient will be offered a follow-up that accords with current practice, which cannot be said to be associated with a high risk. The second instance represents no risk to the patient, but is undesirable from the perspective of the hospital’s use of resources.
  2. Demographic distortion

    The risk that the algorithm discriminates against certain groups in society. If the algorithm prioritises certain groups, others may feel marginalised or subjected to unfair and inequitable treatment. How both types of risk can be monitored in practice is sketched below.
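
To make these two risk factors concrete, the sketch below counts false negatives and false positives and breaks them down by a demographic attribute. It is illustrative only: the data are invented, and the variable names are our own and not taken from Helse Bergen's solution.

    # Minimal illustrative sketch (not Helse Bergen's code): counting false
    # negatives and false positives per demographic group. All data are invented.
    from collections import defaultdict

    y_true = [1, 0, 1, 1, 0, 0, 1, 0]        # 1 = the patient was in fact readmitted
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]        # 1 = the algorithm recommended extra follow-up
    group  = ["F", "F", "M", "M", "F", "M", "F", "M"]  # e.g. gender

    stats = defaultdict(lambda: {"fn": 0, "fp": 0, "n": 0})
    for truth, pred, g in zip(y_true, y_pred, group):
        stats[g]["n"] += 1
        if truth == 1 and pred == 0:
            stats[g]["fn"] += 1              # false negative: needed follow-up, not flagged
        elif truth == 0 and pred == 1:
            stats[g]["fp"] += 1              # false positive: flagged without needing follow-up

    for g, s in sorted(stats.items()):
        print(f"Group {g}: {s['fn']} false negatives, {s['fp']} false positives out of {s['n']} patients")

Large differences in these rates between groups would be one indication of the demographic distortion described above.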

Below, examples from the DPIA discussion will be used to illustrate how fairness and data protection have been built into the algorithm.

Built-in data protection

In its guidelines, the European Data Protection Board (EDPB) states that built-in data protection is one of several elements of the fairness principle, alongside respect for the data subject’s rights and freedoms (such as freedom from discrimination), the data subject’s expectations and any broader ethical implications of the data processing. Built-in data protection has been a recurring theme in this sandbox project, both in discussions of discrimination and distortion in the algorithm and in discussions of how patients’ rights and freedoms under the data protection regulations can be upheld.

Read more about built-in data protection in the EDPB’s guidelines.

Article 25 of the GDPR establishes a duty to ensure effective data protection in the development of solutions or technical systems through the implementation of technical and organisational measures – in other words, a duty to ensure built-in data protection. On its website, the Norwegian Data Protection Authority underlines that the requirement for built-in data protection must be met before personal data are processed, and that the mechanisms ensuring built-in data protection must be maintained for as long as the processing takes place.

Helse Bergen has developed the readmission-prediction algorithm in-house, and has therefore had ample opportunity to plan necessary measures, such as data minimisation, pseudonymisation and data security measures, to meet the requirement for built-in data protection from the outset. Below, we present some examples of relevant measures in this project.

Data quality and the requirement for data minimisation

The Norwegian National Strategy for AI highlights distortions in the underlying data as a particular obstacle to inclusion and equitable treatment. This is explained as follows: “datasets used to train AI systems may contain historic distortions, be incomplete or incorrect. Poor data quality and errors will embed themselves in the algorithm and may lead to incorrect and discriminatory results.”

One way of avoiding bias in the selection of data is to ensure that the underlying data are adequate and relevant for the algorithm’s predefined purpose. The data minimisation principle states that personal data may be lawfully processed only to the extent necessary to fulfil the intended purpose. It therefore restricts the use of large quantities of personal data in the development of an algorithm if the purpose can be achieved with a smaller dataset.

The purpose of Helse Bergen's algorithm is to quantify the risk that a patient may be readmitted to hospital in the future. As mentioned earlier, Helse Bergen planned to use only historic data that has proved to have a statistical correlation with the risk of readmission. The underlying data comprises the previous admission records of a large number of patients.

The project quickly found that a small number of key parameters made the algorithm’s predictions just as accurate as a much larger set of parameters with little relevance for readmission. The data variables currently being used include previous admissions, number of bed days, gender, age, indicators of urgency and the number of primary and secondary diagnoses. Information about the patient’s diagnoses is recorded only as a number and is not specified by type. In addition, Helse Bergen decided that the algorithm should be used only for patient groups where there was a risk of frequent readmission, with the focus on patients who had recently been admitted to hospital.
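
The report does not specify which model type Helse Bergen uses. Purely as an illustration of how the listed variables could be turned into a risk score, the sketch below fits a logistic regression to synthetic data; the feature order mirrors the variables above, while the values and the choice of model are assumptions.

    # Illustrative sketch only: synthetic data and an assumed model type
    # (logistic regression), showing how the listed variables could feed
    # a readmission-risk score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.integers(0, 6, n),     # previous admissions
        rng.integers(0, 30, n),    # number of bed days
        rng.integers(0, 2, n),     # gender (encoded 0/1)
        rng.integers(18, 95, n),   # age
        rng.integers(0, 2, n),     # urgency indicator
        rng.integers(1, 8, n),     # number of primary and secondary diagnoses
    ])
    y = rng.integers(0, 2, n)      # 1 = readmitted within the follow-up window (synthetic)

    model = LogisticRegression(max_iter=1000).fit(X, y)
    risk_scores = model.predict_proba(X)[:, 1]   # estimated probability of readmission
    print(risk_scores[:5])

Note that only the number of diagnoses enters the feature set, in line with the data minimisation choice described above.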

The algorithm will be trained continuously through the input of new data from the patient records system DIPS. To ensure that the Care Pathway Database is updated at the time the algorithm makes its predictions, the algorithm must be run frequently. In this way, any changes in DIPS will be included in the basis for the decision.
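
As a rough illustration of such a recurring run, the sketch below scores the latest admission records and writes the results back to the Care Pathway Database. The helper functions and the hourly interval are hypothetical and do not describe Helse Bergen's actual integration with DIPS.

    # Hypothetical sketch of a recurring scoring job. The helpers
    # fetch_latest_admissions and write_risk_score are invented for illustration
    # and stand in for the real integration with DIPS and the Care Pathway Database.
    import time

    SCORING_INTERVAL_SECONDS = 60 * 60   # assumed frequency (hourly)

    def run_scoring_cycle(model, fetch_latest_admissions, write_risk_score):
        """Pull fresh admission data, rescore each patient, store the updated risk."""
        for patient_id, features in fetch_latest_admissions():
            risk = model.predict_proba([features])[0, 1]   # updated readmission risk
            write_risk_score(patient_id, risk)

    # The cycle would then repeat for as long as the solution is in use:
    # while True:
    #     run_scoring_cycle(model, fetch_latest_admissions, write_risk_score)
    #     time.sleep(SCORING_INTERVAL_SECONDS)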

The algorithm’s accuracy

As noted in the DPIA discussion above, there will always be a risk of the algorithm predicting so-called false negative or false positive results; this is an issue found in the vast majority of algorithms that use AI. In this context, a false negative means that a patient who should have been given additional follow-up does not receive it, and is instead offered follow-up in accordance with current practice, which cannot be said to be associated with a high risk. A false positive means that additional follow-up is offered to a patient who does not need it, which poses no risk to the patient but is undesirable from the perspective of the hospital’s optimal use of resources.

Because the underlying data change over time, there is a risk that the algorithm's accuracy will also change. To uncover any reduction in accuracy or the emergence of distortions in the data on which the predictions are based, Helse Bergen plans to perform routine quality assurance. In the longer term, Helse Bergen will be able to extract historic data about which patients were readmitted and document the extent to which the algorithm succeeded in identifying them in advance.
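
A minimal sketch of such a retrospective check is shown below, assuming that the algorithm's recommendations and the actual readmissions are logged per quarter; the figures are invented. Tracking the sensitivity (the share of actual readmissions the algorithm flagged in advance) over time would make a gradual loss of accuracy visible.

    # Minimal sketch, with invented figures: recompute sensitivity per quarter
    # so that a drop in accuracy over time becomes visible.
    import pandas as pd

    log = pd.DataFrame({
        "quarter":    ["2023Q1", "2023Q1", "2023Q1", "2023Q2", "2023Q2", "2023Q2"],
        "flagged":    [1, 0, 1, 1, 0, 0],   # algorithm recommended extra follow-up
        "readmitted": [1, 1, 0, 1, 0, 1],   # outcome taken from historic records
    })

    for quarter, df in log.groupby("quarter"):
        readmitted = df[df["readmitted"] == 1]
        sensitivity = (readmitted["flagged"] == 1).mean() if len(readmitted) else float("nan")
        print(f"{quarter}: sensitivity {sensitivity:.2f}")

The same breakdown could be repeated per patient group, which would link this quality assurance to the fairness discussion earlier in this section.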

Correct use of the algorithm in practice

The algorithm developed by Helse Bergen is intended to be used as a decision-support tool and will be supplemented by the healthcare professionals’ own expert assessments. However, there will always be a risk that the algorithm’s output may be used uncritically and function in practice as a fully automated system. A decision without real human intervention will, in practice, come under the prohibition against fully automated decisions established in Article 22 of the GDPR.

To ensure human intervention in the decision, Helse Bergen plans to draw up a set of uniform guidelines after involving user committees. Questions to be clarified include how the result should be interpreted and the weight that should be attached to the recommendation. Increased awareness among healthcare personnel of the algorithm’s accuracy will make it easier to uncover any errors or distortions that arise, and to make the necessary adjustments.