
Fairness

How can one ensure that algorithms produce fair results? What factors in the different parts of the development process can lead to unfairness or discrimination? Ensuring that a machine learning model acts fairly and does not discriminate is a challenging task. Nevertheless, it is a requirement that all processing of personal data takes place in a fair manner and that the outcome of the model's calculations does not discriminate.

Approach to fairness

When we have discussed fairness in this sandbox project, we have taken as our starting point three main principles for responsible artificial intelligence: that it should be lawful, ethical and robust. These principles are based on the “Ethics guidelines for trustworthy AI”, prepared by an expert group appointed by the European Commission, and are also reflected in the National strategy for artificial intelligence.

In its guidelines on data protection by design and by default, the European Data Protection Board (EDPB) lists several aspects of the fairness principle, among them non-discrimination, the data subject's expectations, the broader ethical issues of the processing, and respect for rights and freedoms.

Read Guidelines 4/2019 on Article 25 Data Protection by Design and by Default | European Data Protection Board (europa.eu)

The fairness principle contains several aspects in addition to non-discrimination. Discrimination in algorithms is, however, a well-known challenge in artificial intelligence, and the sandbox work has therefore focused on it. A major public body such as NAV has a particular responsibility to be aware of the imbalance of power in interactions between users and NAV's systems.

The fairness principle is also a central element in other legislation, including various human rights provisions and the Equality and Anti-Discrimination Act. These statutes may also have a bearing on the question of fairness, and their requirements may be more or less stringent than those of the data protection legislation.

NAV’s model

NAV has developed methods for testing the fairness of the model. The main focus has been on the model's bias, i.e. potential biases in data collection, the choice of variables, model selection or implementation, and how these manifest themselves in skewed outcomes and possible discriminatory effects. Machine learning models will inevitably treat people differently, since the desire for more user-adapted differentiation is often what motivates the development of a machine learning model in the first place. Avoiding arbitrary discrimination was one of the central themes of this sandbox project. NAV does not wish to reproduce or reinforce existing biases, but risks doing exactly that if bias is not analysed and addressed.

To support this analysis, NAV wishes to evaluate what a fair algorithm outcome involves in a legal sense. Developing a machine learning model that satisfies several legal requirements for fairness involves operationalising legal and ethical principles. (In addition to the GDPR, NAV must comply with the Public Administration Act, the NAV Act and the Equality and Anti-Discrimination Act.)

To evaluate whether the model is consistent with the concepts of fairness in the legislation, it is useful to clarify how the model will function once it is put into production. What kind of outcome can, for example, groups with a particular need for protection against unfair discrimination expect to see?

NAV itself points out that this kind of analysis does not cover all the ways in which the processing of personal data can be unfair or discriminatory. However, focusing on the outcome (regardless of issues associated with, for example, data collection, processing and practical application of the model) facilitates a discussion of how the fairness concept should be interpreted and how it can be operationalised.

In operationalising the fairness evaluation, NAV has chosen to focus on outcome fairness, i.e. whether the outcomes of the model are distributed fairly across various groups. The evaluation is comparative: it examines how the various groups covered by the model are treated relative to each other, rather than measured against a standard or norm. NAV has also concluded that model errors resulting in unnecessary dialogue meetings being convened are less serious than errors resulting in necessary meetings not being held. One of the starting points for evaluating fairness in the prediction model is the National Insurance Act Section 8-7a, which instructs NAV to hold a dialogue meeting “except where such a meeting is considered to be clearly unnecessary”. This type of requirement suggests that, in cases of doubt, one dialogue meeting too many should be held rather than one too few.
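To illustrate how such a comparative outcome evaluation could be set up in practice, the sketch below computes the two error types per group and places them side by side. It is a minimal, hypothetical sketch: the column names "group", "meeting_needed" and "meeting_predicted_unnecessary" are assumptions made for the purpose of illustration and do not describe NAV's actual data or code.

```python
import pandas as pd

def outcome_fairness_table(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group rates of the two error types, for comparison across groups."""
    rows = []
    for group, part in df.groupby("group"):
        needed = part["meeting_needed"]                  # ground truth: a meeting was in fact needed
        skipped = part["meeting_predicted_unnecessary"]  # model output: meeting predicted unnecessary
        rows.append({
            "group": group,
            # Most serious error: meeting predicted unnecessary although it was needed.
            "missed_meeting_rate": (needed & skipped).mean(),
            # Less serious error: meeting held although it was unnecessary.
            "unnecessary_meeting_rate": (~needed & ~skipped).mean(),
            "n": len(part),
        })
    return pd.DataFrame(rows).set_index("group")
```

The comparison is deliberately relative, in line with the approach described above: large gaps between groups in the rate of missed meetings, the error type considered most serious, would be the first thing to examine more closely.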

From a personal privacy perspective, fairness must be evaluated at both group level and individual level. The model may conflict with the fairness principle even where no group is discriminated against, if individual data subjects are negatively affected to a significant degree, for example where rare combinations of factors lead to very negative effects for the data subject.

Moreover, one can envisage that the prediction of the length of sickness absence will, for certain groups, be misleading as a basis for evaluating when a dialogue meeting should be convened. This could apply where the future length of sickness absence is not the best decision factor for whether a dialogue meeting is ‘clearly unnecessary’, and where, from a fairness perspective, such case histories might need to be identified to avoid this kind of imbalance. For example, one can envisage situations in which several pregnant women have long periods of sickness absence where it is nevertheless clearly unnecessary to hold dialogue meeting 2. The same might apply to partially disabled persons who will be on sick leave for one year from confirmation of their residual work ability percentage, with full disability pension as the future objective.
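One conceivable way of handling such case histories is to flag them before the prediction is used, so that the prediction is not relied on by itself. The sketch below is purely illustrative: the fields "diagnosis_group" and "pending_disability_assessment" are hypothetical and are not drawn from NAV's actual case data or rules.

```python
from dataclasses import dataclass

@dataclass
class SickLeaveCase:
    case_id: str
    diagnosis_group: str               # hypothetical field, e.g. "pregnancy_related"
    pending_disability_assessment: bool
    predicted_absence_weeks: float

def needs_manual_review(case: SickLeaveCase) -> bool:
    """Return True when the prediction alone should not drive the meeting decision."""
    if case.diagnosis_group == "pregnancy_related":
        # Long absence may be expected even where a meeting is clearly unnecessary.
        return True
    if case.pending_disability_assessment:
        # Absence length here reflects the disability process, not the need for follow-up.
        return True
    return False
```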

Other aspects

The model that has been discussed in the sandbox is a decision-support system. This means that the prediction will be one of several pieces of information that form part of the adviser's evaluation. If a fully automated decision is to be made, a new fairness evaluation must be carried out. At the same time, it is important to remember that humans also discriminate. It is therefore by no means certain that the actual outcome for the data subject will be made fairer by the presence of a person in the loop. Nevertheless, it can be experienced as more intrusive to be treated unfairly by a machine learning model than by an adviser. In addition, any unfair practices exhibited by the model will scale in a completely different way than the current system and lead to systematised unfairness. A new evaluation of the data subject's reasonable expectations of the processing will likely become even more important in a fully automated model. This also applies to auditing and control of the algorithms.

Who has a right to special protection?

The method that has been chosen to evaluate the machine learning model's outcome fairness requires NAV to define which groups should be evaluated against each other. In principle, any number of user groups can be defined from the user base that forms the data basis for training the model. Which groups should be included in a fairness evaluation of the model is a question with several social, historical and societal dimensions. NAV exists for everyone; however, it is neither technically nor practically possible to perform an evaluation for every group identity in Norwegian society. Who has the right to, or a particular need for, protection against biased model outcomes is therefore a key question.

A large part of this question falls more naturally within the realm of the Equality and Anti-Discrimination Act, and as part of the sandbox work, we invited the Equality and Anti-Discrimination Ombud to discuss these issues.

In principle, the groups that NAV utilises, including gender, age and diagnoses, are well founded in the Equality and Anti-Discrimination Act. It is possible that, in addition to the defined groups, compound discrimination will also occur, in which a combination of group identities generates a particularly biased result. There are also other vulnerable groups that it might be useful to include, such as persons with substance dependency, persons with care responsibilities and persons with a low economic status.
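Such combinations can be examined by crossing two or more group attributes and comparing the most serious error rate for each intersection. The sketch below is a minimal, hypothetical illustration: the attribute and column names are assumptions, and the group size is reported alongside the rate because intersections are often small and must be read with care.

```python
import pandas as pd

def intersectional_rates(df: pd.DataFrame, attributes: list[str]) -> pd.DataFrame:
    """Rate of the most serious error per combination of the given attributes."""
    summary = df.groupby(attributes).apply(
        lambda part: pd.Series({
            "missed_meeting_rate": (part["meeting_needed"]
                                    & part["meeting_predicted_unnecessary"]).mean(),
            "n": len(part),   # small intersections should be interpreted cautiously
        })
    )
    return summary.sort_values("missed_meeting_rate", ascending=False)

# Hypothetical usage: cross gender and age band to look for combinations
# that fare markedly worse than either group on its own.
# intersectional_rates(df, ["gender", "age_band"])
```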

A central question in connection with discrimination is whether this type of prediction model differentiates in such a way that it can be called discrimination. Since the specific model being evaluated concerns the length of sickness absence and deals with whether a dialogue meeting should be held or not, this threshold for discrimination will not necessarily be reached. The situation is likely to be different for a model for other types of benefits with greater consequences for the data subject.

The conflict between personal privacy and fairness

In any machine learning model, a tension can arise between the model's mode of operation and several personal privacy principles. In the NAV project, this type of tension arises when NAV must fulfil its obligation to check whether the model functions in a biased or discriminatory manner. In principle, personal data must be processed both to uncover and to correct outcome bias. Admittedly, bias in the model's outcomes can be uncovered regardless of whether group membership is an input to the model; to carry out the evaluation of the model's outcomes, however, group membership must be used. Finally, it may be possible to comply with other fairness requirements without this type of processing of personal data. These questions are key for developers of responsible AI, and the EU's proposal for new AI legislation touches on them.

Read the European Commission's proposal for a new regulation on artificial intelligence, Article 10 no. 5. Extract from EUR-Lex - 52021PC0206 - EN - EUR-Lex (europa.eu).

NAV's services must be accessible to the entire population, and NAV must therefore navigate the tension between personal privacy and biased outcomes in each model that is developed. In addition, there is a major overlap between groups that personal privacy regulations define as vulnerable and groups that are covered by the Equality and Anti-Discrimination Act.

When considering the fairness of the model, there is, from a personal privacy standpoint, a difference between utilising information that is already part of the model and utilising new information that is not otherwise used in the model but is incorporated in the analysis in order to check for discriminatory outcomes. A tension arises between privacy protection and fairness when the method for uncovering and combating discrimination involves complex processing of special categories of personal data. Information that is already included in the algorithm forms part of the decision-making basis in the follow-up of sickness absence. Entirely new information, on the other hand, requires a new assessment of lawfulness. In addition, the data subject likely has a reasonable expectation that information that is irrelevant to the evaluation of whether a dialogue meeting should be held will not be utilised in the model. One can envisage that the use of anonymised or synthetic data could offer a way of uncovering outcome bias while at the same time safeguarding personal privacy. Fully anonymised data is not considered personal data, and personal privacy legislation therefore does not apply. However, this is something we have not discussed extensively in the sandbox.
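As an illustration of how new information can be brought in only for the check itself, the sketch below keeps a group attribute entirely out of the model's input and joins it onto the outcomes solely for the fairness evaluation. The table layout, column names and the join key "person_id" are hypothetical assumptions, not a description of NAV's systems.

```python
import pandas as pd

def evaluate_with_held_out_attribute(
    predictions: pd.DataFrame,   # columns: person_id, meeting_needed, meeting_predicted_unnecessary
    sensitive: pd.DataFrame,     # columns: person_id, group  (never used when training the model)
) -> pd.DataFrame:
    """Join the group attribute onto the outcomes for the fairness check only."""
    joined = predictions.merge(sensitive, on="person_id", how="inner")
    rates = (
        joined.groupby("group")
        .apply(lambda part: (part["meeting_needed"]
                             & part["meeting_predicted_unnecessary"]).mean())
        .rename("missed_meeting_rate")
    )
    return rates.to_frame()
```

A design of this kind keeps the sensitive attribute out of the training pipeline altogether, and the joined table would exist only for the duration of the evaluation. Whether that is sufficient from a legal standpoint is precisely the kind of question discussed above, and anonymised or synthetic data could reduce the tension further.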

There is not necessarily a definitive answer to the question of the conflict between personal privacy and fairness in a machine learning model. It is nevertheless a central part of the discussion about, and the work towards, responsible artificial intelligence.