NAV - exit report

This sandbox project addresses NAV’s development of an AI tool to predict the course of sick leave at an individual level. NAV joined the sandbox in the spring of 2021, and the project was completed during the fall. Here is the exit report.


NAV wishes to use machine learning to predict which users on sick leave will require follow-up two months in the future. This will help advisers to make more accurate assessments, which in turn will help NAV, employers and people on sick leave to avoid unnecessary meetings. The objective of this sandbox project was to clarify the lawfulness of using artificial intelligence (AI) in this context, and to research how profiling persons on sick leave can be performed in a fair and transparent manner.


  1. Lawfulness. NAV has a legal basis for using AI as support in making decisions about an individual’s need for follow-up and dialogue meetings. There is uncertainty about whether the legal basis permits the use of personal information to develop the algorithm itself.
  2. Fairness. There is an important difference between using information that is already part of the model and using additional information, not included in the model, to check for discriminatory outcomes. A conflict arises between privacy protection and fairness when the method for revealing and combating discrimination requires additional processing of personal information.
  3. Transparency. For the model to provide the desired value, it is essential that NAV’s advisers trust the algorithm. Insight into and an understanding of how the model works are important so that advisers can evaluate a prediction independently and with confidence, regardless of whether they ultimately follow its recommendation.

Going forward

The work on NAV’s prediction model for sickness absence has highlighted a major challenge for public authorities seeking to use artificial intelligence: the laws that permit the processing of personal information are seldom formulated in a way that allows that information to be used for machine learning in the development of AI. It is important that legislators facilitate the future development of AI in the public sector within a responsible framework.

If NAV is to develop the model further, it will need a clear and explicit supplementary legal basis, founded in legislation. A legislative process, with the associated consultations and reports, will help ensure a democratic foundation for the development and use of artificial intelligence in public administration.

NAV’s systematic work on developing a model that meets the requirements for fairness and explainability shows that public sector organisations can serve as driving forces for responsible development in the field of AI.