NAV

NAV will use artificial intelligence to predict the length of sick leave for people on sick leave. The purpose is a more user-friendly and efficient follow-up of people on sick leave, by avoiding unnecessary meetings. By being able to predict who will and will not be back at work at a given time, NAV can concentrate its efforts where they are needed most.

The law requires NAV to hold dialogue meetings with the person on sick leave, their doctor and their employer by week 26. Today, NAV's supervisors must already in week 17 assess whether such a meeting will be needed, in practice trying to predict whether the person will be back at work nine weeks later. The project's hypothesis is that too many unnecessary dialogue meetings are held, taking time from everyone involved, and that the AI-based system can give supervisors good support when they decide whether a meeting is needed.

Accountability

As a major public actor, NAV is aware of its special responsibility and wants to explore and use AI in a responsible manner. The use must not only be legal, but also aligned with society's ethical norms, meet the individual with respect and dignity, and, not least, build trust. Through its participation in the sandbox, NAV wants to present concrete assessments of legality, fairness and explainability. It hopes to be able to put the AI solution to use more quickly, and with greater confidence that its assessments and practice are in line with both the regulations and good practice.

In the final phase

NAV's project is nearing completion, and it will be interesting to see whether their interpretation of the law, when they want to use artificial intelligence on sensitive personal data, holds up. The personal data they will feed the model includes diagnosis, degree of sick leave, place of residence, occupation, age, and which doctor signed the sick note.
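
To make the task concrete, the sketch below shows how this kind of prediction could be framed as a binary classification problem. The column names, data file and model choice are hypothetical stand-ins for the categories of personal data mentioned above; this is not NAV's actual model or pipeline.

```python
# Minimal sketch: predicting, at week 17, whether a person on sick leave
# will be back at work by week 26. Column names, file and model choice are
# hypothetical; this is not NAV's actual system.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("sick_leave_cases.csv")  # hypothetical training data

categorical = ["diagnosis", "occupation", "residence_region", "doctor_id"]
numeric = ["age", "sick_leave_degree"]  # degree of sick leave in percent

X = df[categorical + numeric]
y = df["back_at_work_by_week_26"]  # 1 if back at work by week 26, else 0

preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough",  # numeric columns pass through unchanged
)

model = Pipeline([
    ("prep", preprocess),
    ("clf", GradientBoostingClassifier()),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```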

- It will be exciting to join the sandbox and contribute so that others may also learn a little from what we do, says Lars Sutterud, data scientist at NAV.

Updates

May 2021:

The NAV project has prepared a project plan that follows three tracks: legal basis for processing, fairness and explainability. First and foremost, they must clarify whether a clear legal basis is needed for the method itself. Does NAV need an explicit legal basis for making predictions and/or using machine learning? Does it make a difference whether they implement an expert-based rule engine that provides recommendations, or whether the recommendations are based on data and machine learning?
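
The contrast in that last question can be illustrated roughly as below. The rule, the features and the threshold are invented for illustration and say nothing about what NAV has actually built.

```python
# Illustration only: an expert-written rule versus a learned model giving the
# same kind of recommendation. The rule, features and threshold are invented.

def rule_based_recommendation(case: dict) -> bool:
    """Hand-coded expert rule: recommend a dialogue meeting if the person
    is still fully on sick leave at week 17."""
    return case["sick_leave_degree"] >= 100

def ml_based_recommendation(model, features, threshold: float = 0.5) -> bool:
    """Learned model: recommend a meeting if the predicted probability of
    NOT being back at work by week 26 exceeds the threshold.
    Assumes labels 1 = back at work, 0 = not back, so column 0 of
    predict_proba is the probability of not being back."""
    prob_not_back = model.predict_proba([features])[0][0]
    return prob_not_back > threshold
```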

In the next phase of the sandbox project, NAV's assessment of fairness for the machine learning model will be examined. Has NAV chosen good and correct fairness goals for the outcome, and are they in accordance with legal requirements? Who is particularly entitled to protection, and how can discrimination against them be assessed in practice? For example, is it advisable to use special categories of data to evaluate discrimination against these groups?
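
As an illustration of what checking one such fairness goal could look like in practice, the sketch below compares the model's positive-prediction rate across groups of a protected attribute. The attribute (crude age bands), the metric (demographic parity gap) and the variable names are chosen for illustration and are not taken from NAV's assessment.

```python
# Sketch: comparing the model's positive-prediction rate across groups of a
# protected attribute (here crude age bands, chosen purely for illustration).
import pandas as pd

def demographic_parity_gap(predictions: pd.Series, group: pd.Series) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = predictions.groupby(group).mean()
    return float(rates.max() - rates.min())

# Hypothetical usage, with model and X_test as in the earlier sketch:
# preds = pd.Series(model.predict(X_test), index=X_test.index)
# gap = demographic_parity_gap(preds, X_test["age"] // 10)  # 10-year age bands
# print("demographic parity gap:", gap)
```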

Finally, explainability is in focus. The questions there include how to design explanations for different users so as to support sound case handling at both the individual case level and the system level. What does a good system-level explanation look like?
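
The distinction between system-level and individual-level explanations can be sketched as below, using permutation importance as a global (system-level) view and per-case feature contributions as the local (case-level) view. These techniques are generic examples, not NAV's chosen methods, and the sketch assumes the hypothetical model, X_test and y_test from the earlier classification sketch.

```python
# Sketch: one global (system-level) and one local (case-level) explanation.
# Generic example techniques, not NAV's chosen approach; model, X_test and
# y_test are the hypothetical objects from the earlier sketch.
from sklearn.inspection import permutation_importance

# System level: which input columns matter most for the model overall?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in zip(X_test.columns, result.importances_mean):
    print(f"{name}: {importance:.3f}")

# Case level: why did the model give this recommendation for one person?
# For a linear model, contributions can be read off as coefficient * feature
# value; for other model types, tools such as SHAP give analogous local
# explanations per case.
```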