Ruter, exit report: On track with artificial intelligence

Ruter has participated in the Norwegian Data Protection Authority's sandbox for responsible artificial intelligence in connection with its plans to use artificial intelligence in the Ruter app. In the sandbox project, the NDPA and Ruter have discussed how Ruter can be transparent about the processing of personal data that will take place in this solution. A particularly interesting issue is how clearly the purposes of the processing must be delineated in advance, given that a key strength of artificial intelligence is its ability to discover new connections and possibilities.

Summary

The public transport undertaking Ruter wants to use artificial intelligence (AI) to generate personalised travel suggestions for customers in the Ruter app. The desired effect is to increase the use of public transport, micromobility, cycling and walking, which in turn may contribute to achieving climate and environmental goals, including zero growth in private vehicle use.

To remain an attractive provider of mobility services in an increasingly competitive market, Ruter will need to develop the service further. At the same time, Ruter relies on the trust of the general population. Its goal is therefore that any use of customers' personal data by AI must be responsible and fair.

As its digital products become more personalised, Ruter needs to provide clear, understandable and user-friendly information about its services. This sandbox project explores how further development can take place in a way that ensures transparency and trust in the development and use of AI.

Conclusions

  • Transparency during the development phase: For Ruter, it is crucial that customers are confident that their personal data will be processed in a responsible manner. Customer willingness to share personal data is a prerequisite for the development of the AI service. For consent to be valid, customers must also have an insight into and understanding of what they are agreeing to, as well as the option of being able to withdraw their consent. Ruter must therefore ensure they provide adequate information, including about how the AI service arrives at its travel suggestions, without this being overly complicated to understand. In Ruter's service, layered information is a good solution for safeguarding both considerations.
  • Purpose limitation: Ruter wishes to further develop both the AI service in particular and other services in general by using personal data collected through customers’ use of the Ruter app. Ruter needs to clearly define the original purpose for which the data is collected, as well as any separate purposes, already known at the time of collection, for which they will want to use the personal data. If Ruter subsequently sees a need to use the personal data for new, unforeseen purposes, they will have to assess whether the new purposes are compatible with the original one.
  • Transparency during the usage phase: When the AI service is rolled out, good, layered information is also crucial in order to safeguard customer trust and ensure that consent is valid. The data subjects must be informed of each specific purpose before the processing of personal data takes place, and consent must be obtained for each new purpose. Transparency will be important for trust in connection with the use of the AI service.

Going forward

The discussions in the sandbox project have contributed towards more clearly defining the requirements for transparency which Ruter has to follow when developing and using AI. The assessments in this report are also relevant to other developers that wish to ensure transparency in their AI solutions.

Ruter has seen that the discussions concerning transparency, purpose and responsibility in the sandbox project are also relevant to other projects they are working on. They therefore wish to ensure that this knowledge is transferred to other parts of their operations. They will continue to explore the use of AI where they consider that it can improve the customer experience or enable the service to operate more efficiently. The experiences from the sandbox project relating to transparency, purpose limitation and the obligation to provide information leave the company better equipped to ensure that these services are developed in accordance with the regulations and with customers' rights and expectations.

Read more about the road ahead in the final chapter.

What is the sandbox?

In the sandbox, participants and the Norwegian Data Protection Authority jointly explore issues relating to the protection of personal data in order to help ensure the service or product in question complies with the regulations and effectively safeguards individuals’ data privacy.

The Norwegian Data Protection Authority offers guidance in dialogue with the participants. The conclusions drawn from the projects do not constitute binding decisions or prior approval. Participants are at liberty to decide whether to follow the advice they are given.

The sandbox is a useful method for exploring issues where there are few legal precedents, and we hope the conclusions and assessments in this report can be of assistance for others addressing similar issues.