How should we structure the sandbox?

To make the sandbox for artificial intelligence as effective and relevant as possible, we collected feedback from a wide range of organizations and networks. Below is a summary of what these external organizations told us.

This feedback was collected through bilateral meetings, dialogue and workshops. Anyone could register for these meetings, and it was also possible to submit feedback via e-mail. Overall, we have communicated with an estimated 60 organizations and networks, from both the private and the public sectors.

We learned a great deal from this round of consultations, and we have summarized the feedback we received into three themed sections:

  • Which types of topics are relevant?
  • How should we structure the sandbox to make sure it is effective and inclusive?
  • Which types of “output” or results would be useful for others working on artificial intelligence (AI)?

We plan to actively apply the feedback in these areas when we prepare the framework for the sandbox. This includes the application process, project methods, and communication during and after a sandbox project.

Who provided feedback?

We reached out to a wide range of organizations, from small start-ups to major commercial concerns, from research projects and academia to public agencies.

Below is a list of the organizations we consulted with:

  • City of Oslo, Education Agency
  • Fürst Med. Lab. AS
  • City of Bergen
  • Fremtind
  • Norwegian Research Center for Computers and Law, University of Oslo
  • Hastings AS
  • Equality and Anti-Discrimination Ombud
  • Schibsted
  • KS — Norwegian Association of Local and Regional Authorities
  • NAV — Labour and Welfare Directorate
  • Norwegian Cognitive Center
  • Norwegian System of Patient Injury Compensation
  • Bolder Technologies AS
  • Goscore AS
  • National Criminal Investigation Service (Kripos)
  • Innovation Norway
  • Norwegian Police Shared Services
  • Norwegian Digitalisation Agency
  • Norwegian Directorate of Health
  • Brønnøysund Register Centre
  • Avinor
  • Norwegian Board of Technology
  • Norwegian Bar Association
  • Norwegian Consumer Council
  • Salient World AS
  • Centre for the Science of Learning & Technology (SLATE) — University of Bergen
  • Advokatfirmaet Wiersholm AS
  • Department of Informatics — University of Oslo
  • IOTA Foundation
  • Oslo Police District
  • Norwegian Association of Lawyers’ Tech Forum (JF-Tech)
  • Financial Supervisory Authority of Norway
  • Finance Sector Union of Norway
  • ICT Norway
  • Confederation of Vocational Unions (YS)
  • ICO, UK
  • The Norwegian Society of Engineers and Technologists (Nito)
  • National Archives Services of Norway
  • Negotia
  • Brækhus Advokatfirma AS
  • SINTEF Digital and SINTEF Health and Well-Being
  • Norwegian Computer Society
  • Telenor
  • Oslo Metropolitan University
  • Oslo University Hospital, Ullevål
  • Norwegian Directorate of eHealth
  • Simula
  • Norwegian Artificial Intelligence Research (NORA)
  • The NORDE network
  • DNB

Which topics should the sandbox highlight?

To ensure we select projects with relevance beyond the individual organization, it has been useful to find out which topics stakeholders in the industry are focused on. The sandbox is well suited for highlighting problems in areas where there is uncertainty concerning how to interpret and apply relevant regulations. Through sandbox projects, we have the opportunity to showcase real-world examples of how to interpret the regulations, and which considerations should underpin decision-making in various projects.

Some topics that were brought up during the consultation round:

  • Explainability

How can we ensure that the customer or user actually understands what happens to their personal data when it is collected and used in AI-based systems? And how do we explain why the outcome is what it is, in an open and transparent way that also complies with the Personal Data Act?
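
To illustrate what such an explanation can look like at the simplest end of the scale, here is a minimal sketch of our own (not taken from the feedback): for a linear model, a prediction can be decomposed into per-feature contributions. The feature names and numbers are fabricated, and real AI systems are typically far harder to explain, which is precisely the challenge raised here.

```python
# Minimal sketch: per-feature contributions for a linear model.
# Assumes scikit-learn and NumPy; all data and feature names are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[35, 52], [22, 18], [58, 91], [41, 40]], dtype=float)  # age, income (thousands)
y = np.array([1, 0, 1, 0])  # hypothetical outcome, e.g. "application approved"

model = LogisticRegression().fit(X, y)

applicant = np.array([30.0, 45.0])
# For a linear model, each feature's contribution to the decision score
# is simply its coefficient multiplied by its value.
contributions = model.coef_[0] * applicant
for name, value in zip(["age", "income"], contributions):
    print(f"{name}: {value:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```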

  • Access to data

In order for AI to work efficiently and yield good results, we need access to many high-quality data sets. This is true for both the public and the private sector. Different perspectives may be relevant, such as data sharing between multiple organizations or making national data sets available.

  • Fairness principle

Perhaps the sandbox can help clarify borderline cases, e.g. in connection with new technology, particularly regarding the principles of fairness and lawfulness. How far does the fairness principle in the Personal Data Act extend?

  • Distinction between the development and use of AI

Whether an activity is classified as use or development of AI determines what the organization is allowed to do with the data. In artificial intelligence, the line between development and use is blurred. Several stakeholders expressed a wish to see examples of where this line could be drawn in practice.

  • Consent

AI opens up new opportunities, and new ways of using data may have major benefits. How can consent be used to establish a lawful basis for the reuse of data? Is it possible to grant consent retroactively? And can consent given in the nineties still be relied on today, now that new technologies create new possibilities?

  • Anonymization

If data is completely anonymized, the requirements of the Personal Data Act do not apply. But when is data anonymous? How should these assessments be made? And when is pseudonymization sufficient, and when is it not?
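
The distinction matters because pseudonymized data is still personal data under the GDPR. A minimal sketch of our own (not from the feedback, with a fabricated ID number and a hypothetical key-management scheme) shows one common pseudonymization technique, a keyed hash:

```python
# Minimal sketch: pseudonymization with a keyed hash (HMAC-SHA256).
# The national ID number and the key handling are fabricated for the example.
import hashlib
import hmac

SECRET_KEY = b"store-this-key-separately-from-the-data"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"national_id": "01019912345", "diagnosis": "J45"}
record["national_id"] = pseudonymize(record["national_id"])
print(record)
# Anyone holding SECRET_KEY can re-compute pseudonyms for candidate IDs and
# re-identify individuals, so this is pseudonymization, not anonymization.
```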

  • Purpose limitation

At the start of an AI project, we often do not know which types of data we are going to need. When a purpose must be defined in advance, this may limit the possibilities and potential value of AI technology. Could a solution be to adopt broader definitions of purpose?

  • Discrimination/bias

We have seen several examples of AI yielding results that have discriminated on the basis of gender, race or where someone lives. How do we design solutions that eliminate discrimination and ensure fair treatment?
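
One prerequisite for designing such solutions is being able to measure the discrimination in the first place. As a minimal sketch of our own (fabricated numbers, not from the feedback), the snippet below computes the difference in positive-outcome rates between two groups, a simple demographic-parity check:

```python
# Minimal sketch: demographic parity difference on fabricated decisions.
from collections import defaultdict

# (group, model_decision) pairs; the data is made up for the example
decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, positives = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    positives[group] += decision

rates = {group: positives[group] / totals[group] for group in totals}
print(rates)                         # positive-outcome rate per group
print(abs(rates["A"] - rates["B"]))  # demographic parity difference
```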

  • Access control and monitoring

The use of AI may contribute to a more efficient use of resources in access control and monitoring. How can we achieve this in a way that protects individual privacy?

  • Health

In the health sector, there is a vast potential for solutions that could contribute to more efficient and better health services. Several stakeholders brought up concerns related to health, including consent, access to data, data sharing, and the use of sensitive data.

  • Principle of necessity

The use of AI challenges the principle of necessity, because new ways of analysing and using data become possible. Which considerations should be taken into account in determining whether it is necessary to use data to fulfil a purpose? And where do we draw the line for what is expected processing?

  • Data minimization

In the use of AI, more data often yields more accurate results and less bias. How do we balance this against the data minimization principle, and what do we do with large amounts of data that we do not yet know whether we will use?
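
One practical way to approach minimization, sketched below with fabricated data and hypothetical feature roles (our own illustration, not from the feedback), is to check whether a candidate data field actually improves the model before deciding to collect it:

```python
# Minimal sketch: does a candidate feature earn its place in the data set?
# Assumes scikit-learn and NumPy; data and feature roles are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # columns: [needed_1, needed_2, candidate]
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # outcome depends only on the first two

for cols, label in [([0, 1], "without candidate"), ([0, 1, 2], "with candidate")]:
    score = cross_val_score(LogisticRegression(), X[:, cols], y, cv=5).mean()
    print(f"{label}: accuracy {score:.2f}")
# If the scores are indistinguishable, the candidate field can be left out.
```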

This list is not exhaustive, but highlights themes that recurred across the feedback we received.

How should we structure the sandbox?

The Data Protection Authority wants the sandbox to be available to all types of organizations working with or interested in AI. It is therefore important that we develop a process that is accessible and does not exclude any type of organization. The feedback we received helps us build a sandbox that is as efficient and unbureaucratic as possible. We also received a lot of feedback on creative ways to get as much as possible out of each individual project.

Below is a selection of comments on how best to organize the sandbox:

  • Represent different stakeholders

The selection of projects should be organized so as to ensure that private companies, large listed companies, academia and public agencies are all represented.

  • International collaboration

The conclusions from the sandbox should be harmonized with how the regulations are interpreted and practised in other European countries.

  • Flexible definition of AI

The Data Protection Authority should adopt a broad definition of AI. Instead of applying a narrow definition of AI, the Data Protection Authority should select projects involving issues that are relevant for AI. For example, projects focused on autonomy may include relevant issues, even if they do not use AI directly. 

  • Transparency

The Data Protection Authority should encourage all organizations involved to be generous when it comes to sharing knowledge and information. Examples and assessments from sandbox projects may be useful and instructive for other organizations facing similar problems.

  • Informal inspection

An informal inspection of a solution could be a useful way to highlight what the Data Protection Authority would emphasize when assessing whether the solution is compliant with relevant regulations.

  • Interdisciplinary approach

Problems involving AI are complex and multi-faceted. An interdisciplinary approach is essential, and it should be possible to involve technologists, lawyers, social scientists, philosophers, etc., in the sandbox projects.

  • User involvement

It is important to take the user perspective into account in the design of good products and services. Could the sandbox involve users, for example through focus groups or user forums, to clarify what their reasonable expectations are?

  • Efficient application process

Small and medium-sized organizations have limited resources. To get these types of organizations involved, the application process should be quick, undemanding and unbureaucratic.

  • Involve external stakeholders

Where relevant, sandbox projects should involve external stakeholders. These could be other supervisory authorities or public agencies, subject-matter experts, or networks with expertise relevant to the project.

  • Flexible project methods

There should be a flexible approach to project execution, where the applicants themselves can help assess what would work best for their project. Different organizations have different needs and focus on different things.

This list is not exhaustive, but highlights themes that recurred across the feedback we received.

What kind of guidance should the sandbox generate?

A central objective for the sandbox is to ensure that experiences and examples from sandbox projects benefit others as well. The Data Protection Authority is planning to convert what we learn in the sandbox into guidelines. We asked the external stakeholders we contacted which types of guidelines would be most relevant for them.

Below is a selection of the feedback we received concerning the type of guidelines that could be generated by the sandbox:

  • Build networks and forums

The sandbox should help strengthen networks between organizations working on the same types of issues, so that participants can learn from each other. Organizing events, recording podcasts and the like can facilitate continued debate; a healthy debate involving many parties promotes learning.

  • Presentation of results on a joint website

A regularly updated website with conclusions, assessments and examples from the sandbox would be a good idea, so that others can read more about the topics discussed. The Norwegian Institute of Public Health’s website on the Smittestopp app was mentioned as an example.

  • Practical problems need practical solutions

The focus should be on the practical challenges organizations face when working on a specific problem. It is important that the outcome is not just a general guide, but includes specific examples of how to interpret the regulations in practice.

  • Public good

Select projects with a potential for public good, not financial profit. Projects should address issues that may benefit all, and not just promote corporate growth.

  • Good examples

The sandbox should provide good examples of how to interpret the regulations in practice. We need translations from legal language to technical language, but we also need a translation from legal to legal: clear descriptions of what is permissible and what is right.

  • Framework or standard for AI

If we could establish a standardized framework for how to process personal data with the help of AI, that would be immensely useful.

  • Ethics

There is some uncertainty concerning the role of ethics in AI and how to interpret this in practice. Participants suggested creating a library of ethical dilemmas or an ethics checklist as part of the sandbox.

  • Failed projects

It would be a good idea to share examples where organizations tried various solutions that failed to satisfy regulatory requirements.

This list is not exhaustive, but highlights themes that recurred across the feedback we received.