Framework for the Regulatory Sandbox

Below, we have outlined a framework for our sandbox, where we go over objectives, regulations, requirements and other information potential participants may find relevant.

Objectives and definitions

The regulatory sandbox will provide free guidance to selected private and public organizations of different types and sizes and from different sectors.

This framework is also available in Norwegian: Les rammeverket på norsk

The Data Protection Authority would like the sandbox to represent a broad spectrum of organizations, from small start-ups to large public enterprises. All types of enterprises and organizations are therefore encouraged to apply.

More information about the application process is still to come, but there will be multiple application rounds as participating projects are completed. The number of projects accepted will depend on our capacity. 

Overall objective

The overall objective of the Data Protection Authority’s regulatory sandbox is to promote the development and implementation of ethical and responsible artificial intelligence (AI) from a privacy perspective.

The goal is for the sandbox to produce benefits for organizations, the Data Protection Authority, individuals and society in general:

  • For organizations, the regulatory sandbox aims to promote greater understanding of regulatory requirements and how AI-based products and services can meet the requirements imposed by data protection regulations in practice. Examples and experiences from the sandbox will be communicated to the wider public, for the benefit of non-participating organizations as well.
  • For the Data Protection Authority, the regulatory sandbox aims to increase our understanding and knowledge of the practical applications of artificial intelligence. We will use this knowledge to strengthen the Data Protection Authority’s advice, administrative processes, supervisory methods and recommendations to legislators and policy-makers in matters involving AI and privacy.
  • Individuals and society in general will benefit from the development and implementation of AI-based solutions within a framework that emphasizes accountability and transparency and that takes into account the individual’s fundamental rights. This builds a foundation for the development of services customers and inhabitants can trust.

What is artificial intelligence?

The sandbox applies the definition of artificial intelligence used in the National Strategy for Artificial Intelligence (regjeringen.no):

Artificial intelligence systems perform actions, physically or digitally, based on interpreting and processing structured or unstructured data, to achieve a given goal. Some AI systems can adapt their behaviour by analysing how the environment is affected by their previous actions.

- National Strategy for Artificial Intelligence

This definition includes artificial intelligence as applied in a wide range of disciplines:

  • machine learning, e.g. deep learning and reinforcement learning
  • machine reasoning, including planning, search and optimization
  • certain methodologies in robotics, such as control, sensors, and integration with other technologies in cyber-physical systems

The sandbox will include both projects seeking to develop new AI solutions and projects involving the use of existing AI solutions.

When is artificial intelligence responsible?

The sandbox operates with three main principles for responsible artificial intelligence:

  1. Lawful — respecting all applicable laws and regulations
  2. Ethical — respecting ethical principles and values
  3. Robust — from a technical perspective while also taking into account its social environment

These main principles are based on the “Ethics guidelines for trustworthy AI” which were prepared by an expert group appointed by the European Commission.
Read the full text of these guidelines on the European Commission website (ec.europa.eu)

With respect to the principle of artificial intelligence being lawful, the sandbox will focus on relevant data protection legislation. Read more about this in the next chapter “What are the relevant regulations?”


As for the ethical principle, the Data Protection Authority has reflected on some requirements included in relevant data protection regulations. This includes fairness, which is both an ethical principle and a principle for the processing of personal data, specified in Article 5 of the General Data Protection Regulation. Requirements for artificial intelligence to be transparent and explainable also follow from both Article 5 of the General Data Protection Regulation and from ethical principles for artificial intelligence.

Requiring decisions based on artificial intelligence to be traceable, explainable and transparent means that it must be possible for the person concerned to gain insight into why a specific decision was made. Traceability makes both audits and explanations possible. Transparency can, among other things, be achieved by providing information about the process to the person concerned. Transparency also means computer systems should not pretend to be human — people have the right to know that they are interacting with an AI system.

Ethical considerations are often a central part of the deliberations that go into interpreting regulations, and sometimes go beyond the regulations. Even if something is permissible under relevant law, organizations should ask themselves whether it is also ethical. One example is the use of data concerning insurance customers. Today, it is possible to gather information about insurance customers that is detailed enough for a company, with the help of AI, to give every person a quote based on the lifestyle that exact person leads. This is an example of behaviour-based pricing. The legal framework for how far a company may go in segmenting its customers is unclear, and this is where ethics come in: How far should a company or industry go? When does insurance cease being a collective arrangement in which 1,000 individuals pay NOK 100 each to make sure the person who needs an operation gets one?

The European Union has initiated a legislative process on artificial intelligence (europarl.europa.eu), which includes an ethical framework for artificial intelligence. The regulatory sandbox will monitor these developments carefully.


The third main principle for responsible artificial intelligence is, as mentioned above, that it needs to be robust. This means that the artificial intelligence must be based on systems with technically robust solutions, to prevent risk and contribute to the systems working as intended. The risk of unintended and unexpected consequences should be minimized. Technical robustness is also important for the accuracy, reliability and verifiability of these systems.

Responsible and trustworthy artificial intelligence is discussed in more detail in Chapter 5 of the National Strategy for Artificial Intelligence (regjeringen.no).

You can also read more about this topic in the ICDPPC's Declaration on Ethics and Data Protection in Artificial Intelligence (edps.europa.eu), available as a downloadable PDF, and in the report from the EU expert group mentioned above.

What are the relevant regulations?

The Personal Data Act and the General Data Protection Regulation constitute the statutory foundation for activities taking place in the sandbox.

Other data protection regulations over which the Data Protection Authority has supervisory authority and on which the Authority can advise in the regulatory sandbox include the Police Databases Act, the Personal Health Data Filing System Act, the Health Research Act, the Health Records Act and regulations pursuant to the Working Environment Act concerning video monitoring and access to e-mails.
Read more on our page on laws and regulations

When necessary, the Data Protection Authority can work with other authorities to provide recommendations on adjoining regulations. For example, public enterprises must comply with requirements laid down in the Archives Act, Public Administration Act, and the Freedom of Information Act — to name a few.

The sandbox cannot grant exemptions from regulations. The Data Protection Authority has no intention of initiating corrective measures during an organization’s participation in the sandbox. The focus will be on helping participants comply with existing regulations.

What happens in the sandbox?

Project participants will receive advice and guidance from an interdisciplinary team from the Data Protection Authority, to ensure that the service or product is in compliance with relevant regulations and adequately takes privacy into account. The sandbox is open to any and all topics that highlight the use of personal data in artificial intelligence. 

The duration of sandbox participation will vary from project to project, but we believe a project period of 3 to 6 months in the sandbox is appropriate.  

Each organization will, in collaboration with the Data Protection Authority, draw up an individual plan, describing the need for guidance, how this guidance can be prepared and the form it may take.

Our contribution will therefore be tailored to each individual project’s needs — in terms of both scope and activities.

Below are some examples of sandbox activities we can offer:

  • Assist in the performance of a data protection impact assessment (DPIA).
  • Contribute to the identification of data protection challenges.
  • Provide feedback on relevant technical and legal solutions to data protection challenges.
  • Explore options for the implementation of privacy by design.
  • Conduct an informal inspection to highlight relevant requirements.
  • Contribute input to various assessments and considerations of the balance between necessity and potential adverse effects on user privacy.
  • Provide an arena for knowledge exchange and network-building for
    • other sandbox participants,
    • external experts, and
    • other authorities.
  • Share preliminary and final sandbox experiences.

Which topics do we want to highlight?

The Data Protection Authority wants to highlight topics that may be relevant for many. It is particularly interesting to highlight problems in areas where there is uncertainty concerning how to interpret and apply relevant regulations.

Examples of topics the sandbox can help address include:

  • innovative use of personal data with the help of technology that combines artificial intelligence with other technology, such as biometrics, the Internet of Things, portable technology or cloud-based products,
  • complex data-sharing,
  • building a good user experience and trust by providing transparency and explainability,
  • how to avoid discrimination or bias,
  • perceived limitations, or insufficient understanding of the General Data Protection Regulation’s provisions on automated decision-making, and
  • utilization of existing data (often to scale and to connect data) for new purposes.

What if something unforeseen happens?

If there is a need for changes in the project, we can jointly revise the project plan or pause the project for a period. If you no longer need the clarification provided by the regulatory sandbox, you can withdraw from the project at any time.

Aborted projects can also provide learning for others, and the Data Protection Authority will in principle also publish experiences from terminated projects.

In special cases, the Data Protection Authority may itself choose to terminate projects in the sandbox.

What does the Data Protection Authority contribute?

Data Protection Authority staff will provide guidance to sandbox participants. The Data Protection Authority’s project group will include lawyers, technologists, social scientists, and communication consultants — depending on the needs of each individual participant.

If guidance on adjoining regulations is needed, the Data Protection Authority will collaborate with other relevant supervisory and administrative authorities. It may also be relevant to bring in additional external resources specializing in artificial intelligence or privacy.

You are free to choose whether you want to follow the advice you receive or want to ask for an assessment from others.

The Data Protection Authority will not provide a testing platform or other technical infrastructure, and participants will receive no financial compensation.

General participation criteria

There are some general criteria all sandbox participants must meet. In addition, selection will be based on whether the issues addressed by the project could be relevant to others, and on including a wide range of organizations.

Sandbox projects must:

  1. make use of artificial intelligence or otherwise involve artificial intelligence. Both projects developing new AI and projects using existing solutions based on artificial intelligence are eligible for participation in the sandbox. The development of frameworks or policies for the use of AI is also a relevant sandbox theme.
  2. benefit individuals or society in general. This includes products or services that provide health benefits or streamline the use of public resources, as well as innovative products with a potential public benefit. In this context, innovation includes technological innovation and innovative new types of services or products. It also includes innovative solutions for the protection of privacy (read more about the term innovation at the bottom of this page).
  3. clearly benefit from participation in the sandbox. This means that the project must involve an issue that is clearly privacy-related, where the type of guidance provided by the Data Protection Authority could be useful. Participating organizations must have a project that is ready for sandbox participation and must allocate sufficient resources for participation in sandbox activities.
  4. be subject to the Norwegian Data Protection Authority as its competent supervisory authority. This means that the organization must be registered in Norway and subject to Norwegian data protection laws. If you are not sure whether your organization meets these criteria, please contact us for more information.

In selecting projects for participation, the Data Protection Authority will attach importance to whether the project highlights relevant issues, or involves the use of technology and personal data, that may be useful for other organizations. Also, as previously mentioned, we want to include a broad selection of participants from both the private and the public sectors, as well as both large and small organizations.

Furthermore, the Data Protection Authority has a particular interest in addressing issues where there is uncertainty concerning how to interpret and implement regulations.

Innovation means to update or create something new to add value to organizations, society or the general public. Innovation is experimental in nature and solutions are not known in advance.

- Norwegian Digitalisation Agency (digdir.no)


The Data Protection Authority will place emphasis on selecting projects where the knowledge we gain through collaboration with companies in the sandbox will result in general information and guidance that benefits the general public.

The Data Protection Authority will continuously publish information from the projects, to share experiences from the process itself, the questions we ask along the way and how we answer them. At the end of each project, we will summarize the experiences in a final report.

We want participants to clarify any marketing of the participation in the sandbox with us in advance.

When the Data Protection Authority communicates externally about the sandbox projects, it will do so in dialogue with the participants. We want to avoid sharing information that could be considered trade secrets.

Confidentiality, etc.

The objective of the sandbox will be to share as much knowledge and experience from each project as may be useful for other organizations and society in general. At the same time, confidentiality and the protection of trade secrets/intellectual property rights must be taken into account.

All Data Protection Authority employees are subject to the duty of confidentiality laid down in Section 13 of the Public Administration Act. This duty may also extend to technical devices and procedures, as well as operational or business matters, that for competition reasons must be kept secret to protect the interests of the party the information concerns. Such information is referred to as confidential information.

Confidential information shared with the sandbox will therefore be exempt from public access, see Section 13 of the Freedom of Information Act.

Participation in the sandbox does not change intellectual property rights. In other words, you retain the rights you had when entering the sandbox collaboration.