Objectives and definitions
The regulatory sandbox will provide free guidance to selected private and public organizations of different types and sizes and from different sectors.
Framework for the Data Protection Authority's regulatory sandbox for artificial intelligence
We have prepared a framework for the sandbox that covers objectives, regulations, requirements and other relevant information for those who wish to participate in the project.
The Data Protection Authority would like the sandbox to represent a broad spectrum of organizations, from small start-ups to large public enterprises. All types of enterprises and organizations are therefore encouraged to apply.
More information about the application process is still to come, but there will be multiple application rounds as participating projects are completed. The number of projects accepted will depend on our capacity.
The overall objective of the Data Protection Authority’s regulatory sandbox is to promote the development and implementation of ethical and responsible artificial intelligence (AI) from a privacy perspective.
The goal is for the sandbox to produce benefits for organizations, the Data Protection Authority, individuals and society in general:
- For organizations, the regulatory sandbox aims to promote greater understanding of regulatory requirements and how AI-based products and services can meet the requirements imposed by data protection regulations in practice. Examples and experiences from the sandbox will be communicated to the wider public, for the benefit of non-participating organizations as well.
- For the Data Protection Authority, the regulatory sandbox aims to increase our understanding and knowledge of the practical applications of artificial intelligence. We will use this knowledge to strengthen the Data Protection Authority’s advice, administrative processes, supervisory methods and recommendations to legislators and policy-makers in matters involving AI and privacy.
- Individuals and society in general will benefit from the development and implementation of AI-based solutions within a framework that emphasizes accountability and transparency and that takes into account the individual’s fundamental rights. This builds a foundation for the development of services customers and inhabitants can trust.
What is artificial intelligence?
The sandbox applies the definition of artificial intelligence used in the National Strategy for Artificial Intelligence (regjeringen.no):
Artificial intelligence systems perform actions, physically or digitally, based on interpreting and processing structured or unstructured data, to achieve a given goal. Some AI systems can adapt their behaviour by analysing how the environment is affected by their previous actions.
This definition includes artificial intelligence as applied in a wide range of disciplines:
- machine learning, e.g. deep learning and reinforcement learning
- machine reasoning, including planning, search and optimization
- certain methodologies in robotics, such as control, sensors, and integration with other technologies in cyber-physical systems.
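As a purely illustrative sketch (not taken from the national strategy), the adaptive behaviour described in the definition above — a system adjusting its actions based on how the environment responded to its previous actions — can be shown with a minimal reinforcement-learning example. The two actions and their reward probabilities below are hypothetical.

```python
import random

# Minimal two-armed bandit: the agent adapts its behaviour by analysing
# the observed effect (reward) of its previous actions.
random.seed(0)

# Hypothetical reward probabilities, unknown to the agent.
true_reward_prob = {"A": 0.3, "B": 0.7}
estimates = {"A": 0.0, "B": 0.0}  # the agent's learned value estimates
counts = {"A": 0, "B": 0}

for step in range(1000):
    # Mostly exploit the action that has worked best so far; sometimes explore.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_reward_prob[action] else 0
    counts[action] += 1
    # Incremental running average: behaviour adapts to environmental feedback.
    estimates[action] += (reward - estimates[action]) / counts[action]

# After learning, the agent should have come to prefer the higher-payoff action.
preferred = max(estimates, key=estimates.get)
```

The loop is the essential mechanism: no rule says "choose B"; the preference emerges only from feedback on earlier actions, which is what distinguishes an adaptive AI system from a fixed, rule-based program.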
The sandbox will include both projects seeking to develop new AI solutions and projects involving the use of existing AI solutions.
When is artificial intelligence responsible?
The sandbox operates with three main principles for responsible artificial intelligence:
- Lawful — respecting all applicable laws and regulations
- Ethical — respecting ethical principles and values
- Robust — from a technical perspective while also taking into account its social environment
These main principles are based on the “Ethics guidelines for trustworthy AI” which were prepared by an expert group appointed by the European Commission.
Read the full text of these guidelines on the European Commission website (ec.europa.eu).
With respect to the principle of artificial intelligence being lawful, the sandbox will focus on relevant data protection legislation. Read more about this in the next chapter “What are the relevant regulations?”
As for the ethical principle, the Data Protection Authority has reflected on some requirements included in relevant data protection regulations. This includes fairness, which is both an ethical principle and a principle for the processing of personal data, specified in Article 5 of the General Data Protection Regulation. Requirements for artificial intelligence to be transparent and explainable also follow both from Article 5 of the General Data Protection Regulation and from ethical principles for artificial intelligence.
Requiring decisions based on artificial intelligence to be traceable, explainable and transparent means that it must be possible for the person concerned to gain insight into why a specific decision was made. Traceability makes both audits and explanations possible. Transparency can, among other things, be achieved by providing information about the process to the person concerned. Transparency also means computer systems should not pretend to be human — people have the right to know that they are interacting with an AI system.
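As an illustration only (not part of the sandbox framework), the traceability described above can be sketched with a simple linear scoring model in which each feature's contribution to the decision is recorded, so the decision can later be audited and explained to the person concerned. All names, weights and values below are hypothetical.

```python
# Hypothetical sketch: a linear scoring decision where every feature's
# contribution is logged, making the outcome traceable and explainable.

def explain_decision(weights, features, threshold):
    """Return the decision together with a per-feature breakdown of why it was made."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "rejected"
    # The audit record supports both review (traceability) and an
    # individual explanation to the person concerned (transparency).
    return {
        "decision": decision,
        "score": score,
        "threshold": threshold,
        "contributions": contributions,
    }

# Made-up applicant data for illustration.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
features = {"income": 4.0, "debt": 2.5, "years_employed": 3.0}
record = explain_decision(weights, features, threshold=0.5)
```

Here `record["contributions"]` answers the "why" question directly (for example, that `debt` pulled the score down by 2.0 while `income` raised it by 2.0), which is the kind of insight a black-box model cannot provide without additional explanation machinery.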
Ethical considerations are often a central part of the deliberations that go into interpreting regulations, and sometimes go beyond the regulations. Even if something is permissible under relevant law, organizations should ask themselves whether it is also ethical. One example is the use of data concerning insurance customers. Today it is possible to gather information about insurance customers that is detailed enough for a company, with the help of AI, to quote each person a premium based on the lifestyle that exact person leads. This is an example of behaviour-based pricing. The legal framework for how far a company may go in segmenting its customers is unclear, and this is where ethics come in: How far should a company or industry go? When does insurance cease to be a collective arrangement in which 1,000 individuals pay NOK 100 each to make sure the person who needs an operation gets one?
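The tension in the insurance example can be made concrete with a small, purely illustrative calculation (all figures are hypothetical): under collective pricing everyone pays the same premium into a shared pool, while behaviour-based pricing scales each premium by an individual risk score, shifting cost onto those the model judges to be high-risk.

```python
# Purely illustrative: collective pricing vs. behaviour-based pricing.
# Collective arrangement: 1,000 members each pay NOK 100, funding a
# shared pool that covers whoever turns out to need the operation.
members = 1_000
flat_premium_nok = 100
pool_nok = members * flat_premium_nok

# Behaviour-based pricing: an AI model assigns each person a risk score
# (1.0 = average risk) and premiums scale with that score instead.
# Hypothetical scores for three members:
risk_scores = {"member_a": 0.5, "member_b": 1.0, "member_c": 3.0}
individual_premiums = {name: flat_premium_nok * score
                       for name, score in risk_scores.items()}
# member_c now pays NOK 300 while member_a pays NOK 50: the cost of
# risk is no longer shared collectively.
```

The arithmetic is trivial, which is the point: nothing in the calculation itself says where segmentation should stop, so the limit is an ethical question before it is a technical one.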
The European Union has initiated a legislative process on artificial intelligence (europarl.europa.eu), which includes an ethical framework for artificial intelligence. The regulatory sandbox will monitor these developments carefully.
The third main principle for responsible artificial intelligence is, as mentioned above, that it needs to be robust. This means that the artificial intelligence must be based on systems with technically robust solutions, to prevent risk and contribute to the systems working as intended. The risk of unintended and unexpected consequences should be minimized. Technical robustness is also important for the accuracy, reliability and verifiability of these systems.
Responsible and trustworthy artificial intelligence is discussed in more detail in Chapter 5 of the National Strategy for Artificial Intelligence (regjeringen.no).
You can also read more about this topic in the ICDPPC's Declaration on Ethics and Data Protection in Artificial Intelligence (edps.europa.eu, available as a downloadable PDF) and in the report from the EU expert group mentioned above.