Doorkeeper, exit report: Intelligent video monitoring with data protection as a primary focus

The enterprise Doorkeeper aims to strengthen data protection in modern video monitoring systems. They want to achieve this by using intelligent video analytics to censor identifying information – such as faces and human shapes – in the video feed. They also want to ensure fewer recordings are saved, compared to more traditional monitoring systems.

Summary

Main points

The purpose of this sandbox project is to explore some of the most relevant regulatory challenges that arise in connection with the use of intelligent video analytics.

  1. Legal basis for intelligent video analytics

    In the sandbox, we have explored legal issues associated with three different areas of application for Doorkeeper’s solution. These issues are transferable to other enterprises that are developing similar solutions. One key point is that intelligent video analytics, which are designed to make the processing of personal data less invasive, may affect the legality of the processing pursuant to data protection legislation.
  2. Alternative solution designs

    Doorkeeper has two alternative designs for the use of intelligent video analytics. The analysis and censoring takes place either inside the camera body or on an external platform. The choice of design impacts how personal data is processed in the solution. In this report, we discuss how this may have consequences for how invasive the monitoring is for the individuals recorded in the video feed.
  3. Data minimisation

    This report demonstrates that it is possible to implement data minimisation measures in intelligent video analytics. Such measures may be implemented by configuring the solution to limit the amount of personal data collected and processed to what is necessary for the purpose. The presence of functions in monitoring systems that make processing of personal data less invasive will also require enterprises in the security industry to assess, to a greater extent, the types of personal data they need to be processing to achieve the purpose of the monitoring.
  4. Disclosure of personal data

    Doorkeeper’s solution is designed to store less personal data that may be disclosed, compared to cameras that make continuous recordings. This could make it easier for enterprises like Doorkeeper to comply with requests for access, as the enterprise will store fewer recordings that may be relevant. The rules for disclosure of personal data, however, will be the same for Doorkeeper as for all other data controllers.
  5. Security issues

    In the sandbox, we have discussed how Doorkeeper must maintain security within the solution. This entails keeping security up to date with technological developments, addressing any vulnerabilities immediately, and deleting data once the purpose has been achieved.

Going forward

In the sandbox, Doorkeeper has explored questions they have had in the development of intelligent video analytics, with potential functionality that enhances data protection. Doorkeeper may use the discussion in this report to better comply with requirements of data protection legislation and ensure better data protection within the solution. The Data Protection Authority also hopes the discussion can be useful for other enterprises developing similar technology.

Through these sandbox activities, the Data Protection Authority has also learned a great deal about the possibilities inherent in intelligent video analytics. We will use this new knowledge to further improve our informational work.

What is the sandbox?

In the sandbox, participants and the Norwegian Data Protection Authority jointly explore issues relating to the protection of personal data in order to help ensure the service or product in question complies with the regulations and effectively safeguards individuals’ data privacy.

The Norwegian Data Protection Authority offers guidance in dialogue with the participants. The conclusions drawn from the projects do not constitute binding decisions or prior approval. Participants are at liberty to decide whether to follow the advice they are given.

The sandbox is a useful method for exploring issues where there are few legal precedents, and we hope the conclusions and assessments in this report can be of assistance for others addressing similar issues.

Introduction

If we imagine video monitoring in a Norwegian supermarket in the 1990s, the camera was large and prominent, and was connected by cable to a video recorder in the back room of the supermarket. Technology has improved considerably since then, with better image resolution, software, storage and wireless connectivity.

At the same time, monitoring solutions have generally become less expensive and more readily available. However, the biggest technological shift came in the 2010s, when artificial intelligence and machine learning could be used to analyse the video feed.

With solutions based on artificial intelligence, anything in the video feed can be analysed automatically. This makes it technically possible to collect vast quantities of data on the persons captured by the camera. At the same time, increased processing power and advances in edge computing have made it possible to run machine learning algorithms in the camera body, which means that data is, to an increasing degree, being analysed before it is ever made available to humans. This changes how personal data is being processed in monitoring systems.

The Data Protection Authority finds that video monitoring is a topic that holds considerable interest for many people. In recent decades, our information service has regularly received enquiries about this topic. Among other things, we receive questions from private individuals who feel they are being watched by video monitoring systems, and from enterprises that have questions about how to implement video monitoring in compliance with the law.

Monitoring of people is an exercise in authority that entails a certain level of infringement of the individual’s right to privacy. As shown in this report, this infringement must be considered in light of the specific purpose in each instance. Solutions that adopt artificial intelligence may be both more and less invasive, compared to older monitoring technology. Many see data protection as a competitive advantage in the market, and the Data Protection Authority considers it positive that more enterprises are moving in this direction.

The presence of video monitoring equipment in itself will generate a feeling of being watched – regardless of whether or not the camera is “intelligent”. We do not discuss the general pervasiveness of video monitoring in this report, since this should be the subject of a wider public debate.

About the project

AS Doorkeeper is a Norwegian start-up, established in 2021. The enterprise develops, manufactures and sells security solutions, such as access control and video monitoring. This sandbox project has focused solely on video monitoring.

What is Doorkeeper’s service?

A primary reason why Doorkeeper was selected as a participant in the sandbox was that they want to use so-called intelligent video analytics to market more privacy-friendly video monitoring. Intelligent video analytics is technology that automatically analyses content from video monitoring cameras.

To achieve better data protection, Doorkeeper wants to use artificial intelligence and machine learning to censor identifying data in the video feed. This includes faces, bodies, licence plate numbers, text and logos. They also want to configure the solution to refrain from making continuous recordings when no pre-defined events are taking place. There will be some temporary storage taking place in the camera body, but these files will later be automatically deleted and will normally not be made available to the operator.

Intelligent video analytics can be configured in many different ways to customise the monitoring for the defined purpose. In the sandbox, we have limited our discussion to primarily focus on three functions within the solution:

  • Censoring of identifying data from the video feed in real time
  • Registration and notification of fire and smoke
  • Registration and notification of criminal activity

Doorkeeper wants to be a one-stop provider. This means they will provide all parts of the solution: cameras, networks, monitoring platform, configurations, control centre and operators. Among other things, the purpose of this is to ensure that the service is used in a way that protects the privacy and security of the persons being recorded.

How does the solution work?

Below is a description of how Doorkeeper’s solution for intelligent video analytics works. This description serves as a basis for the consideration of legal issues later on in the report.

When this report was prepared, the solution was still in the developmental stage. The description below may therefore differ from the final solution.

A step-by-step explanation of the solution

The solution consists of several components: video monitoring cameras, an interface for intelligent video analytics and a monitoring platform (“video management system” – VMS). The monitoring platform collects the video feed from the camera and provides the operator with a user interface. The operator monitoring the video feed does so via this platform.

Doorkeeper has two alternative set-ups for the solution. The most important difference between these alternatives is where the analysis takes place:

  1. The first alternative uses Doorkeeper’s proprietary cameras. These cameras have a built-in processing unit designed specifically for use with artificial intelligence. This means the camera is able to analyse and process the video feed in real time. As a result, faces and human shapes can be censored locally, within the camera, which minimises the transfer of personal data from the camera to the platform.
  • A log is stored both locally in the camera body and on the platform. This log specifies when events are detected, when the censoring is removed, and which user has logged on to the system.
  2. The second alternative entails the use of cameras not provided by Doorkeeper. This could be an option for enterprises that already have video monitoring equipment from other providers installed. For these solutions, the intelligent video analytics will take place on the platform. This means the feed transferred from the camera to the platform will be an uncensored, “raw” video feed.
  • As Doorkeeper will not be able to control any logging of activity in third-party cameras, the logging will only take place on the platform.
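
As an illustration of what one entry in such a log might contain, consider the minimal sketch below. The class and field names are hypothetical and chosen by us; they do not reflect Doorkeeper’s actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class AuditEvent(Enum):
    """The three event types the report says are logged (names are ours)."""
    EVENT_DETECTED = "event_detected"        # e.g. fire/smoke, line crossing
    CENSORING_REMOVED = "censoring_removed"  # automatic or manual
    USER_LOGIN = "user_login"


@dataclass
class AuditLogEntry:
    """One entry in the log kept in the camera body and/or on the platform."""
    event: AuditEvent
    camera_id: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    user_id: Optional[str] = None  # set for logins and manual overrides
    reason: Optional[str] = None   # required when censoring is removed manually
```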

Once the customer has selected an alternative (1 or 2), the monitoring system is configured. We have limited our discussion to the configuration options explored in the sandbox project (a sketch of what such a configuration might look like follows this list):

  1. First, the enterprise must configure the censoring function. They can choose to censor:
  • Faces (the solution will identify faces in the video feed, and these will be censored)
  • People (the solution will identify human shapes in the video feed, and these will be censored in their entirety)
  • Licence plate numbers for vehicles (the solution will identify vehicle licence plates, and these will be censored)
  • Text (any moving surfaces with text on them will be identified and censored)
  2. The enterprise must then configure which events the solution should detect. This could include:
  • Fire and smoke
  • When someone crosses a pre-defined “line” (if a person accesses an unwanted location – e.g. a train track)
  • Objects (the presence of potentially dangerous objects – e.g. weapons, propane tanks or petrol cans – in the video feed)
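
To make the two configuration steps concrete, here is a hypothetical sketch of what a per-camera configuration might look like. Every key and value is our own illustrative assumption, not Doorkeeper’s actual format.

```python
# Hypothetical per-camera configuration; keys and values are illustrative.
camera_config = {
    "censoring": {                    # step 1: what to censor in the live feed
        "faces": True,
        "people": True,               # censor human shapes in their entirety
        "licence_plates": True,
        "text": False,                # moving surfaces with text or logos
    },
    "event_detection": {              # step 2: which events trigger recording
        "fire_and_smoke": True,
        "line_crossing": {"enabled": True, "line": [(0, 540), (1920, 540)]},
        "objects": ["weapon", "propane_tank", "petrol_can"],
    },
    "pre_event_cache_seconds": 300,   # temporary cache of the last few minutes
    "retention_days": 7,              # pre-defined storage period for recordings
}
```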

When the interface detects one of the pre-defined events, this triggers a signal to remove the censoring. The system will then initiate a recording and transfer it for permanent storage on the platform. At the same time, a notification will be sent to the operator.

If the operator believes that an event has occurred and the solution has failed to identify it, the censoring can be deactivated manually. In such cases, the operator must specify a reason for removing the censoring, the time period for which to remove it, and their own unique user ID. This applies both to censoring taking place in the camera body and censoring on the platform.
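
A minimal sketch of this barrier is shown below: the manual deactivation is refused unless a reason, a time period and a user ID are all supplied, and every override is logged. The function and field names are hypothetical.

```python
def request_manual_uncensor(user_id: str, reason: str, duration_minutes: int, log: list):
    """Manually deactivate censoring. The operator must supply a reason,
    a time period and their unique user ID, and the override is logged."""
    if not user_id or not reason or duration_minutes <= 0:
        raise ValueError("reason, time period and user ID are all required")
    log.append({
        "event": "censoring_removed",
        "mode": "manual",
        "user_id": user_id,
        "reason": reason,
        "duration_minutes": duration_minutes,
    })
    # ...signal the camera or the platform to lift censoring for the period...
```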

In order for the system to be able to identify events and people in the video feed, the algorithm must be trained. Doorkeeper handles this by feeding the algorithm with images from commercially available databases. Action patterns and analytics are then defined. In the same process, Doorkeeper performs manual adjustments of training parameters, to ensure the algorithm produces more accurate results. Doorkeeper does not train its algorithm with recordings – or other types of data – from its cameras.

For both alternatives, Doorkeeper will cache the last few minutes of video, so that any recordings that are permanently stored will also include the minutes before the event takes place. This cache is deleted continuously, and the operator does not have access to the cache memory. The duration of cache storage will be assessed based on the specific event, but should always be as short as possible. Recordings cannot be deleted manually by the operator, but will be stored for a pre-defined time period.
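
A fixed-size ring buffer is one common way to implement a continuously deleted cache like this. The sketch below assumes, purely for illustration, a five-minute window at 25 frames per second.

```python
from collections import deque

FPS = 25
CACHE_SECONDS = 5 * 60  # assumption: "the last few minutes" = 5 minutes

# Appending beyond maxlen silently drops the oldest frame, which gives the
# continuous deletion described above. Operators have no access to this buffer.
pre_event_cache = deque(maxlen=FPS * CACHE_SECONDS)

def on_frame(frame):
    """Called for every frame in normal operation."""
    pre_event_cache.append(frame)

def start_recording():
    """On a detected event: the permanent recording begins with the cached
    minutes before the event, then continues with live frames."""
    return list(pre_event_cache)
```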

Doorkeeper aims to communicate clearly about the use and presence of their cameras. Among other things, they plan to have a red front on their camera bodies for easy visibility, as well as signage to inform about the use of “intelligent monitoring”.

Solution security

To secure communication between the cameras and the platform, Doorkeeper plans to, among other things, encrypt the video feed and use dedicated networks for the transfer. Doorkeeper will also retain some control over how the solution is configured, to better ensure it is used in a privacy-friendly and secure way. We will return to this in the chapter on security issues later in this report.
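
The report only states that the feed will be encrypted, without specifying a scheme; in practice, transport-level protection such as TLS would be typical. Purely as an illustration, the sketch below encrypts individual frame payloads with Fernet from the `cryptography` package; the key handling is simplified and all names are our own.

```python
from cryptography.fernet import Fernet

# Illustrative only: in a real deployment the key would be provisioned per
# camera through a key management system, never generated ad hoc like this.
key = Fernet.generate_key()
cipher = Fernet(key)

def send_frame(frame_bytes: bytes) -> bytes:
    """Encrypt a frame payload before it leaves the camera."""
    return cipher.encrypt(frame_bytes)

def receive_frame(token: bytes) -> bytes:
    """Decrypt the payload on the platform side."""
    return cipher.decrypt(token)
```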

When installing and using their services, Doorkeeper's customers must comply with a number of requirements. Among other things, the customer must use an installation service with ecom networks authorisation (see the National Communications Authority website). Furthermore, the system must be configured in accordance with Doorkeeper’s specifications.

One obvious weakness of censoring algorithms is that the operator controlling the cameras can arbitrarily turn off or change the censoring level and criteria, or activate the recording function when there has been no event. Doorkeeper has therefore considered limiting customer access and requiring contact with a control centre to make changes.

Which types of data will the solution register?

There is no doubt that Doorkeeper processes personal data, even though identifying data will be censored in the video feed.

In Alternative 1, the video feed is censored directly in the camera, before it is forwarded to the platform. This means that a lot of the identifying data will only be processed by the camera in a normal operating situation – i.e. when an event has not been detected. An uncensored recording is also stored temporarily within the camera (cache memory). This is deleted after a predefined interval, e.g. five minutes.

Doorkeeper has implemented measures to ensure that operators do not have direct access to the recording stored in the camera body. Operators will therefore only have access to the censored video feed. When the camera detects a predefined event, the censoring is removed from the video feed and a recording is initiated and sent to the platform.

The recording made after an event has been detected will include the minutes of uncensored recording stored in the camera. This will enable the operator to see what happened in the minutes prior to the event.

In Alternative 2, the solution will process the same types of data as in Alternative 1. The difference is that the censoring takes place on the platform, which means an uncensored video feed must be transferred between the camera and the platform. Nevertheless, operator access will be the same as for Alternative 1.

Even when only a censored feed is available, it cannot be ruled out that the video feed may still be linked to an individual. One example is when the camera captures a person who is relatively tall and who passes the same place at approximately the same time every day. Individuals may also be identifiable from context (where they are at a given time) if the data is combined with other sources – for example, other uncensored video monitoring, or other collected data, such as logs, that can be linked to a natural person.

It may also be possible to derive personal data from text or logos shown in an image, e.g. when a company name or logo on a vehicle can be linked directly to a natural person – provided this is not censored.

Goal for the project

The goal of this sandbox project is to explore the solution of the enterprise Doorkeeper, which wants to use artificial intelligence to limit the types of data captured by video monitoring equipment.

The Data Protection Authority and Doorkeeper set out to discuss the following topics and issues associated with intelligent video analytics:

  1. Purpose limitation and data minimisation

    How should the principle of purpose limitation be interpreted in the context of intelligent video analytics? And how can data minimisation be ensured?
  2. Legal basis for the use of intelligent video analytics

    When intelligent video analytics is used, what are the key considerations an enterprise must make in terms of the legal basis for the processing of personal data?
  3. Alternative solution designs

    Doorkeeper is considering two design alternatives for their solutions, where the video feed is either censored in the camera body or on an external platform (VMS). How do these different designs affect the legal basis for the solution?
  4. Disclosure of personal data

    Which considerations must be made in terms of the disclosure of personal data from the uncensored video feed?
  5. Security issues

    Solution security will be a determining factor in whether Doorkeeper and similar enterprises will be able to comply with legal requirements. But which types of security issues are relevant for the type of technology Doorkeeper wants to use, and what are the overarching legal requirements for security?

How is video monitoring regulated?

In the sandbox project, we have only discussed responsibilities that follow from data protection legislation. There are, however, other laws that may also be relevant for video monitoring.

When an employer wants to install monitoring equipment in the workplace, provisions on control measures in the Working Environment Act (chapter 9) and special provisions in regulations for video monitoring in the workplace also apply.

Data protection legislation does not apply to the police’s use of video monitoring for police purposes (see Article 2 (2) (d) of the Regulation). Section 6a of the Police Act grants the police legal authority for video monitoring in a public place when such monitoring is necessary to perform police duties pursuant to Section 2 (1-4) of the Police Act. The police’s processing of personal data from video recordings for these purposes is regulated by the Police Databases Act and the Police Databases Regulations.

Purpose limitation and data minimisation

One of the goals of the sandbox was for the Data Protection Authority to assess the significance of Doorkeeper’s solution in the context of the principles of purpose limitation and data minimisation.

Purpose limitation

Before an enterprise can process personal data, it must clearly define the purpose of the processing. Closely related to this is the principle of purpose limitation. Purpose limitation entails that personal data may only be processed for specified, explicit and legitimate purposes. Personal data must not be processed in a manner that is incompatible with these purposes.

When the processing of personal data begins, the purposes shall already be defined. This entails that personal data may not be processed simply because it may prove useful in the future. The purposes must be defined and explained in a sufficiently specific manner. This means that the “data subjects” – i.e. the persons whose data the enterprise is processing – must have a clear and unambiguous understanding of what the personal data will be used for. That the purpose must be legitimate means that – in addition to having a legal basis – it must also comply with other ethical and legal standards.

Data subjects have the right to understandable information about the purpose of the processing of their personal data (see Articles 12 to 15). It is therefore important that the purposes are defined in a specific and transparent manner and documented in writing.

If an enterprise wants to make use of video monitoring, it will not be sufficient to refer to a vague and unspecified reason, such as “security”. The purpose must be defined more specifically and must be tied to a real need, e.g. imminent risk of theft or vandalism. Monitoring for specified security purposes may also not be used for other, incompatible purposes, such as the monitoring of employees.

Data minimisation

Data minimisation is a key principle enterprises must consider for compliance with data protection legislation. This legislation – including the principle of data minimisation (lovdata.no) – requires that the data used must be adequate, relevant, and limited to what is necessary to achieve the purpose for which it is being processed. This means that one cannot process more personal data than what is necessary to achieve the purpose.

Modern video monitoring technology – including Doorkeeper’s solution – has the potential to make the processing of personal data less invasive, and it is possible to prevent the collection of data that is not relevant for the purpose.

Data minimisation in practice: intelligent video analytics of the traffic network

Doorkeeper has considerable control over which types of personal data are registered in the solution. In line with this, Doorkeeper and the Data Protection Authority have discussed two hypothetical examples that illustrate how the solution may facilitate data minimisation. Both examples involve the registration of vehicles in the traffic network.

The sandbox project has not made any further legal assessments of the examples – such as whether there are more suitable measures than the use of video monitoring.

Example 1:

The purpose of the monitoring is to use intelligent video analytics to register the number of vehicles in various classifications (passenger cars, passenger cars with trailers, buses, motorcycles) using Norwegian roads.

Doorkeeper found that it is theoretically possible to register the number and types of vehicles without making recordings or transferring a video feed out of the camera body. For example, the system can run code in the camera body that converts local recordings of vehicles into figures summarising the number and types of vehicles. Although the video feed is being processed in the camera body – which would indicate that personal data is being processed – the processing will potentially be significantly less invasive for the privacy of road users than solutions based on continuous video recordings or other methods that involve the storage of personal data.
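
A minimal sketch of what such in-camera code might look like, under our own assumptions: a hypothetical `detect` function yields one vehicle class label per detection, frames are discarded after analysis, and only aggregate figures ever leave the device. A real system would also need object tracking so the same vehicle is not counted in consecutive frames.

```python
from collections import Counter

# Aggregate counts per vehicle class; this is the only state that persists.
vehicle_counts = Counter()

def process_frame(frame, detect):
    """Runs inside the camera body. `detect` is a hypothetical detector that
    yields labels such as "car", "car_with_trailer", "bus", "motorcycle"."""
    for label in detect(frame):
        vehicle_counts[label] += 1
    # The frame itself is neither stored nor transferred.

def export_statistics():
    """Periodic export: figures only, no images and no personal data."""
    snapshot = dict(vehicle_counts)
    vehicle_counts.clear()
    return snapshot
```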

If it is technically possible to achieve the aforementioned purpose with a form of analysis where no personal data is being processed, the data minimisation principle indicates that this alternative must be chosen.

Example 2:

The purpose of the monitoring is to identify fire and smoke in the road network, to send notifications about traffic accidents. In this case, it could be appropriate to have a camera capable of transferring a recording if an event is detected. Recordings of roads would, however, entail that Doorkeeper (or the customer) is processing identifying information, such as faces of drivers or passengers and vehicle registration numbers. In addition, other text on vehicles may constitute personal data, as company names can often be linked to a specific natural person.

In this case, it will be appropriate for the data controller to limit what is collected in terms of identifying data in the video feed. Censoring of both faces and vehicles (including text that may display company names, etc.) would therefore be appropriate to ensure data minimisation.
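
As one illustration of such censoring, the sketch below blurs detected face regions using OpenCV’s bundled Haar cascade. This is a toy example under our own assumptions; a production system would use far more robust detectors, and separate ones for licence plates and vehicle text.

```python
import cv2

# Classic Haar cascade face detector shipped with OpenCV (illustrative only).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def censor_frame(frame):
    """Blur every detected face region before the frame leaves the camera."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame
```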

The necessity of processing and consideration of alternative measures

If, for example, the data controller wants to prevent crimes against their property, the data controller may, instead of installing a video monitoring system, consider implementing alternative security measures, such as a fence around the property, regular security inspections, security guards, ensuring better lighting, installing security locks, making windows or doors burglar proof, or installing anti-graffiti coatings on the wall. In each case, the data controller must consider whether alternative measures could be less invasive on the privacy of individuals.

Before the data controller adopts the use of a video monitoring system, the controller must assess where and when video monitoring is necessary.

Who is responsible for ensuring compliance with these principles?

Pursuant to the General Data Protection Regulation (GDPR), the data controller is responsible for compliance with the requirements for processing of personal data – including the principles of purpose limitation and data minimisation.

The data controller determines the purpose of the processing and which means – such as technical solutions – to use to achieve this purpose. Who the data controller is will be determined based on the actual circumstances. In other words, the party making the decision of whether or not processing will take place and defining the purposes of the processing and how the processing will be handled, will be considered the data controller.

As a general rule, the data controller will be the customer purchasing a free-standing camera, or a security/monitoring system where video monitoring is included. In cases where the provider is processing personal data on the customer’s behalf, the provider will be a data processor.

Depending on which services a camera provider is offering, the provider may be the data controller for part of the processing when a video monitoring system is used. Sometimes, for example, the provider will process some types of personal data to adjust the algorithm after an event on the customer’s premises, to ensure that the algorithm functions as intended for other customers. If two or more data controllers jointly determine the purposes and means of the processing of personal data, they will be considered joint controllers (Article 26). This sandbox project has not considered whether Doorkeeper is a data controller, joint controller or a data processor pursuant to the GDPR.

A provider of a video monitoring system which, after a specific assessment, is not deemed to be the data controller, will in principle not have any direct responsibility for upholding the data minimisation principle. Nevertheless, it is important that the video monitoring system delivered makes it possible for the data controller to comply with the regulations in practice. Otherwise, the provider’s customers will not legally be able to use the system for the processing of personal data. It is therefore important that Doorkeeper takes a conscious approach to data minimisation in the development of its service, regardless of whether or not the company is deemed to be the data controller.

The same will apply for the rule relating to privacy by design as established by Article 25 of the GDPR. This provision specifies that the data controller must implement appropriate technical and organisational measures for ensuring effective implementation of data protection principles and protection of the rights of data subjects. Furthermore, measures shall be implemented to ensure that, by default, only data necessary for each specific purpose of the processing is processed. Similar to the data minimisation requirement, the requirement of privacy by design is a duty of the data controller, and it will be relevant for Doorkeeper to consider this.

Because the data controller must, in any event, ensure privacy by design in any solution they use, it would be easier to choose a solution that already has this built-in from the start, compared to adding data protection to a solution that does not have it. Customisation of off-the-shelf solutions after purchase can be expensive. Solutions that have privacy by design could therefore, in many cases, have a competitive advantage over solutions that do not.

Legal basis for Doorkeeper’s use of intelligent video analytics

One of the goals of the sandbox project was to consider how the legal basis for the use of intelligent video analytics should be assessed, based on Doorkeeper’s solution.

The processing of personal data is only lawful if the processing has a legal basis (see Article 6 (1) (a-f) of the GDPR).

Which legal bases are relevant for the use of intelligent video analytics?

In some situations, consent (Article 6 (1) (a)) could constitute a legal basis for video monitoring. There are, however, stringent requirements for what constitutes valid consent pursuant to data protection legislation, and consent is therefore not suitable for many situations. Examples of consent-based processing include research and treatment in patient rooms, care institutions, etc.

Another potential legal basis for the use of video monitoring is compliance with a legal obligation. This will, however, primarily be relevant for public authorities. For public authorities, Article 6 (1) (c) and (e) will be the most relevant legal bases for video monitoring. These provisions allow processing of personal data if the processing is necessary for compliance with a legal obligation, or for the performance of a task carried out in the public interest or in the exercise of official authority. Furthermore, public authorities cannot rely on “legitimate interests” (see Article 6 (1) (f)) in the exercise of official authority.

For private parties wanting to adopt the use of video monitoring, consent or compliance with a legal obligation are rarely relevant options as a legal basis.

More on “legitimate interest” as a legal basis

When private parties are considering video monitoring, their most relevant option for a legal basis will – in most cases – be “legitimate interest” (see Article 6 (1) (f)). This provision allows processing of personal data where such processing is necessary for the purposes of the legitimate interests pursued by the controller or a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data.

In order to base processing on this legal basis, one must first weigh up the interests of the data controller in relation to the interests of the data subject. The data controller’s legitimate interests must be lawful, clearly defined, real and justified. The processing must furthermore be necessary for the pursuance of the legitimate interest. This entails that video monitoring must be a suitable measure for achieving the purpose, and that the purpose cannot reasonably be effectively achieved by less invasive means.

When making this assessment, the interests, rights and freedoms of the data subject – including their reasonable expectations in the context of their relationship with the data controller – must be weighed up against the data controller’s legitimate interests. In the context of video monitoring, the interests of the data subject may involve the right to a private life and privacy, and may be relevant in considering how invasive the monitoring is.

If the data controller wants the processing of personal data to serve multiple purposes, the controller must consider the legal basis for each individual purpose. Whether or not Article 6 (1) (f) can be used as a legal basis for the processing of personal data will depend on a specific assessment of each type of processing.

As a rule, personal data may only be further processed for new purposes if the new purpose is compatible with the purposes behind the original processing, see Article 6 (4). This is only relevant for purposes added after the time of collection. All purposes relevant at the time of collection must be considered in the context of Article 6 (1) in the normal manner.

Technology as a factor in assessment

How does the AI technology developed by Doorkeeper affect the balance between a legitimate interest for monitoring and the infringement of privacy the monitoring entails? If we exclusively consider how personal data is used in the solution, Doorkeeper may implement measures that make the collection of data less invasive.

  • In a normal situation, the personal data is only processed by Doorkeeper’s algorithm for censoring the video feed. The data is censored in real time, and – unless a predefined event has occurred – only limited recordings are made, and these are deleted after a predefined time.
  • While personal data is being processed by the solution, nobody has access to the uncensored video feed unless the system identifies a predefined event or the operator deactivates the censoring manually.
  • Operators must specify the reason, time frame and user ID if they want to manually disable the censoring, and the action is logged in the system. This barrier will likely help prevent and detect snooping, provided the logs are reviewed responsibly and regularly (see the sketch below).

These are all relevant factors when an enterprise is considering whether their legitimate interest constitutes legal basis, and could help tip the scale in favour of the legitimate interest.
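
As an illustration of the kind of regular log review that supports the barrier in the last bullet point, the sketch below flags manual censoring removals without a stated reason, as well as operators with unusually many overrides. The entry format and the threshold are our own assumptions.

```python
from collections import Counter

def review_manual_overrides(log_entries, threshold=3):
    """Flag override entries with no stated reason, and operators whose
    number of manual censoring removals exceeds the given threshold."""
    per_user = Counter()
    missing_reason = []
    for entry in log_entries:
        if entry.get("event") == "censoring_removed" and entry.get("mode") == "manual":
            per_user[entry.get("user_id")] += 1
            if not entry.get("reason"):
                missing_reason.append(entry)
    frequent_users = [user for user, n in per_user.items() if n > threshold]
    return missing_reason, frequent_users
```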

Doorkeeper’s solution can be designed so that the censoring of identifying data takes place either in the camera body or on a platform. The data controller must consider how the differences between the two designs affect how invasive the processing of the personal data will be. The platform solution entails processing of personal data on a larger scale than the solution where the censoring takes place in the camera body, as the data must be transferred from the camera to the platform before being censored. Among other things, the data controller should take into account whether a solution where the censoring takes place in the camera body may in some cases be considered less invasive than platform-based censoring.

The experience of being watched as a factor of consideration

In considering how invasive the infringement on privacy is for the data subject, one must base the assessment on the actual processing of personal data. In addition, the experience of being watched is also relevant. This experience constitutes the basis for the rule concerning false video surveillance equipment in Section 31 of the Personal Data Act. The preparatory works to this provision state that “false monitoring equipment may be perceived as a significant invasion of privacy, even though no real processing of personal data is taking place, and … the feeling of being watched is in itself a violation of integrity and thus may lead to changes in behaviour”.

The data subject may therefore perceive that they are being watched, regardless of how invasive the processing of personal data actually is. The technology used does not necessarily affect this experience, because in most cases the data subject will not have sufficient knowledge of how the technology works. This is a factor that must be included in a consideration of the data subject’s interests. Here, however, transparency and good information may play a role.

A key aspect when considering how invasive monitoring is perceived to be is the data subject’s expectations of the data controller. The context of the processing will be important in this regard. There are situations where the data subject has greater expectations of privacy, such as in gyms, spas and swimming pools. In other situations, such as on public transportation or in a shop, most people will likely have lower expectations of privacy.

In some cases, the presence of video monitoring in an area may give the data subject a sense of safety. A camera may, in some cases, have a preventative effect on crime and may also lead to people feeling safer. These are also relevant aspects in a consideration of the data subject’s interests.

Assessment of three examples of use

In the sandbox project, we have explored how Doorkeeper’s technology may affect the assessment of the legal basis for processing in connection with video monitoring. To highlight some of the relevant issues, this chapter takes a closer look at three examples of Doorkeeper’s technology in use.

These examples were chosen because they represent potential areas of application for Doorkeeper’s solution. At the same time, they raise issues that are transferable to other enterprises that may be developing similar solutions.

The examples we have chosen to explore are:

  1. Video monitoring of the exterior of a commercial building for the purpose of preventing theft
  2. Video monitoring of a street for the purpose of protecting the enterprise’s reputation
  3. Use of video monitoring to detect fire and smoke on the exterior walls of a stave church

The first example is a situation where, in many cases, there will currently be a legal basis for video monitoring. In this example, we therefore discuss the role Doorkeeper’s technology plays in the assessment of legal basis in these situations.

The second example is a situation where, as a rule, it is currently not lawful for a private party to use video monitoring. In this example, we discuss whether Doorkeeper’s technology could make it possible to use video monitoring to a greater extent than what is currently permitted.

In the third example, we describe a situation where the use of video monitoring without censoring and continuous recording would not meet the requirement for necessity in an assessment of legal basis. We then discuss whether use of a type of video monitoring – such as the system offered by Doorkeeper – could still meet this requirement.

What all examples have in common is that recordings are only stored temporarily for a short period of time, unless a predefined event is detected.

In the following, we assess Doorkeeper’s technology in the context of “legitimate interest” as a legal basis for the monitoring, and the impact of the technology on considerations of necessity in the context of legal basis (see Article 6 of the GDPR). The examples also all involve setting up video monitoring equipment in places frequented by the general public. Several aspects of the consideration will be the same for all examples.

A general issue is the role Doorkeeper’s technology plays in the outcome of the assessment of legal basis, and whether this differs from video monitoring without the equivalent censoring function and with continuous recording.

The examples are not an exhaustive review of all aspects that would be relevant in a consideration of legal basis, but they focus on aspects we believe are especially relevant in light of Doorkeeper’s technology.

For general information about the legality of video monitoring, see the Data Protection Authority’s guide here [/personvern-pa-ulike-omrader/overvaking-og-sporing/kameraovervaking/].

Example 1: Video monitoring of the exterior of a commercial building for the purpose of preventing theft

The first example involves video monitoring of the exterior of a commercial building on a pedestrian-only street. In this example, the data controller is a private enterprise, which owns the building.

The purpose of the monitoring is to prevent and give notice of theft. The solution will handle this by registering and sending a notification if somebody breaks the shop window and enters the premises. When the service provider installs the camera, they angle it toward the front of the building. The solution is configured so that human shapes are censored in the video feed.

In a normal situation, no recordings are made in the camera (with the exception of time-limited storage in cache memory) or on the platform. Recording is only activated if the solution detects the shop window being broken.

Assessment

In many cases, video monitoring will be permitted even when continuous recordings are made and human shapes are not censored from the video feed. The question in this example is therefore whether Doorkeeper’s solution impacts the assessment of the legal basis, and whether the choice between censoring in the camera or on the platform is relevant in this assessment.

Prevention of crimes targeting the enterprise – such as the prevention and notification of break-ins and theft – may be a legitimate interest the data controller can pursue. This assessment, however, is contingent on the risk of such events being real – if not, the interest in protecting against them will not be real either.

Video monitoring must be limited to what is necessary in terms of time and space. If the risk of break-ins is only real after the store is closed, no video monitoring can take place during the shop’s opening hours. As a general rule, only public authorities may monitor public spaces, but it may be permissible for private parties to record an insignificant part of a public street if necessary, such as the area right outside the front of the shop.

Prior to installing video monitoring equipment, the data controller must consider alternative measures. Alternative measures that may be appropriate for achieving the purpose include physical protection, security guards and other types of alarm systems that do not entail the processing of personal data. If the alternative measures are not suited, the data controller must also consider which type of camera technology is necessary to achieve the purpose. In this example, it will only be necessary to capture humans when a breach of the premises has been detected.

Since the camera will capture and give a notification if the premises have been breached, it may be suited to achieving the purpose. This indicates that the use of Doorkeeper’s solution may meet the requirement of necessity. The solution also makes it easier to only process personal data to the extent that is necessary for the purpose, as human shapes are censored immediately, unless they are captured when an alarm has been triggered.

When the camera, which is aimed at the front, is located outside the shop, on a pedestrian-only street, it may potentially capture many people, including passers-by. This can, to some degree, be limited by angling the camera and/or censoring areas where no monitoring is needed. Even so, the scope of data subjects is, in all instances, unpredictable, and may change character depending on the time of day, day of the week and whether there are other activities or events going on nearby.

In a pedestrian-only street, the data subjects will have some expectation of being captured by video monitoring inside the shop during opening hours, but also, to some degree, in the immediate vicinity outside at other times of the day. This could indicate that the use of video monitoring equipment installed outside, and facing towards the shop, in a pedestrian-only street, would not normally be considered particularly invasive.

Doorkeeper’s solution will also, in practice, be less invasive for the persons captured by the camera, compared to a camera that continuously captures recordings that do not censor human shapes. For most people captured by the video monitoring, the processing of personal data will be very limited. However, in this example, Doorkeeper’s data-minimising solution may not have a significant impact on the data subjects’ experience of being watched and their perception of how invasive the monitoring is. Monitoring in a pedestrian-only street makes it less likely that passers-by and customers are aware of the type of camera technology being used, even if signs are posted with information on how the solution works. There will thus not necessarily be any correlation between the data subjects’ experience and the actual processing of personal data.

Summary

In this example, it will be appropriate to configure the camera to ensure that the measure better complies with the data minimisation requirements. The requirement of necessity imposes a duty on the data controller to select the least invasive technology suited to achieving the purpose of the monitoring. When more privacy-friendly technology becomes available, it also changes the perception of which processing of personal data is necessary for achieving the purpose. The technology, however, does not necessarily impact the data subjects’ experience of being watched, and the invasion of privacy must therefore be considered on an equal footing with other video monitoring solutions in terms of the experience of being watched.

Example 2: Video monitoring of street for the purpose of protecting the enterprise’s reputation

As an extension of the previous example, the enterprise also wants to expand the monitoring further out into the pedestrian street. The purpose of this is to prevent, discover and notify of unwanted events that are not directly targeted at the enterprise, but that may affect the enterprise’s reputation and financial interests. This could include activities that make the area where the enterprise is located feel less safe – and therefore less attractive to potential customers.

In this example, the purpose can be achieved by having the camera technology recognise objects like weapons (such as large knives), in addition to the general preventative effect of having visible video monitoring in the area.

Assessment

The question here is whether Doorkeeper’s solution, which includes object recognition and censoring of human shapes, would entail that the enterprise has a legal basis for monitoring further out into the street, for purposes that are more general in nature.

In isolation, keeping the area around the enterprise safe could constitute a legitimate interest if the risk of unwanted events is real and provided the enterprise is directly or indirectly affected. The prevention of unwanted events or crime in public spaces, however, is a responsibility vested in public authorities, primarily the police. This could indicate that keeping order in a public place is not a legitimate interest that may be pursued by a private enterprise, even if the enterprise is negatively affected by crime or other unwanted events in the area.

The main rule is also that only public authorities may monitor public spaces. The police have the authority to use video monitoring for police purposes, including for preventing and stopping criminal activity. Doorkeeper’s solution cannot have an impact on the assessment of whether the enterprise has a legitimate interest or a legitimate purpose for the monitoring.

In this example, the assessment is that video monitoring may be necessary to achieve the purpose, but that the monitoring nevertheless will not be permitted as long as the purpose of it does not constitute a legitimate interest for the enterprise to pursue.

Data subjects may expect the street to be monitored, but they would expect it to be monitored by public authorities, not by businesses located there. This could indicate that the rights and interests of the data subjects take precedence over the enterprise’s interests in monitoring.

Summary

Choosing a data-minimising technology would not lower the threshold of what is considered legal video monitoring in cases where the enterprise does not have a legitimate interest to pursue monitoring.

Example 3: Use of video monitoring to detect fire and smoke on the exterior walls of a stave church

In the final example, we consider the use of video monitoring of the exterior wall of a stave church for the purpose of triggering an alarm in case of fire.

In this example, the purpose of the video monitoring is to detect fire and smoke in order to quickly extinguish a potential fire and prevent damage to the building. The camera will function as a sensor that can detect flame or smoke patterns, and it will be configured to censor human shapes in the video feed. The camera solution can be configured to ensure that the censoring cannot be removed and the video feed is not cached.

While the purpose in this example is not to capture persons on the video feed, the monitoring will still entail processing of personal data which requires a legal basis pursuant to the GDPR. That is because people who move around near the church may be captured by the camera.

For this example, it would be useful to consider various legal bases. Whether the data controller is a public body or a private enterprise will be relevant in this consideration. Regardless of which legal basis the processing is based on, the processing must be “necessary”, see the data minimisation principle. In the following, we discuss the necessity requirement and thereafter provide some comments on the balance of interests (in accordance with Article 6 (1) (f)). We assume that extinguishing fire and preventing damage to a stave church is a legitimate interest in accordance with Article 6 (1) (f).

Assessment

In this example it is especially interesting to assess whether Doorkeeper’s solution can be considered “necessary” in a case where a video monitoring solution without censoring and with continuous recordings will not meet the requirement for necessity.

A solution without censoring and with storage of recordings will, in most cases, not meet the necessity requirement, because the purpose can reasonably be expected to be achieved effectively by other and less invasive means. As described earlier in this report, Doorkeeper’s technology will process personal data to a much lesser extent than a solution that stores continuous recordings without the same censoring functionality. The processing will therefore be less invasive for the data subjects.

A camera without alarm functionality will also not be as well suited for the purpose of the processing. If such a camera is to be used to detect smoke or fire, it will require one or more persons continuously monitoring the video feed to identify potential smoke developments. This will likely not be very practical, and it would be highly invasive for the data subjects.

For the purpose of detecting fire or smoke, a smoke detector or other sensor would be obvious alternatives. A fire and smoke-detecting system that involves the use of video monitoring will be more invasive for the data subjects than the use of smoke detectors or other sensors. That is because the video monitoring solution is processing personal data. However, if Doorkeeper’s solution is still better suited to achieving the purpose than less invasive alternatives, the necessity requirement may be met. A key question therefore is whether Doorkeeper’s solution is better suited to detecting fire and smoke than a sensor-based fire detection system.

According to Doorkeeper, it is difficult to direct fire and smoke into a sensor if the fire is on the exterior wall. Smoke detection is often triggered too late, because the smoke does not accumulate until the fire is well under way or has burned out. Heat-detecting cables cannot be installed on heritage-listed buildings, and they are also not an effective method for detecting fire. According to tests commissioned by Doorkeeper, it takes eight minutes to trigger a cable-based sensor, whereas it takes six seconds for Doorkeeper’s camera solution. This could make a huge difference in saving the building. The Data Protection Authority has relied on Doorkeeper’s description of this functionality for our assessment.

In this example, we found that Doorkeeper’s camera-based solution could meet the necessity requirement, because the solution could provide significantly more effective fire detection than the described alternatives, such as smoke detectors or similar sensors. Therefore, use of the type of camera technology described by Doorkeeper could meet the necessity requirement in a situation where a video monitoring solution with continuous recording and without censoring likely would not.

In terms of the balancing of interests, protecting a stave church from fire could be considered a very important interest. At the same time, it could be seen as invasive to be subject to video monitoring when visiting a church – a place where the data subjects would not expect this to the same degree as when visiting, for example, a shop. Nevertheless, the data subjects may also have an expectation of a stave church being protected from fire and other damage, and that this could entail video monitoring. The data minimisation in Doorkeeper’s solution entails that the actual processing of personal data will be very limited, and this could make it easier to conclude that installation would be appropriate. In this example, therefore, the data minimisation measures of Doorkeeper’s solution could tip the scales in favour of video monitoring.

Summary

Use of the type of camera technology described by Doorkeeper could likely meet the necessity requirement in a situation where it is used to detect fire on the exterior wall of a stave church. This presupposes that the solution is better suited to achieving the purpose than other, less invasive solutions, such as a smoke detector.

The data minimisation measures in Doorkeeper’s video monitoring solution mean the processing of personal data will be very limited.

In this example, the balance of interest may come out in favour of video monitoring.

How will the choice of design influence the assessment of the legal basis?

In all three examples of use, the legal assessment could be influenced by the design of Doorkeeper’s solution. The solution may be designed in two different ways: the censoring of human shapes and other identifying data can take place either in the camera body or on a platform. The data controller must consider how the differences between the two designs affect how invasive the processing of the personal data will be for the data subjects in the specific case.

While the two alternatives can be set up with the same level of security, the vulnerability will be higher for the solution where the censoring takes place on the platform, compared to the solution where censoring takes place in the camera body. The platform solution will entail processing of personal data in more steps than the solution where the censoring takes place in the camera body. That is because when the censoring occurs on the platform, data will be transferred from the camera to the platform before being censored. A solution with censoring in the camera body – which would involve fewer steps – will therefore, in some cases, be considered less invasive than the solution where the censoring takes place on the platform. Among other things, this is because more people will have access to the platform.

Furthermore, one might imagine that if the data subjects are aware of the solutions, they would perceive a solution where the video feed is censored in the camera body as being less invasive, compared to a design where the video feed is transferred to a platform before the censoring takes place. This, however, requires the data subjects to be well-informed, and will likely be most relevant in situations where the camera is installed in an area where the data controller has some control over making sure sufficient information is provided to the persons entering the area.

The design of the solution may therefore have an impact on the assessment of the legal basis. The risk of the personal data being processed in a manner that was not intended, will be another factor in the overall assessment of which of the two solutions to implement. In each specific case, the data controller must assess whether the difference between the designs would indicate that one alternative is preferable over the other.

Special categories of personal data

One particular issue associated with the use of video monitoring is that it can be difficult to know in advance which types of personal data one will be processing. In all examples of use, there is a risk that special categories of personal data may be processed. Special categories of personal data means personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, as well as genetic data, biometric data processed for the purpose of uniquely identifying a natural person, data concerning health, and data concerning a natural person’s sex life or sexual orientation (see Article 9 of the GDPR).

Before a data controller installs a video camera, it is important to consider whether the processing may entail processing of these types of personal data. This is particularly relevant for the example where a camera is installed outside a stave church, where religious services are being held. In this example, it would be relevant to consider whether the processing will include information about the data subjects’ religious affiliation. In line with the data minimisation principle, the data controller should explore whether the camera could be angled to avoid, insofar as possible, capturing such personal data.

The processing of special categories of personal data is normally prohibited. If it is found that special categories of personal data are included in the processing, the processing must meet one of the exception criteria listed in Article 9 (2) for the processing to be lawful.

In its guidelines from 2020, the European Data Protection Board (EDPB Guidelines 3/2019, p. 17) specifies that while video monitoring is suited to collecting vast quantities of data, this will not necessarily entail the processing of special categories of personal data. If video recordings are processed for the purpose of detecting special categories of personal data, Article 9 will apply.

In August 2022, the European Court of Justice (ECJ) issued a ruling (C-184/20) that applied a considerably wider interpretation of what constitutes special categories of personal data pursuant to Article 9. In this case, the ECJ concluded that information about a natural person’s sexual orientation could be indirectly deduced from the name of the person’s spouse, and that such information was therefore covered by the term “special categories of personal data” in Article 9.

Among other things, the ECJ noted that it must be determined whether the data could, “...by means of an intellectual operation involving comparison or deduction”, reveal this type of information (see paragraphs 120 and 123). The ECJ also noted that the objectives behind the GDPR support a broad interpretation of the term “special categories of personal data”, in that it “ensure[s] a high level of protection of the fundamental rights and freedoms of natural persons, in particular of their private life” (see paragraphs 125 and 127). At the same time, the ECJ appears to have placed strong emphasis on the context of the specific case.

In this ruling, the ECJ found that the term “special categories” in Article 9 is rather broad. At the same time, the ECJ did not provide clear guidance on how to carry out the specific assessment, or on what the outcome may be in other types of situations where personal data is processed. It is uncertain where the line is drawn for the types of processing that should be considered to indirectly reveal special categories of personal data and thus trigger the application of Article 9. Nevertheless, the ruling is relevant to take into account when a data controller is considering the use of video monitoring. The Data Protection Authority recommends that data controllers monitor legal developments in this regard.

Disclosure of personal data

One of the goals of the sandbox was to assess whether use of Doorkeeper’s technology would have any impact on the assessment of whether an enterprise has a legal basis for the disclosure of video recordings.

Enterprises with video monitoring systems may be asked to disclose recordings, for example, by the police or insurance companies.

The personal data involved in such situations will primarily be related to recordings that have been stored as a result of activation following a registered event.

In purely practical terms, the data minimisation measures in the solution may make such disclosure easier, because less personal data is involved. Even so, for the personal data the data controller does process, the rules for disclosure will be the same as in any other situation. The legal considerations for disclosure of personal data in connection with the use of Doorkeeper’s technology will therefore be the same as for any other camera technology.

Disclosure of personal data is a type of processing of personal data. In order to disclose personal data, one therefore requires a legal basis pursuant to Article 6 of the GDPR. The most relevant option is often to obtain the consent of the person(s) on the recording, but there may also be other legal bases for the disclosure.

Personal data may be reused for new purposes if the new purpose is compatible with the original purpose for which the personal data was collected. There must also be a legal basis for the new processing, see Article 6 (4) of the GDPR.

Disclosure of personal data to other enterprises

In the sandbox, we discussed whether it would be possible for the data controller to disclose recordings to an insurance company. The Data Protection Authority finds that this issue follows the general provisions in place for all disclosure of personal data. That means an enterprise must have a legal basis for disclosing personal data.

Disclosure of personal data to the police

If the data controller receives a request from the police for disclosure of recordings of data subjects, they must consider whether they have a legal basis for disclosing this personal data. If the police have an order for compulsory disclosure, the data controller will normally be obligated to disclose the data. In other cases, the data controller should have received sufficient information from the police to be able to consider whether there may be another legal basis for disclosure.

The data subject’s right of access

The data subject – who has been captured by the camera – has a right of access to their own personal data. This follows from Article 15 of the GDPR. When disclosing recordings to the data subject, the data controller must consider whether granting access would adversely affect the rights and freedoms of others. In such situations, censoring third parties who are visible in the recording may mitigate such negative effects.

Security issues

In this chapter, we discuss some security issues relevant to the type of technology Doorkeeper wants to use, and offer some general comments on the legal requirements for security.

Security is a vast topic, and in many cases a prerequisite for good data protection: it is difficult to ensure data protection without satisfactory security. In this report, we are unable to cover the topic of security in any great detail, and have therefore only included what was discussed in the sandbox sessions with Doorkeeper.

Legal requirements for security

Article 32 of the GDPR regulates requirements for security of processing. Both Doorkeeper and their customers have a duty to “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk”. Which measures are necessary for compliance with the legislation will therefore vary in line with the level of risk the enterprise is facing and the general level of risk in society. As a provider and developer of the solution, Doorkeeper should ensure they are offering a very secure solution.

Doorkeeper must furthermore ensure that the security of the solution is enduring, both in terms of the solution’s security remaining up-to-date with technological developments, and in terms of continuously addressing any vulnerabilities. A good way to ensure enduring security is to be ahead of developments – always remaining abreast of what is considered best practice and continuously implementing best practices in the products one is developing.

The law requires both technical and organisational measures. “Organisational” must be interpreted broadly, and may also include physical measures. To protect the confidentiality of persons subjected to lawful video monitoring, a wide range of measures may therefore be needed. Examples of such measures include:

  • protecting the room where monitoring takes place, to prevent unauthorised persons from accessing the room or being able to see the screens from the outside
  • developing procedures for use of the system
  • ensuring adequate training in how to manage events
  • ensuring comprehensive training of users
  • requiring users to sign a declaration of confidentiality
  • defining the access rights of user accounts
  • developing procedures for regular reviews of logs

Protecting communication between the camera and the platform

One topic of discussion in the sandbox sessions was protecting the communication between the camera and the platform in Doorkeeper’s solution. Will the two alternatives for configuration of the solution impact which security measures Doorkeeper must implement for GDPR compliance?

If the video feed from the camera does not include data relating to identifiable individuals, the GDPR will only apply to the processing taking place inside the camera (and not to the communication from the camera to the platform). However, this presupposes that the camera only sends out entirely anonymous data at all times, and this will likely not be the case for Doorkeeper’s solution.

The threshold for classifying data as anonymous is very high. Even if human bodies are completely censored, it cannot be ruled out that some censored shapes could be linked to an individual – for example, a relatively tall person who passes the same location at approximately the same time every day.

As it will likely be impossible to guarantee full anonymity through censoring – and since, even for alternative A (censoring in the camera body), the camera must at times be able to transfer uncensored video – the communication between the camera and the platform should, as a starting point, be protected as if personal data were always being transferred. This does not appear to be a major challenge, however, as most modern cameras have some form of communication protection built in, such as transport layer security (TLS).

In solutions where no encryption is used, the data controller – and any data processors – must compensate with other measures to ensure that no unauthorised persons can access or change the video feed.

It also follows logically that solutions with analytics and censoring functionality in the camera body (alternative A) will likely entail a lower risk than solutions where the same functions are provided on a platform (alternative B). This is because the video feed in alternative A is not transferred via a network before being processed, unless an event is detected and the censoring is removed. Unauthorised persons who gain access to the video feed will therefore usually only see censored video. Reducing the quantity of data transferred also reduces vulnerability, because an attacker can only reach uncensored video by compromising the software in the camera itself. However, a lower risk for alternative A does not necessarily mean that alternative B carries unacceptable risk. The risk must be assessed in light of any other security measures that may be implemented. The same applies to solutions without any form of video analytics or censoring.

It will be up to the data controllers and data processors to assess whether the built-in security measures in the cameras are sufficient. When the sandbox project started, Doorkeeper had already decided that their solutions would use dedicated networks, separate from those of their customers, to avoid the potential issues and risks of using networks Doorkeeper cannot control. The discussions in the sandbox led Doorkeeper to make a further change: they now want to use a third-party VPN (Virtual Private Network) solution to further protect communication between the cameras and the platform. This provides end-to-end encryption between the camera and the platform, protecting the communication even better than ordinary transport layer security alone. This could increase security in both alternatives of Doorkeeper’s solution.
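As a minimal illustration of what transport layer protection between a camera and a platform can look like, the sketch below wraps a TCP connection in mutually authenticated TLS using Python’s standard ssl module. The host name, port and certificate paths are assumptions made for the example, not details of Doorkeeper’s actual setup; a VPN, as discussed above, would instead protect the traffic at a lower network layer.

    # Minimal sketch of a TLS-protected camera-to-platform channel, using
    # only the Python standard library. Host, port and certificate paths
    # are hypothetical placeholders.
    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.load_verify_locations("platform-ca.pem")              # trust only the platform's CA
    context.load_cert_chain("camera-cert.pem", "camera-key.pem")  # camera authenticates itself (mutual TLS)

    with socket.create_connection(("platform.example", 8443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="platform.example") as tls:
            tls.sendall(b"<censored video frame>")                # data is encrypted in transit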

Regular product updates

In discussions in the sandbox project, Doorkeeper explained that cameras with known, unpatched vulnerabilities are a problem in the security industry. Some enterprises continue to use such cameras without installing updates, even though the manufacturer has made updates available.

If measures are not implemented to eliminate vulnerabilities, the risk increases, and this could constitute a violation of the GDPR. It is also worth noting that the fewer links there are in the supply chain, the fewer entities one has to deal with to keep the products one uses updated.

A vulnerability in a VPN solution could give unauthorised parties access to an enterprise’s internal network. Similarly, a vulnerability in a camera could give unauthorised parties access to the video feed. When attackers learn of vulnerabilities, they will generally try to exploit them as soon as possible. Manufacturers must therefore make updates available as soon as possible after learning of vulnerabilities, so that the level of security can be maintained. There can, however, be several reasons why a manufacturer does not provide updates for a specific product: the manufacturer may have discontinued support for the product, may be unable to provide support, or may have gone bankrupt. If updates cannot be obtained from the manufacturer, the data controller or data processor must assess whether the vulnerability can be mitigated by other means, or whether the equipment must be replaced.

Having mechanisms in place to stay informed about vulnerabilities in the products one uses is just as important as actually updating the products: if one is not aware of a vulnerability, one cannot do anything about it.
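One simple mechanism for staying informed is to compare the firmware versions of deployed cameras against a vulnerability advisory feed at regular intervals. The sketch below is a hypothetical illustration; the feed URL, its JSON format and the camera inventory are all assumptions made for the example.

    # Hypothetical sketch: flag cameras whose firmware version appears in a
    # vulnerability advisory feed. The feed URL, its format and the
    # inventory below are illustrative assumptions.
    import json
    import urllib.request

    ADVISORY_URL = "https://advisories.example/camera-firmware.json"  # placeholder feed

    def fetch_vulnerable_versions(url: str = ADVISORY_URL) -> set[str]:
        with urllib.request.urlopen(url) as response:
            return set(json.load(response))  # assumed format: ["2.1.0", "2.1.1", ...]

    def audit(inventory: dict[str, str]) -> list[str]:
        """Return the IDs of cameras running firmware with known vulnerabilities."""
        vulnerable = fetch_vulnerable_versions()
        return [camera for camera, version in inventory.items() if version in vulnerable]

    if __name__ == "__main__":
        cameras = {"cam-entrance": "2.1.0", "cam-roof": "2.3.4"}
        for camera in audit(cameras):
            print(f"{camera}: known vulnerability - schedule an update or mitigation")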

Access control

Access control for both the cameras and the platform in a video monitoring system is necessary to ensure that the video feed is only accessible to those authorised to view it, and to prevent snooping or the use of monitoring data for purposes other than those originally intended. For Doorkeeper and its customers, this will, among other things, entail assessing who shall have access to the system and how extensive their access should be. This could, for example, concern a user’s access to

  • monitor the video feed,
  • manually remove censoring,
  • review stored recordings,
  • manually delete recordings initiated in error, or
  • update cameras and other parts of the system

An operator whose job it is to monitor the video feed must necessarily have access to view it, but likely does not need access to install system updates. The CEO of the enterprise will likely not need access to anything other than the recordings stored after an actual event. The system administrator will need access to update cameras and other products.

The system should thus be designed to allow different access levels for each user or user group.
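The access levels discussed above map naturally onto role-based access control. The sketch below is a minimal, hypothetical illustration of such a design; the roles and permission sets are examples chosen to mirror the list above, not Doorkeeper’s actual configuration.

    # Minimal role-based access control sketch mirroring the access levels
    # discussed above. Roles and permission sets are illustrative examples.
    from enum import Enum, auto

    class Permission(Enum):
        VIEW_LIVE_FEED = auto()
        REMOVE_CENSORING = auto()
        REVIEW_RECORDINGS = auto()
        DELETE_RECORDINGS = auto()
        UPDATE_SYSTEM = auto()

    ROLE_PERMISSIONS: dict[str, set[Permission]] = {
        "operator": {Permission.VIEW_LIVE_FEED, Permission.REMOVE_CENSORING},
        "ceo": {Permission.REVIEW_RECORDINGS},
        "system_admin": {Permission.UPDATE_SYSTEM},
    }

    def is_allowed(role: str, permission: Permission) -> bool:
        """Grant nothing by default; each role gets only what it needs."""
        return permission in ROLE_PERMISSIONS.get(role, set())

    assert is_allowed("operator", Permission.VIEW_LIVE_FEED)
    assert not is_allowed("operator", Permission.UPDATE_SYSTEM)  # least privilege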

Doorkeeper intends to configure the solutions for their customers themselves, to minimise the risk of the solution being used in ways that do not align with Doorkeeper’s intentions. Discussions on this topic also led Doorkeeper to consider setting up its own control centre, with its own operators, to ensure better control over how their monitoring systems are used. Doorkeeper maintains that it is very important to them that the solutions they offer are used in a lawful and ethical manner.

What if the technology does not function as intended?

One inherent risk in the solution Doorkeeper is developing is that the algorithm may produce false positives or false negatives. If this occurs, the solution will either remove censoring and initiate permanent storage of recordings even though no event has occurred, or ignore an event it is supposed to detect.

False positives could mean that the processing of personal data is more extensive than what the data controller may lawfully process. False negatives, on the other hand, could mean that the monitoring does not serve the purpose it is intended and trained to serve. Anyone who uses artificial intelligence must monitor false positives and negatives, and continuously adjust the solution to make sure it functions as intended.
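As a sketch of what such monitoring could look like in practice, the example below computes false positive and false negative rates from a log of detections that were manually verified afterwards, and flags when a rate exceeds a threshold. The log format and the 5% threshold are assumptions made for the illustration.

    # Hypothetical sketch: track the detector's false positive and false
    # negative rates and flag when they exceed a threshold. The log format
    # and the 5% threshold are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        predicted_event: bool  # did the algorithm trigger?
        actual_event: bool     # verified afterwards by an operator

    def error_rates(log: list[Detection]) -> tuple[float, float]:
        false_pos = sum(1 for d in log if d.predicted_event and not d.actual_event)
        false_neg = sum(1 for d in log if not d.predicted_event and d.actual_event)
        return false_pos / len(log), false_neg / len(log)

    log = [Detection(True, True), Detection(True, False), Detection(False, False)]
    fp_rate, fn_rate = error_rates(log)
    if fp_rate > 0.05:  # false positives: more data stored than necessary
        print("False positive rate too high - recordings made without an event")
    if fn_rate > 0.05:  # false negatives: the purpose of the monitoring is defeated
        print("False negative rate too high - events are being missed")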

The legal considerations in this report are based on the description of the solution, as presented to us by Doorkeeper. The Data Protection Authority has not reviewed or tested the solution to see how the solution actually functions. If a solution is not functioning as intended, this could have major consequences for the legality of the monitoring. For example, if a solution has errors or defects that mean it collects more data than anticipated, this could constitute a violation of the data minimisation principle in Article 5.

Comprehensive security

Package suppliers – as opposed to suppliers of individual components – will often be in a better position to protect the solutions they offer, because they are better able to exercise control over the products included in the service.

Doorkeeper has stated that they primarily want to be a service provider. This means that they want to be able to exercise considerable control over the cameras, networks and platform. If they opt to establish a control centre, they will also have control over the centre and its operators. Increased control could lead to a higher level of security, but it would also entail additional responsibility for Doorkeeper – a responsibility it is important that they are aware of.

By controlling the video feed from the time it is generated inside the camera until the operator can see it on a monitor, Doorkeeper has more control over communication security in all components. Doorkeeper will also have more control over whether the solution is configured in a secure and privacy-friendly manner, and can reduce the likelihood of the configuration being changed to one that is less secure. By establishing a dedicated control centre and hiring their own operators, Doorkeeper can ensure that the control centre is set up in a secure manner, and they can ensure that operators are trained according to their standards.

If Doorkeeper achieves its goal of becoming a service provider, it will be important for them to be aware of potential challenges that service providers face. For example, they will have direct control over more components than if they were simply a provider of camera equipment. This means they are responsible for making sure a large number of components operate securely. This challenge increases with the number of new customers, or if different customers need different configurations. It will be important for Doorkeeper to be aware of these challenges, and for them to make sure they have a control system in place that is capable of maintaining security for the entire service.

Going forward

Doorkeeper may use the discussion in this report to better comply with requirements of data protection legislation and ensure better data protection within the solution. The Data Protection Authority also hopes the discussion can be useful for other enterprises developing similar technology.

Through these sandbox activities, the Data Protection Authority has also learned a great deal about the possibilities inherent in intelligent video analytics. We will use this new knowledge to further improve our informational work.

A more privacy-friendly form of monitoring?

In this report, we show that it is possible to implement data minimisation measures in intelligent video analytics. Such measures can be implemented by configuring the solution to limit the quantity of personal data that is collected and processed to what is necessary for achieving the purpose. In purely practical terms, this can be achieved by performing real-time analysis of the data – without storing permanent recordings – and by continuously removing identifiable data from the video feed.
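In schematic terms, such a pipeline could look like the hypothetical Python sketch below: every frame is analysed in real time, identifying shapes are censored continuously, and frames are only stored while an event is active. The detection and censoring functions are placeholders for the real analytics.

    # Hypothetical sketch of a data-minimising monitoring loop. The
    # detection and censoring functions are placeholders for the real
    # analytics; frames are modelled as plain strings for simplicity.

    def detect_event(frame: str) -> bool:
        return "smoke" in frame  # stands in for the real event detector

    def censor_shapes(frame: str) -> str:
        return frame.replace("person", "<censored>")  # stands in for real masking

    def monitoring_loop(frames: list[str]) -> list[str]:
        stored: list[str] = []  # permanent storage, empty in normal operation
        for frame in frames:
            if detect_event(frame):
                stored.append(frame)      # event: keep the frame for follow-up
            else:
                _ = censor_shapes(frame)  # no event: censored live view only, nothing stored
        return stored

    print(monitoring_loop(["person walking by", "smoke near the wall"]))  # -> ['smoke near the wall']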

In this report, the Data Protection Authority has also discussed that Doorkeeper’s video monitoring solution could – in some situations – potentially mean an expansion of the types of situations where video monitoring may be used. This will primarily be relevant in situations where cameras are used as sensors, where all human shapes are censored at all stages, and where no recordings at all are made.

The discussions nevertheless show that potential “privacy-friendly” solutions will not normally change where it is possible to use video monitoring of people – e.g. on a public street. The option of data-minimising measures would not lower the threshold of what is considered legal video monitoring in cases where the enterprise does not have a legitimate interest to pursue or a legitimate purpose for monitoring.

The presence of functions in monitoring systems that make processing of personal data less invasive will also require enterprises in the security industry to assess, to a greater extent, the types of personal data they need to be processing to achieve the purpose of the monitoring.

Increased complexity – increased vulnerability?

The security issues we have discussed in this report are not exhaustive. Intelligent video analytics could lead to more complex solutions than more traditional alternatives. This could, in turn, lead to greater threats against the security of the solution and increased risk to data protection. The complexity that intelligent video analytics may involve would indicate that enterprises wishing to transition to such solutions must increase their information security expertise.

In this sandbox project, Doorkeeper has discussed how they can incorporate security in the development, design, configuration and use of the solutions they offer. It will be essential for Doorkeeper to have an effective strategy for maintaining this focus in the future. For example, handling more customers and a wider range of configurations will demand more resources and the maintenance of a larger infrastructure.

New security threats will likely also emerge in step with the technological development. The Data Protection Authority recommends that both the enterprise and the Authority monitor the prevailing threat situation in the security industry.

Which developments should be monitored extra closely?

In the context of discussions in the sandbox, the Data Protection Authority would make particular note of edge computing – which in video monitoring entails that more of the video processing takes place within the camera body – as a field that may alter how personal data is processed in monitoring systems in the years to come. Among other things, it could lead to monitoring systems storing less personal data and to less data being available to operators under normal operating conditions.

As intelligent video analytics becomes more widespread, it is reasonable to expect more discussion of the societal consequences these technologically advanced monitoring systems will bring with them. This has not been a focus of this report, but it should be the topic of a wider debate.