Doorkeeper, exit report: Intelligent video monitoring with data protection as a primary focus

Assessment of three examples of use

In the sandbox project, we have explored how Doorkeeper’s technology may affect the assessment of the legal basis for processing in connection with video monitoring. To highlight some of the relevant issues, this chapter takes a closer look at three examples of Doorkeeper’s technology in use.

These examples were chosen because they represent potential areas of application for Doorkeeper’s solution. At the same time, they raise issues that are transferable to other enterprises that may be developing similar solutions.

The examples we have chosen to explore are:

  1. Video monitoring of the exterior of a commercial building for the purpose of preventing theft
  2. Video monitoring of a street for the purpose of protecting the enterprise’s reputation
  3. Use of video monitoring to detect fire and smoke on the exterior walls of a stave church

The first example is a situation where, in many cases, there will currently be a legal basis for video monitoring. In this example, we therefore discuss the role Doorkeeper’s technology plays in the assessment of legal basis in these situations.

The second example is a situation where, as a rule, it is currently not lawful for a private party to use video monitoring. In this example, we discuss whether Doorkeeper’s technology could make it possible to use video monitoring to a greater extent than what is currently permitted.

In the third example, we describe a situation where the use of video monitoring without censoring and continuous recording would not meet the requirement for necessity in an assessment of legal basis. We then discuss whether use of a type of video monitoring – such as the system offered by Doorkeeper – could still meet this requirement.

What all examples have in common is that recordings are only stored temporarily for a short period of time, unless a predefined event is detected.

In the following, we assess Doorkeeper’s technology in the context of “legitimate interest” as a legal basis for the monitoring, and the impact of the technology on the necessity assessment that this legal basis requires (see Article 6 of the GDPR). All of the examples involve setting up video monitoring equipment in places frequented by the general public, and several aspects of the assessment will therefore be the same across the examples.

A general issue is the role Doorkeeper’s technology plays in the outcome of the assessment of legal basis, and whether this differs from video monitoring without the equivalent censoring function and with continuous recording.

The examples are not an exhaustive review of all aspects that would be relevant in a consideration of legal basis, but they focus on aspects we believe are especially relevant in light of Doorkeeper’s technology.

For general information about the legality of video monitoring, see the Data Protection Authority’s guide here [/personvern-pa-ulike-omrader/overvaking-og-sporing/kameraovervaking/].

Example 1: Video monitoring of the exterior of a commercial building for the purpose of preventing theft

The first example involves video monitoring of the exterior of a commercial building on a pedestrian-only street. In this example, the data controller is the private enterprise that owns the building.

The purpose of the monitoring is to prevent and give notice of theft. The solution handles this by detecting when somebody breaks the shop window and enters the premises, and then sending a notification. When the service provider installs the camera, they angle it toward the front of the building. The solution is configured so that human shapes are censored in the video feed.

In a normal situation, no recordings are made in the camera (apart from time-limited storage in cache memory) or on the platform. Recording is only activated if the solution detects the shop window being broken.
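To make this flow concrete, here is a minimal sketch of how such event-triggered behaviour could be structured. It is purely illustrative: the function names, the cache length and the frame rate are assumptions made for this example and are not taken from Doorkeeper’s actual product.

```python
from collections import deque

# Illustrative sketch of the event-triggered behaviour described above.
# All names, the cache length and the frame rate are hypothetical; they
# are not taken from Doorkeeper's product.

CACHE_SECONDS = 10          # assumed length of the time-limited cache
FRAMES_PER_SECOND = 5       # assumed frame rate

# Rolling cache: old frames fall out automatically and are never persisted.
cache = deque(maxlen=CACHE_SECONDS * FRAMES_PER_SECOND)


def censor_humans(frame):
    """Placeholder for the censoring step: mask human shapes in the frame."""
    return frame


def window_broken(frame) -> bool:
    """Placeholder for event detection, e.g. a broken shop window."""
    return False


def start_recording(cached_frames):
    """Persist the cached frames and notify the data controller."""
    print(f"Alarm triggered - recording started with {len(cached_frames)} cached frames")


def record(frame):
    """Persist a frame while the alarm is active."""
    pass


def process_frame(frame, alarm_active: bool) -> bool:
    """Handle one frame and return whether the alarm is (still) active."""
    if not alarm_active:
        # Normal operation: the live feed is censored, and frames only live
        # briefly in the rolling cache before being discarded.
        cache.append(censor_humans(frame))
        if window_broken(frame):
            alarm_active = True
            start_recording(list(cache))
    else:
        # After the alarm, frames are recorded so the event can be documented.
        record(frame)
    return alarm_active
```

The point of the sketch is that, in normal operation, only the censored feed and a short rolling cache exist, and nothing is persisted until the predefined event occurs.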

Assessment

In many cases, video monitoring will be permitted even when continuous recordings are made and human shapes are not censored from the video feed. The question in this example is therefore whether Doorkeeper’s solution impacts the assessment of the legal basis, and whether the choice between censoring in the camera or on the platform is relevant in this assessment.

Prevention of crime targeting the enterprise may be a legitimate interest, and the prevention and notification of break-ins and theft may therefore be interests the data controller can legitimately pursue. This assessment, however, is contingent on the risk of such events being real – if not, the interest in protecting against such events will not be real either.

Video monitoring must be limited to what is necessary in terms of time and space. If the risk of break-ins is only real after the store is closed, no video monitoring can take place during the shop’s opening hours. As a main rule, only public authorities may monitor public spaces, but it may be permissible for private parties to record an insignificant part of a public street if necessary, such as right outside the front of the shop.

Prior to installing video monitoring equipment, the data controller must consider alternative measures. Alternative measures that may be appropriate for achieving the purpose include physical protection, security guards and other types of alarm systems that do not entail the processing of personal data. If these alternatives are not suitable, the data controller must also consider which type of camera technology is necessary to achieve the purpose. In this example, it will only be necessary to capture humans once a breach of the premises has been detected.

Since the camera will detect a breach of the premises and give a notification, it may be suited to achieving the purpose. This indicates that the use of Doorkeeper’s solution may meet the requirement of necessity. The solution also makes it easier to process personal data only to the extent necessary for the purpose, as human shapes are censored immediately, unless they are captured after an alarm has been triggered.

Because the camera is installed outside the shop on a pedestrian-only street and aimed at the front of the building, it may capture many people, including passers-by. This can, to some degree, be limited by angling the camera and/or censoring areas where no monitoring is needed. Even so, the range of data subjects captured is unpredictable and may change character depending on the time of day, the day of the week and whether other activities or events are going on nearby.

In a pedestrian-only street, data subjects will have some expectation of being captured by video monitoring inside the shop during opening hours, and to some degree also in the immediate vicinity outside at other times of the day. This could indicate that video monitoring equipment installed outside the shop and facing towards it would not normally be considered particularly invasive.

Doorkeeper’s solution will also, in practice, be less invasive for the persons captured by the camera than a camera that continuously records without censoring human shapes. For most people captured by the video monitoring, the processing of personal data will be very limited. However, in this example, Doorkeeper’s data-minimising solution may not have a significant impact on the data subjects’ experience of being watched or on their perception of how invasive the monitoring is. Monitoring in a pedestrian-only street makes it less likely that passers-by and customers are aware of the type of camera technology being used, even if signs with information on how the solution works are posted. There will thus not necessarily be any correlation between the data subjects’ experience and the actual processing of personal data.

Summary

In this example, it will be appropriate to configure the camera to ensure that the measure better complies with the data minimisation requirements. The requirement of necessity imposes a duty on the data controller to select the least invasive technology suited to achieving the purpose of the monitoring. As more privacy-friendly technology becomes available, the view of what processing of personal data is necessary for achieving the purpose changes with it. The technology, however, does not necessarily affect the data subjects’ experience of being watched, and in that respect the invasion of privacy must be considered on an equal footing with other video monitoring solutions.

Example 2: Video monitoring of a street for the purpose of protecting the enterprise’s reputation

As an extension of the previous example, the enterprise also wants to expand the monitoring further out into the pedestrian street. The purpose of this is to prevent, discover and notify of unwanted events that are not directly targeted at the enterprise, but that may affect the enterprise’s reputation and financial interests. This could include activities that make the area where the enterprise is located feel less safe – and therefore less attractive to potential customers.

In this example, the purpose can be achieved by having the camera technology recognise objects like weapons (such as large knives), in addition to the general preventative effect of having visible video monitoring in the area.

Assessment

The question here is whether Doorkeeper’s solution, which includes object recognition and censoring of human shapes, would entail that the enterprise has a legal basis for monitoring further out into the street, for purposes that are more general in nature.

In isolation, keeping the area around the enterprise safe could constitute a legitimate interest if the risk of unwanted events is real and provided the enterprise is directly or indirectly affected. The prevention of unwanted events or crime in public spaces, however, is a responsibility vested in public authorities, primarily the police. This could indicate that keeping order in a public place is not a legitimate interest that may be pursued by a private enterprise, even if the enterprise is negatively affected by crime or other unwanted events in the area.

The main rule is also that only public authorities may monitor public spaces. The police have the authority to use video monitoring for police purposes, including for preventing and stopping criminal activity. Doorkeeper’s solution cannot have an impact on the assessment of whether the enterprise has a legitimate interest or a legitimate purpose for the monitoring.

In this example, the assessment is that video monitoring may be necessary to achieve the purpose, but that the monitoring nevertheless will not be permitted as long as the purpose of it does not constitute a legitimate interest for the enterprise to pursue.

Data subjects may have an expectation that the street will be monitored, but they would expect it to be monitored by public authorities, not by businesses located there. This could indicate that the rights and interests of the data subjects take precedence over the enterprise’s interest in monitoring.

Summary

Choosing a data-minimising technology does not lower the threshold for what is considered lawful video monitoring in cases where the enterprise does not have a legitimate interest in pursuing the monitoring.

Example 3: Use of video monitoring to detect fire and smoke on the exterior walls of a stave church

In the final example, we consider the use of video monitoring of the exterior wall of a stave church for the purpose of triggering an alarm in case of fire.

In this example, the purpose of the video monitoring is to detect fire and smoke in order to quickly extinguish a potential fire and prevent damage to the building. The camera will function as a sensor that can detect flame or smoke patterns, and it will be configured to censor human shapes in the video feed. The camera solution can be configured to ensure that the censoring cannot be removed and the video feed is not cached.
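Purely as an illustration, the constraints described above could be expressed as a locked configuration along the following lines. The field names and values are assumptions made for this report, not Doorkeeper’s actual configuration format.

```python
from dataclasses import dataclass

# Illustrative sketch of the constraints described for the stave-church
# example. Field names and values are hypothetical, not Doorkeeper's
# actual configuration format.


@dataclass(frozen=True)  # frozen: the configuration cannot be altered at runtime
class FireWatchConfig:
    censor_humans: bool = True             # human shapes are always masked
    censoring_removable: bool = False      # the censoring cannot be switched off
    cache_video: bool = False              # the video feed is not cached
    detect_patterns: tuple = ("flame", "smoke")  # patterns that trigger an alarm


def on_detection(pattern: str, config: FireWatchConfig) -> None:
    """Raise a fire alarm; no personal data needs to be stored for this."""
    if pattern in config.detect_patterns:
        print(f"Alarm: {pattern} detected on the exterior wall")


if __name__ == "__main__":
    on_detection("smoke", FireWatchConfig())
```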

While the purpose in this example is not to capture persons on the video feed, the monitoring will still entail processing of personal data which requires a legal basis pursuant to the GDPR. That is because people who move around near the church may be captured by the camera.

For this example, it would be useful to consider various legal bases, and whether the data controller is a public body or a private enterprise will be relevant in that consideration. Regardless of which legal basis the processing relies on, the processing must be “necessary” (see also the data minimisation principle). In the following, we discuss the necessity requirement and then provide some comments on the balancing of interests (in accordance with Article 6 (1) (f)). We assume that extinguishing fire and preventing damage to a stave church is a legitimate interest in accordance with Article 6 (1) (f).

Assessment

In this example, it is especially interesting to assess whether Doorkeeper’s solution can be considered “necessary” in a case where a video monitoring solution without censoring and with continuous recording would not meet the requirement for necessity.

A solution without censoring and with storage of recordings will, in most cases, not meet the necessity requirement, because the purpose can reasonably be expected to be achieved effectively by other, less invasive means. As described in the previous examples, Doorkeeper’s technology will process personal data to a much lesser extent than a solution that stores continuous recordings without the same censoring functionality. The processing will therefore be less invasive for the data subjects.

A camera without alarm functionality would also not be as well suited to the purpose of the processing. If such a camera were used to detect smoke or fire, one or more persons would need to monitor the video feed continuously to identify developing smoke. This would likely not be very practical, and it would be highly invasive for the data subjects.

For the purpose of detecting fire or smoke, a smoke detector or other sensor would be obvious alternatives. A fire and smoke-detecting system that involves the use of video monitoring will be more invasive for the data subjects than the use of smoke detectors or other sensors. That is because the video monitoring solution is processing personal data. However, if Doorkeeper’s solution is still better suited to achieving the purpose than less invasive alternatives, the necessity requirement may be met. A key question therefore is whether Doorkeeper’s solution is better suited to detecting fire and smoke than a sensor-based fire detection system.

According to Doorkeeper, it is difficult to direct fire and smoke into a sensor if the fire is on the exterior wall. Smoke detection is often triggered too late, because the smoke does not accumulate until the fire is well under way or has burned out. Heat-detecting cables cannot be installed on heritage-listed buildings, and they are in any case not an effective method for detecting fire. According to tests commissioned by Doorkeeper, it takes eight minutes to trigger a cable-based sensor, whereas Doorkeeper’s camera solution triggers in six seconds. This could make a decisive difference in saving the building. The Data Protection Authority has based its assessment on Doorkeeper’s description of this functionality.

In this example, we found that Doorkeeper’s camera-based solution could meet the necessity requirement, because the solution could provide significantly more effective fire detection than the described alternatives, such as smoke detectors or similar sensors. Therefore, use of the type of camera technology described by Doorkeeper could meet the necessity requirement in a situation where a video monitoring solution with continuous recording and without censoring likely would not.

In terms of the balancing of interests, protecting a stave church from fire could be considered a very important interest. At the same time, it could be seen as invasive to be subject to video monitoring when visiting a church – a place where the data subjects would not expect this to the same degree as when visiting, for example, a shop. Nevertheless, the data subjects may also have an expectation of a stave church being protected from fire and other damage, and that this could entail video monitoring. The data minimisation in Doorkeeper’s solution entails that the actual processing of personal data will be very limited, and this could make it easier to conclude that installation would be appropriate. In this example, therefore, the data minimisation measures of Doorkeeper’s solution could tip the scales in favour of video monitoring.

Summary

Use of the type of camera technology described by Doorkeeper could likely meet the necessity requirement in a situation where it is used to detect fire on the exterior wall of a stave church. This presupposes that the solution is better suited to achieving the purpose than other, less invasive solutions, such as a smoke detector.

The data minimisation measures in Doorkeeper’s video monitoring solution mean the processing of personal data will be very limited.

In this example, the balance of interests may come out in favour of video monitoring.

How will the choice of design influence the assessment of the legal basis?

In all examples of use, the legal assessment could be influenced by the design of Doorkeeper’s solution. The solution may be designed in two different ways: the censoring of human shapes and other identifying data can take place either in the camera body or on a platform. The data controller must consider how the differences between these designs affect how invasive the processing of personal data will be for the data subjects in the specific case.

While the two alternatives can be set up with the same level of security, the vulnerability will be higher for the solution where the censoring takes place on the platform, compared to the solution where censoring takes place in the camera body. The platform solution will entail processing of personal data in more steps than the solution where the censoring takes place in the camera body. That is because when the censoring occurs on the platform, data will be transferred from the camera to the platform before being censored. A solution with censoring in the camera body – which would involve fewer steps – will therefore, in some cases, be considered less invasive than the solution where the censoring takes place on the platform. Among other things, this is because more people will have access to the platform.
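The difference between the two designs can be illustrated by the order of the processing steps. The sketch below is an assumption made for this report, not a description of Doorkeeper’s implementation.

```python
# Illustrative sketch of the two design alternatives discussed above, reduced
# to the order of processing steps. The function names are hypothetical.


def censor(frame):
    """Mask human shapes and other identifying data in a frame."""
    return frame


def transfer_to_platform(frame):
    """Send a frame from the camera to the platform over the network."""
    return frame


def in_camera_design(raw_frame):
    # Censoring happens in the camera body: only censored data leaves the camera.
    return transfer_to_platform(censor(raw_frame))


def platform_design(raw_frame):
    # Censoring happens on the platform: the uncensored frame is transferred
    # first, adding a step in which personal data exists outside the camera
    # and is accessible to more people.
    return censor(transfer_to_platform(raw_frame))
```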

Furthermore, one might imagine that data subjects who are aware of the two designs would perceive a solution where the video feed is censored in the camera body as less invasive than a design where the video feed is transferred to a platform before the censoring takes place. This, however, requires the data subjects to be well informed, and will likely be most relevant where the camera is installed in an area the data controller controls well enough to ensure that sufficient information is provided to the persons entering it.

The design of the solution may therefore have an impact on the assessment of the legal basis. The risk of the personal data being processed in a manner that was not intended will be another factor in the overall assessment of which of the two solutions to implement. In each specific case, the data controller must assess whether the difference between the designs indicates that one alternative is preferable to the other.

Special categories of personal data

One particular issue associated with the use of video monitoring is that it can be difficult to know in advance which types of personal data will be processed. In all examples of use, there is a risk that special categories of personal data may be processed. Special categories of personal data comprise personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs or trade union membership, as well as genetic data, biometric data processed for the purpose of uniquely identifying a natural person, data concerning health, and data concerning a natural person’s sex life or sexual orientation (see Article 9 of the GDPR).

Before a data controller installs a video camera, it is important to consider whether the processing may entail processing of these types of personal data. This is particularly relevant for the example where a camera is installed outside a stave church where religious services are held. In this example, it would be relevant to consider whether the processing will include information about the data subjects’ religious affiliation. In line with the data minimisation principle, the data controller should explore whether the camera can be angled to prevent, insofar as possible, the processing of such personal data.

The processing of special categories of personal data is normally prohibited. If it is found that special categories of personal data are included in the processing, the processing must meet one of the exception criteria listed in Article 9 (2) for the processing to be lawful.

In its guidelines from 2020, the European Data Protection Board (EDPB Guidelines 3/2019, p. 17) specifies that while video monitoring is suited to collecting vast quantities of data, this will not necessarily entail the processing of special categories of personal data. If video recordings are processed for the purpose of detecting special categories of personal data, Article 9 will apply.

In August 2022, the European Court of Justice (ECJ) (C-184/20) issued a ruling that applied a considerably wider interpretation of what is considered special categories of personal data pursuant to Article 9. In this case, the ECJ concluded that information about a natural person’s sexual orientation could indirectly be deduced based on information about the spouse’s name, and that this was included in the term “special categories of personal data” in Article 9.

Among other things, the ECJ noted that it must be determined whether data which “...by means of an intellectual operation involving comparison or deduction” could reveal this type of information (see paragraphs 120 and 123). The ECJ also noted that the objectives behind the GDPR support a broad interpretation of the term “special categories of personal data”, in that it “ensure[s] a high level of protection of the fundamental rights and freedoms of natural persons, in particular of their private life” (see paragraphs 125 and 127). At the same time, the ECJ appears to have placed a strong emphasis on the context of the case in question.

In this ruling, the ECJ thus found that the term “special categories” in Article 9 is rather broad. At the same time, the ECJ did not provide any clear guidance on how to carry out the specific assessment or which outcomes it may have in other types of situations where personal data is processed. It is uncertain where the line is drawn for the types of processing that should be considered to indirectly reveal special categories of personal data and thus trigger the application of Article 9. Nevertheless, it is relevant to take this ruling into account when a data controller is considering the use of video monitoring. The Data Protection Authority recommends that data controllers monitor legal developments in this regard.