Doorkeeper, exit report: Intelligent video monitoring with data protection as a primary focus

About the project

AS Doorkeeper is a Norwegian start-up, established in 2021. The enterprise develops, manufactures and sells security solutions, such as access control and video monitoring. This sandbox project has focused solely on video monitoring.

What is Doorkeeper’s service?

A primary reason why Doorkeeper was selected as a participant in the sandbox was that they want to use so-called intelligent video analytics to market more privacy-friendly video monitoring. Intelligent video analytics is technology that automatically analyses content from video monitoring cameras.

To achieve better data protection, Doorkeeper wants to use artificial intelligence and machine learning to censor identifying data in the video feed. This includes faces, bodies, licence plate numbers, text and logos. They also want to configure the solution to refrain from making continuous recordings when no pre-defined events are taking place. There will be some temporary storage taking place in the camera body, but these files will later be automatically deleted and will normally not be made available to the operator.

Intelligent video analytics can be configured in many different ways to customise the monitoring for the defined purpose. In the sandbox, we have limited our discussion to primarily focus on three functions within the solution:

  • Censoring of identifying data from the video feed in real time
  • Registration and notification of fire and smoke
  • Registration and notification of criminal activity

Doorkeeper wants to be a one-stop provider. This means they will provide all parts of the solution: cameras, networks, monitoring platform, configurations, control centre and operators. Among other things, the purpose of this is to ensure that the service is used in a way that protects the privacy and security of the persons being recorded.

How does the solution work?

Below is a description of how Doorkeeper’s solution for intelligent video analytics works. This description serves as a basis for the consideration of legal issues later on in the report.

When this report was prepared, the solution was still in the developmental stage. The description below may therefore differ from the final solution.

A step-by-step explanation of the solution

The solution comprises several components: video monitoring cameras, an interface for intelligent video analytics and a monitoring platform (“video management system” – VMS). The monitoring platform collects the video feed from the camera and provides the operator with a user interface; the operator monitors the video feed via this platform.

Doorkeeper has two alternative set-ups for the solution. The most important difference between these alternatives is where the analysis takes place:

  1. The first alternative uses Doorkeeper’s proprietary cameras. These cameras have a built-in processing unit designed specifically for use with artificial intelligence, which means the camera can analyse and process the video feed in real time. Faces and human shapes can therefore be censored locally, within the camera, minimising the transfer of personal data from the camera to the platform.
  • A log is stored both locally in the camera body and on the platform. This log specifies when events are detected, when the censoring is removed, and which user has logged on to the system.
  2. The second alternative entails the use of cameras not provided by Doorkeeper. This could be an option for enterprises that already have video monitoring equipment from other providers installed. For these solutions, the intelligent video analytics takes place on the platform, which means the feed transferred from the camera to the platform is an uncensored, “raw” video feed.
  • As Doorkeeper will not be able to control any logging of activity in third-party cameras, logging will only take place on the platform.
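The log described above records three kinds of entries: detected events, removal of censoring, and user logins. A minimal sketch of what such an audit-log entry could look like is given below; the class, field names and values are illustrative assumptions, not Doorkeeper's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditLogEntry:
    """One audit-log entry; all field names are illustrative assumptions."""
    timestamp: str
    event_type: str          # e.g. "event_detected", "censoring_removed", "user_login"
    camera_id: str
    operator_id: Optional[str] = None  # set for operator actions, None for automatic events

def log_censoring_removed(camera_id: str, operator_id: str) -> AuditLogEntry:
    """Record that the censoring was removed, and by which user."""
    return AuditLogEntry(
        timestamp=datetime.now(timezone.utc).isoformat(),
        event_type="censoring_removed",
        camera_id=camera_id,
        operator_id=operator_id,
    )

entry = log_censoring_removed("cam-01", "operator-42")
```

In alternative 1 such entries would be written both in the camera body and on the platform; in alternative 2, only on the platform.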

Once the customer has selected an alternative (1 or 2), the monitoring system is configured. We limit the description here to the configuration options discussed in the sandbox project:

  1. First, the enterprise must configure the censoring function. They can choose to censor:
  • Faces (the solution will identify faces in the video feed, and these will be censored)
  • People (the solution will identify human shapes in the video feed, and these will be censored in their entirety)
  • Licence plate numbers for vehicles (the solution will identify vehicle licence plates, and these will be censored)
  • Text (any moving surfaces with text on them will be identified and censored)
  2. The enterprise must then configure which events the solution should detect. This could include:
  • Fire and smoke
  • When someone crosses a pre-defined “line” (if a person accesses an unwanted location – e.g. a train track)
  • Objects (the presence of potentially dangerous objects – e.g. weapons, propane tanks or petrol cans – in the video feed)
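The two configuration steps above could be expressed as a simple configuration object along the following lines. This is a hypothetical sketch: the keys, values and helper function are assumptions for illustration, not Doorkeeper's actual configuration format.

```python
# Hypothetical configuration mirroring the two steps above:
# first the censoring categories, then the events to detect.
MONITORING_CONFIG = {
    "censoring": {
        "faces": True,
        "people": False,         # censor entire human shapes instead of faces only
        "licence_plates": True,
        "text": True,
    },
    "event_detection": {
        "fire_and_smoke": True,
        "line_crossing": {"enabled": True, "zone": "track_edge"},
        "dangerous_objects": ["weapon", "propane_tank", "petrol_can"],
    },
}

def active_censoring(config: dict) -> list:
    """Return the censoring categories the customer has switched on."""
    return [name for name, on in config["censoring"].items() if on]

print(active_censoring(MONITORING_CONFIG))  # ['faces', 'licence_plates', 'text']
```

In practice, Doorkeeper would lock parts of such a configuration against customer changes, as discussed in the security section below.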

When the interface detects one of the pre-defined events, it triggers a signal to remove the censoring. The system then initiates a recording, which is transferred to the platform for permanent storage. At the same time, a notification is sent to the operator.
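The reaction chain on a detected event can be sketched as a small state machine: the feed switches from censored to uncensored recording, and the operator is notified. The class and method names below are assumptions made for illustration.

```python
from enum import Enum

class FeedState(Enum):
    CENSORED = "censored"
    UNCENSORED_RECORDING = "uncensored_recording"

class CameraChannel:
    """Sketch of one camera's feed state; names are illustrative assumptions."""
    def __init__(self):
        self.state = FeedState.CENSORED   # normal operation: censored live feed
        self.notifications = []

    def on_event_detected(self, event_type: str) -> None:
        # A pre-defined event removes the censoring, starts a permanently
        # stored recording, and notifies the operator.
        self.state = FeedState.UNCENSORED_RECORDING
        self.notifications.append(f"Event detected: {event_type}")

channel = CameraChannel()
channel.on_event_detected("fire_and_smoke")
print(channel.state.value)  # uncensored_recording
```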

If the operator believes that an event has occurred and the solution has failed to identify it, the censoring can be deactivated manually. In such cases, the operator must specify a reason for removing the censoring, the time period for which to remove it, and their own unique user ID. This applies both to censoring taking place in the camera body and censoring on the platform.
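The manual override described above requires three pieces of information before censoring can be deactivated. A minimal sketch of such a guard, with assumed function and field names, might look like this:

```python
from datetime import timedelta

def manual_override(reason: str, duration: timedelta, operator_id: str) -> dict:
    """Deactivate censoring manually; all three audit fields are mandatory.

    Field names are illustrative assumptions, not Doorkeeper's actual API.
    """
    if not reason or not operator_id:
        raise ValueError("A reason and a unique operator ID are required")
    if duration <= timedelta(0):
        raise ValueError("The override period must be positive")
    return {
        "action": "censoring_deactivated",
        "reason": reason,
        "duration_seconds": duration.total_seconds(),
        "operator_id": operator_id,
    }

record = manual_override("suspected break-in", timedelta(minutes=10), "op-17")
```

Requiring the reason, period and user ID up front is what makes the audit log meaningful: every uncensored interval can be traced back to a named operator and a stated justification.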

In order for the system to be able to identify events and people in the video feed, the algorithm must be trained. Doorkeeper handles this by feeding the algorithm with images from commercially available databases. Action patterns and analytics are then defined. In the same process, Doorkeeper performs manual adjustments of training parameters, to ensure the algorithm produces more accurate results. Doorkeeper does not train its algorithm with recordings – or other types of data – from its cameras.

For both alternatives, Doorkeeper will cache the last few minutes of video, so that any recordings that are permanently stored will also include the minutes before the event takes place. This cache is deleted continuously, and the operator does not have access to the cache memory. The duration of cache storage will be assessed based on the specific event, but should always be as short as possible. Recordings cannot be deleted manually by the operator, but will be stored for a pre-defined time period.
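The pre-event cache described above behaves like a rolling buffer: the oldest material is dropped continuously, and the whole buffer is handed over only when an event triggers a permanent recording. A minimal sketch, assuming a fixed frame capacity:

```python
from collections import deque

class PreEventCache:
    """Rolling cache of the last N frames; older frames are dropped automatically.

    The class and method names are illustrative assumptions.
    """
    def __init__(self, max_frames: int):
        self._frames = deque(maxlen=max_frames)  # old frames fall out continuously

    def push(self, frame) -> None:
        self._frames.append(frame)

    def flush_for_recording(self) -> list:
        # Called when an event is detected: the cached pre-event frames are
        # prepended to the permanent recording, then the cache is cleared.
        frames = list(self._frames)
        self._frames.clear()
        return frames

# With e.g. 25 fps and a 5-minute window, max_frames would be 25 * 60 * 5.
cache = PreEventCache(max_frames=3)
for i in range(5):
    cache.push(i)
print(cache.flush_for_recording())  # [2, 3, 4]
```

Because the operator has no read access to this buffer, only the flushed copy that becomes part of a permanent recording is ever visible outside the camera (or, in alternative 2, the platform's analytics component).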

Doorkeeper aims to communicate clearly about the use and presence of their cameras. Among other things, they plan to have a red front on their camera bodies for easy visibility, as well as signage to inform about the use of “intelligent monitoring”.

Solution security

To secure the communication between the cameras and the platform, Doorkeeper plans, among other things, to encrypt the video feed and to use dedicated networks for the transfer. Doorkeeper will also retain some control over how the solution is configured, to better ensure it is used in a privacy-friendly and secure way. We will return to this in the chapter on security issues later in this report.

When installing and using Doorkeeper’s services, customers must comply with a number of requirements. Among other things, the customer must use an installation service authorised for ecom networks (see the Norwegian Communications Authority website). Furthermore, the system must be configured in accordance with Doorkeeper’s specifications.

One obvious weakness of censoring algorithms is that the operator controlling the cameras can arbitrarily turn off or change the censoring level and criteria, or activate the recording function when there has been no event. Doorkeeper has therefore considered limiting customer access and requiring contact with a control centre to make changes.

Which types of data will the solution register?

There is no doubt that Doorkeeper processes personal data, even though identifying data will be censored in the video feed.

In the first alternative, the video feed is censored directly in the camera, before it is forwarded to the platform. This means that much of the identifying data will only be processed by the camera in a normal operating situation – i.e. when no event has been detected. An uncensored recording is also stored temporarily within the camera (cache memory). This is deleted after a predefined interval, e.g. five minutes.

Doorkeeper has implemented measures to ensure that operators do not have direct access to the recording stored in the camera body. Operators will therefore only have access to the censored video feed. When the camera detects a predefined event, the censoring is removed from the video feed and a recording is initiated and sent to the platform.

The recording made after an event has been detected will include the minutes of uncensored recording stored in the camera. This will enable the operator to see what happened in the minutes prior to the event.

In the second alternative, the solution will process the same types of data as in the first. The difference is that the censoring takes place on the platform, which means an uncensored video feed must be transferred between the camera and the platform. Nevertheless, operator access will be the same as in the first alternative.

Even when only a censored feed is available, it cannot be ruled out that the video may still be linked to an individual. One example is a person who is relatively tall and who passes the same place at approximately the same time every day. Individuals may also be identifiable from context (where they are at a given time) if the data is combined with other sources – e.g. uncensored video monitoring elsewhere, or other collected data, such as registered logs, that can be linked to a natural person.

It may also be possible to derive personal data from text or logos shown in an image, e.g. when a company name or logo on a vehicle can be linked directly to a natural person – provided this is not censored.