In this chapter, we discuss some security issues relevant to the type of technology Doorkeeper wants to use, and offer some general comments on the legal requirements for security.
Security is a vast topic, and in many cases, it is a prerequisite for good data protection. It is difficult to ensure data protection without satisfactory security. In this report, we are unable to cover the topic of security in any great detail. We have therefore only included what was discussed in the sandbox sessions with Doorkeeper.
Legal requirements for data protection
Article 32 of the GDPR regulates requirements for security of processing. Both Doorkeeper and their customers have a duty to “implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk”. Which measures are necessary for compliance with the legislation will therefore vary in line with the level of risk the enterprise is facing and the general level of risk in society. As a provider and developer of the solution, Doorkeeper should ensure they are offering a very secure solution.
Doorkeeper must furthermore ensure that the security of the solution is enduring, both in terms of the solution’s security remaining up-to-date with technological developments, and in terms of continuously addressing any vulnerabilities. A good way to ensure enduring security is to be ahead of developments – always remaining abreast of what is considered best practice and continuously implementing best practices in the products one is developing.
The law requires both technical and organisational measures. “Organisational” must be interpreted broadly and may also include physical measures. To protect the confidentiality of persons subjected to lawful video monitoring, a wide range of measures may therefore be needed. Examples of such measures include:
- protecting the room where monitoring takes place, so that unauthorised persons cannot enter the room or see the screens from outside
- developing procedures for use of the system
- ensuring adequate training in how to manage events
- ensuring comprehensive training of users
- requiring users to sign a declaration of confidentiality
- defining the access rights of user accounts
- developing procedures for regular reviews of logs
Protecting communication between the camera and the platform
One topic of discussion in the sandbox sessions was protecting the communication between the camera and the platform in Doorkeeper’s solution. Will the two alternatives for configuration of the solution impact which security measures Doorkeeper must implement for GDPR compliance?
If the video feed from the camera does not include data relating to identifiable individuals, the GDPR will only apply to the processing taking place inside the camera (and not to the communication from the camera to the platform). However, this presupposes that the camera only sends out entirely anonymous data at all times, and this will likely not be the case for Doorkeeper’s solution.
The threshold for classifying data as anonymous is very high. Even if human bodies are completely censored, it cannot be ruled out that some censored shapes could be linked to an individual. For example, the censored shape of a relatively tall person who passes the same location at approximately the same time every day may still be linkable to that person.
As it will likely be impossible to guarantee full anonymity through censoring, and as the camera, even in alternative A, must at times be able to transfer uncensored video, the communication between the camera and the platform should, as a starting point, be protected as if personal data were always being transferred. This does not appear to be a major challenge, however, as most modern cameras have some form of built-in communication protection, such as transport layer security.
In solutions where no encryption is used, the data controller – and any data processors – must compensate with other measures to ensure that no unauthorised persons can access or change the video feed.
It also follows that solutions with analytics and censoring functionality in the camera body (alternative A) will likely entail a lower risk than solutions where the same functions are provided on a platform (alternative B). This is because the video feed in alternative A is not transferred via a network before being processed, unless an event is detected and the censoring is removed. Unauthorised persons who gain access to the video feed will therefore usually only gain access to censored video. Reducing the quantity of data transferred also reduces vulnerability, because an attacker can only obtain uncensored video by compromising the software in the camera itself. However, a lower risk associated with alternative A does not necessarily mean that alternative B is associated with unacceptable risk. The risk must be assessed in light of any other security measures that may be implemented. The same applies to solutions without any form of video analytics or censoring.
It will be up to the data controllers and data processors to assess whether the built-in security measures in the cameras are sufficient. When the sandbox project started, Doorkeeper had already concluded that their solutions would use dedicated networks, separate from those of their customers. The purpose was to avoid the risks associated with using networks Doorkeeper is unable to control. The discussions in the sandbox led Doorkeeper to make a further change: they now want to use a VPN (Virtual Private Network) solution provided by a third party to additionally protect communication between the cameras and the platform. This provides end-to-end encryption between the camera and the platform, protecting the communication even better than ordinary transport layer security alone. This could increase security in both alternatives of Doorkeeper’s solution.
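The transport layer security mentioned above can be illustrated with a minimal sketch. The Python function below builds a TLS client context of the kind a platform component might use when connecting to a camera; the function name and parameters are illustrative, not part of Doorkeeper’s actual solution:

```python
import ssl

def make_camera_tls_context(ca_bundle=None):
    """Build a TLS client context for the camera-to-platform link.

    Certificate verification and hostname checking are mandatory, so a
    misconfigured or impersonated endpoint is rejected rather than
    silently accepted. `ca_bundle` may point to the CA certificate(s)
    that signed the cameras' certificates; if None, the system default
    trust store is used.
    """
    ctx = ssl.create_default_context(cafile=ca_bundle)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.verify_mode = ssl.CERT_REQUIRED           # the peer must present a valid certificate
    ctx.check_hostname = True                     # and the certificate must match the hostname
    return ctx
```

A connection would then be opened with `ctx.wrap_socket(sock, server_hostname=...)`, which fails if any of the checks above are not satisfied.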
Regular product updates
In the sandbox discussions, Doorkeeper explained that cameras with known, unrepaired vulnerabilities are a problem in the security industry. Some enterprises continue to use such cameras without installing updates, even though the manufacturer has made updates available.
If measures are not implemented to eliminate vulnerabilities, there will be greater risk and this could constitute a violation of the GDPR. It is also worth noting that the fewer links in the supply chain, the fewer entities one has to deal with in order to keep the products one uses updated.
A vulnerability in a VPN solution could result in unauthorised parties gaining access to an enterprise’s internal network. Similarly, a vulnerability in a camera may lead to unauthorised parties gaining access to the video feed. When attackers learn of vulnerabilities, they will generally try to exploit them as soon as possible. Manufacturers must therefore make updates available as soon as possible after learning about vulnerabilities, so that the level of security can be maintained. There could, however, be several reasons why a manufacturer does not provide updates for a specific product. For example, the manufacturer may have removed support for that product, the manufacturer may not be able to provide product support, or the manufacturer may have gone bankrupt. If it is not possible to obtain updates from the manufacturer, the data controller or data processor must assess whether the vulnerability can be mitigated by other means, or if the equipment must be replaced.
Having mechanisms in place to stay informed about vulnerabilities in the products one uses is just as important as actually updating the products: if one is not aware of a vulnerability, one cannot do anything about it.
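As a concrete illustration of such a mechanism, the sketch below compares an inventory of installed firmware versions against a list of vendor advisories and flags devices that lack the fixed version. The data structures and version scheme are assumptions made for the example; a real setup would pull advisories from each vendor’s security feed:

```python
def find_vulnerable_devices(inventory, advisories):
    """Flag devices whose firmware is older than the first fixed version.

    `inventory`:  {device_id: (model, installed_firmware_version)}
    `advisories`: {model: first_fixed_version}  -- hypothetical advisory data.
    Versions are dotted integers, e.g. "2.1.3" is compared as (2, 1, 3).
    """
    def parse(version):
        return tuple(int(part) for part in version.split("."))

    flagged = []
    for device_id, (model, installed) in inventory.items():
        fixed = advisories.get(model)
        # A device is vulnerable if an advisory exists for its model
        # and the installed version predates the fixed one.
        if fixed is not None and parse(installed) < parse(fixed):
            flagged.append(device_id)
    return flagged
```

Running such a check regularly, and acting on its output, is one way to make "staying informed" an operational routine rather than a good intention.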
Access control

Access control to both the cameras and the platform in a video monitoring system is necessary to ensure that the video feed is only accessible to those authorised to view it, to prevent snooping and to prevent monitoring data from being used for purposes other than originally intended. For Doorkeeper and its customers, this will, among other things, entail an assessment of who shall have access to the system and how extensive that access should be. This could, for example, concern a user’s access to
- monitor the video feed,
- manually remove censoring,
- review stored recordings,
- manually delete recordings initiated in error, or
- update cameras and other parts of the system.
An operator whose job is to monitor the video feed must necessarily have access to view it, but they likely do not need access to install system updates. The CEO of the enterprise will likely not need access to anything other than the recordings stored after an actual event. The system administrator, in turn, will need access to update cameras and other products.
The system should thus be designed to allow different access levels for each user or user group.
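A deny-by-default access model along these lines can be sketched as a simple role-to-permission mapping. The role and permission names below are hypothetical, chosen only to mirror the access list above:

```python
# Hypothetical roles and permission names mirroring the access list above.
ROLE_PERMISSIONS = {
    "operator":      {"view_feed", "remove_censoring"},
    "ceo":           {"review_recordings"},
    "administrator": {"update_system"},
}

def is_allowed(role, action):
    """Deny by default: an action is permitted only if it has been
    explicitly granted to the user's role."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Any role or action not explicitly listed is denied, so a new user group has no access until it is deliberately granted.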
Doorkeeper has assumed that they will configure the solutions for their customers, to minimise the risk of the solution being used in ways that do not align with Doorkeeper’s intentions. Discussions on this topic also led to Doorkeeper considering setting up its own control centre, with its own operators, to ensure better control over how their monitoring systems are being used. Doorkeeper maintains that it is very important to them that the solutions they offer are used in a lawful and ethical manner.
What if the technology does not function as intended?
One relevant risk in the solution Doorkeeper is developing is that the algorithm may produce false positives or false negatives. If this occurs, the solution will either remove censoring and initiate permanent storage of recordings when no event has occurred, or ignore an event it is supposed to detect.
False positives could entail that the processing of personal data is more extensive than what the data controller can lawfully process. False negatives, on the other hand, could entail that the monitoring does not serve the purpose it is intended and trained to serve. Anyone who uses artificial intelligence must monitor false positives and negatives, and continuously adjust the solution to ensure it functions as intended.
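Such monitoring presupposes that detections are regularly compared against labelled ground truth. A minimal sketch of computing the two error rates, assuming each sample records what the detector predicted and what actually happened:

```python
def error_rates(results):
    """Summarise detector performance from labelled samples.

    `results` is a list of (predicted_event, actual_event) booleans.
    Returns (false_positive_rate, false_negative_rate).
    """
    fp = sum(1 for pred, actual in results if pred and not actual)
    fn = sum(1 for pred, actual in results if not pred and actual)
    negatives = sum(1 for _, actual in results if not actual)
    positives = sum(1 for _, actual in results if actual)
    # Guard against division by zero when a class is absent in the sample.
    fpr = fp / negatives if negatives else 0.0
    fnr = fn / positives if positives else 0.0
    return fpr, fnr
```

Tracking these two rates over time would, for instance, reveal whether a retrained model has started removing censoring more often without a corresponding event.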
The legal considerations in this report are based on the description of the solution, as presented to us by Doorkeeper. The Data Protection Authority has not reviewed or tested the solution to see how the solution actually functions. If a solution is not functioning as intended, this could have major consequences for the legality of the monitoring. For example, if a solution has errors or defects that mean it collects more data than anticipated, this could constitute a violation of the data minimisation principle in Article 5.
Package suppliers – as opposed to suppliers of individual components – will often be in a better position to protect the solutions they offer. That is because they are better able to exercise control over the products that are included with the service.
Doorkeeper has stated that they primarily want to be a service provider. This means that they want to be able to exercise considerable control over the cameras, networks and platform. If they opt to establish a control centre, they will also have control over the centre and its operators. Increased control could lead to a higher level of security, but it would also entail additional responsibility for Doorkeeper – a responsibility it is important that they are aware of.
By controlling the video feed from the time it is generated inside the camera until the operator can see it on a monitor, Doorkeeper has more control over communication security in all components. Doorkeeper will also have more control over whether the solution is configured in a secure and privacy-friendly manner, and can reduce the likelihood of the configuration being changed to one that is less secure. By establishing a dedicated control centre and hiring their own operators, Doorkeeper can ensure that the control centre is set up in a secure manner, and they can ensure that operators are trained according to their standards.
If Doorkeeper achieves its goal of becoming a service provider, it will be important for them to be aware of potential challenges that service providers face. For example, they will have direct control over more components than if they were simply a provider of camera equipment. This means they are responsible for making sure a large number of components operate securely. This challenge increases with the number of new customers, or if different customers need different configurations. It will be important for Doorkeeper to be aware of these challenges, and for them to make sure they have a control system in place that is capable of maintaining security for the entire service.