Secure Practice – exit report

Secure Practice wants to develop a service that profiles employees with regard to the cyber security risk they pose to their organization. The purpose is to enable follow-up with adapted security training, based on which profile categories the employees fall into. The project entered the sandbox in the spring of 2021. Here is the exit report.

Summary

Allowing yourself to be profiled can make life simpler and more interesting. It’s pleasant when streaming services get their suggestions right. It’s also undoubtedly more motivating if a course is tailored to exactly your level of knowledge and interests. Profiling can have major advantages on both a personal and a societal level. However, in the digital age, seeking a more personalised life is a double-edged sword: the more precise the personalisation, the more precise the personal data held about you, and with that comes a risk of abuse.

Profiling in the workplace can be particularly challenging since the relationship between employees and employers has an inherent power imbalance. Employees may find it invasive and insulting. They may feel they are being monitored and fear misuse of the information.

But is there a method that exploits the advantages of profiling while reducing or removing the drawbacks?

This is the starting point for this sandbox project, which examines a new service Secure Practice wants to offer the information security market. The service will use artificial intelligence (AI) to provide individual and personalised security training to employees in clients’ businesses.

People are unique, and security training is often too general to be effective. With artificial intelligence, Secure Practice can offer personalised, and therefore more pedagogically effective, training. Both the business and its employees benefit from better and more engaging training, as well as from avoiding fraud and hacking.

Another purpose of the service is to give the business a statistical-level overview of knowledge and risk, so it can prioritise measures better. The drawback is that individual employees could perceive the mapping as invasive. The sandbox project concerns how such a service can be made privacy-friendly.

Conclusions

  • Data controller responsibility: The employer and Secure Practice share responsibility for complying with the data protection rules, depending on the phase. When the AI tool is used in the companies, the employer is the initial data controller for the processing. When Secure Practice withholds from the employer information about which employee the tool is profiling, the employer and Secure Practice are joint data controllers. When the AI tool is developed further during the learning phase, Secure Practice is the sole data controller.
  • Legality: It is possible to use and develop the service within both general privacy regulations in the EU and special regulations on privacy in working life in Norway.
  • The data subject’s rights and freedoms: Given the innovative technology and the goal of predicting personal interests and behaviour, it was decided to carry out a Data Protection Impact Assessment (DPIA) for the project. Ultimately, the assessment showed that the service poses a low risk of discrimination.
  • Transparency: The project has assessed whether there is a legal obligation to explain the underlying logic in the solution. The conclusion is that there is no legal obligation in this specific case. The sandbox nevertheless recommends more transparency than what is legally required.

Next steps

This exit report is important for clarifying the scope of the ban on monitoring in the Email Monitoring Regulation (Regulation on Employer's Access to Emails and Other Electronically Stored Material of 2 July 2018 No. 1108).

The Norwegian Data Protection Authority will share its experiences from the project with the Ministry of Labour and Social Inclusion, which is responsible for the Regulation, giving the Ministry the opportunity to clarify the rules.