Secure Practice – exit report

About the project

Secure Practice is a Norwegian technology company that focuses on the human aspect of data security work.

Their cloud services are used by over 500 companies, with end users in more than 30 countries. One of the services they currently offer is MailRisk, which helps employees to recognise whether suspicious emails are dangerous or safe by using artificial intelligence. Secure Practice also develops and supplies integrated services for e-learning and simulated phishing.

Tailor-made security training

Now Secure Practice wants to use artificial intelligence to provide personalised security training to employees at their clients’ companies. Taking each employee’s interests in and knowledge of data security as the starting point makes the training more targeted and pedagogical, and therefore more effective. The tool will also provide employers with reports containing aggregated statistics on employees’ knowledge of and interest in data security. These reports will make it possible for the employer to monitor development over time, while also identifying specific risk areas and uncovering any need for collective measures.

In order to provide personalised training, Secure Practice will collect and collate relevant data on the employees at the client’s company. The profiling will place each end user in one of several “risk categories”, which will determine what training he or she receives going forward. Risk will be recalculated continuously and automatically, so that employees can be moved to a new category when the underlying data dictates it.

The development of the tool is based on a range of scientific studies of human security attitudes. Based on these studies, Secure Practice has identified a set of factors among employees that need to be mapped. Attention has been focused on developing a flexible statistical model and technological solution for processing and linking various data in multiple dimensions, including time. With this as a starting point, the risk assessment itself can be done just as flexibly, based on whichever hypotheses form the basis of the model at any given time. These assumptions are thus programmed in advance, and the “intelligence” is, in the first instance, a product of the quality of the hypotheses.
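To make this concrete, the following is a minimal illustrative sketch in Python of how pre-programmed hypotheses could feed a risk categorisation. The signal names, weights and thresholds are assumptions made purely for illustration and do not describe Secure Practice’s actual model.

```python
from dataclasses import dataclass

# Illustrative only: hypothetical signals, weights and thresholds,
# not Secure Practice's actual risk model.
@dataclass
class EmployeeSignals:
    phishing_reports: int      # suspicious emails the employee reported
    phishing_clicks: int       # simulated phishing links clicked
    training_completed: float  # share of assigned modules completed (0-1)
    quiz_score: float          # knowledge score (0-1)

# Each "hypothesis" is a pre-programmed rule mapping a signal to a risk contribution.
HYPOTHESES = [
    ("clicks suspicious links", lambda s: 2.0 * s.phishing_clicks),
    ("rarely reports email",    lambda s: 1.0 if s.phishing_reports == 0 else 0.0),
    ("low knowledge score",     lambda s: 1.5 * (1.0 - s.quiz_score)),
    ("skips training",          lambda s: 1.0 * (1.0 - s.training_completed)),
]

def risk_category(signals: EmployeeSignals) -> str:
    """Sum the hypothesis contributions and map the total score to a category."""
    score = sum(rule(signals) for _, rule in HYPOTHESES)
    if score < 1.5:
        return "low"
    if score < 3.0:
        return "medium"
    return "high"

# Recalculation is simply re-running the function on fresh data, so an employee
# can move to another category when the underlying data changes.
print(risk_category(EmployeeSignals(phishing_reports=0, phishing_clicks=2,
                                    training_completed=0.3, quiz_score=0.4)))
```

In such a scheme the category is a pure function of the latest data, which is what allows the continuous, automatic recalculation described above.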

It will also be interesting to apply machine learning to historical data. Such learning can be based on usage metrics in order to identify patterns of improvement, or possibly deterioration. This can then help refine the hypotheses and develop even more accurate measures and recommendations in the service in the future. A small sketch of this feedback loop follows below.
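As one possible illustration of how usage metrics could feed back into the hypotheses, the sketch below re-estimates the weight of each hypothesis from historical outcomes using a simple logistic model. The data, variable names and learning method are assumptions chosen for illustration, not a description of Secure Practice’s implementation.

```python
import math

# Illustrative only: a toy re-weighting of pre-programmed hypotheses based on
# historical outcomes. The numbers below are invented for the example.

# Historical records: for each employee-period, the contribution of each of the
# four hypotheses above and whether a real incident followed afterwards.
history = [
    # (hypothesis contributions, incident occurred)
    ([2.0, 1.0, 0.9, 0.7], True),
    ([0.0, 0.0, 0.2, 0.1], False),
    ([2.0, 0.0, 0.6, 0.5], True),
    ([0.0, 1.0, 0.3, 0.0], False),
]

n_hypotheses = len(history[0][0])

def reweight(history, lr=0.1, epochs=200):
    """Learn how predictive each hypothesis actually is (logistic regression via SGD)."""
    weights = [1.0] * n_hypotheses  # start from the hand-set equal weights
    for _ in range(epochs):
        for contributions, incident in history:
            z = sum(w * c for w, c in zip(weights, contributions))
            pred = 1.0 / (1.0 + math.exp(-z))
            error = pred - (1.0 if incident else 0.0)
            weights = [w - lr * error * c for w, c in zip(weights, contributions)]
    return weights

# Higher weight = hypothesis found more predictive of incidents historically,
# which could in turn inform how the programmed hypotheses are revised.
print(reweight(history))
```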

Secure Practice has been working on the new service since Innovation Norway granted funding for the project in 2020. The Research Council of Norway has also granted funding to further develop the theories behind the tool, through a research project with the Norwegian University of Science and Technology. At the launch of the sandbox project, Secure Practice had a theoretical model in place, and technical implementation of the key risk model was within reach. Because the new service is integrated into the existing service platform, much of the tool is already complete. At the same time, Secure Practice still had a number of open questions to consider about both data collection and the user interface.

Goals of the sandbox process

The sandbox process is divided into three sub-goals, each with its own deliverables. The sub-goals are organised thematically around the three central roles involved when the service is used: the employers, Secure Practice and the employees.

  1. Data controller. Who is responsible for complying with the data protection regulations? Is it the operator of the service, i.e. Secure Practice, the company that introduces the service in the workplace, or are they joint data controllers? Is the answer the same during the usage phase as when the tool is further developed during the learning phase?
  2. Legality. Can the tool be used and further developed legally? The project will clarify what legal basis the data controllers may have for profiling employees in order to offer companies individually adapted security training and statistical reports. The project will also consider whether such profiling is subject to the prohibition against monitoring in the Email Monitoring Regulation.
  3. The data subject. How does the tool affect the employees? The project will clarify how the data subject is affected with respect to what data forms the basis for the processing, the risks of such processing, fairness, transparency and procedures for exercising the data subject's rights.