How to succeed with transparency

Transparency when using AI in the workplace

Secure Practice is a Norwegian technology company that focuses on the human aspect of data security. In the sandbox, we took a closer look at a new service that Secure Practice is developing: the use of artificial intelligence (AI) to provide individually tailored security training to employees.

Taking each employee's interest in and knowledge of data security as the starting point makes the training more targeted and pedagogically effective. The tool will also provide companies with reports containing aggregated statistics on employees' knowledge of and level of interest in data security.
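
As a simple illustration of how such aggregated reporting can keep individual employees out of the employer's view, the sketch below counts knowledge levels per category and withholds any category with too few people in it. The category labels and the minimum group size are assumptions made for this example, not details from Secure Practice's solution.

    from collections import Counter

    # Minimal sketch: individual results stay in the tool; the employer only
    # receives counts per knowledge level. The threshold below is an assumed
    # safeguard so that no single employee can be singled out.
    MIN_GROUP_SIZE = 5

    def aggregate_report(knowledge_levels):
        """knowledge_levels: e.g. ["low", "medium", "high", ...], one entry per employee."""
        counts = Counter(knowledge_levels)
        return {level: n for level, n in counts.items() if n >= MIN_GROUP_SIZE}

    print(aggregate_report(["low"] * 2 + ["medium"] * 7 + ["high"] * 12))
    # {'medium': 7, 'high': 12} -- the two "low" results are withheld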

In order to provide personalised training, Secure Practice will collect and collate relevant data on the client's employees. The profiling will place each end user in one of several “risk categories”, which determine what training they receive going forward. Risk will be recalculated continuously and automatically, so that employees can be moved to a new category when the underlying data indicate it.
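
A minimal sketch of the kind of profiling described above might look as follows. The signals, weights, thresholds and category labels are illustrative assumptions, not Secure Practice's actual model; the point is only that the category is recomputed whenever new data arrive.

    from dataclasses import dataclass

    @dataclass
    class EmployeeSignals:
        quiz_score: float        # most recent quiz result, 0.0-1.0 (assumed signal)
        modules_completed: int   # number of finished training modules (assumed signal)
        phishing_clicks: int     # clicks in simulated phishing exercises (assumed signal)

    def risk_category(s: EmployeeSignals) -> str:
        # Illustrative scoring rule; the weights and cut-offs are assumptions.
        score = s.quiz_score + 0.1 * s.modules_completed - 0.3 * s.phishing_clicks
        if score >= 1.0:
            return "low risk"
        if score >= 0.5:
            return "medium risk"
        return "high risk"

    # Recalculated whenever new data arrive, so an employee can move between categories.
    before = risk_category(EmployeeSignals(quiz_score=0.4, modules_completed=1, phishing_clicks=2))
    after = risk_category(EmployeeSignals(quiz_score=0.9, modules_completed=4, phishing_clicks=0))
    print(before, "->", after)  # high risk -> low risk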

Employee profiling can be challenging because of the imbalance of power between employer and employee. Profiling can quickly be perceived as an infringement of the individual's personal privacy. In addition to examining the data flow and ensuring that the employer does not gain access to detailed information about the individual, transparency regarding the use of data was a key topic in this project.

Must employees be informed about the algorithm’s underlying logic?

The tool at the centre of this sandbox project falls outside the scope of the GDPR's Article 22, since it does not produce automated decisions that have legal effects for the employees or that similarly significantly affect them. Accordingly, no duty to explain how the algorithm functions follows directly from this provision.

The project assessed whether the principle of transparency, read in light of GDPR Recital 60, could imply a legal duty to disclose how the algorithm functions. According to the GDPR’s Article 5(1)(a), the data controller must ensure that personal data are processed fairly and transparently. Recital 60 highlights that the principle of transparent processing requires that the data subject be informed of the existence of profiling and the consequences thereof.

GDPR Recital 60:

“The principles of fair and transparent processing require that the data subject be informed of the existence of the processing operation and its purposes. The controller should provide the data subject with any further information necessary to ensure fair and transparent processing taking into account the specific circumstances and context in which the personal data are processed. Furthermore, the data subject should be informed of the existence of profiling and the consequences of such profiling. Where the personal data are collected from the data subject, the data subject should also be informed whether he or she is obliged to provide the personal data and of the consequences, where he or she does not provide such data. That information may be provided in combination with standardised icons in order to give in an easily visible, intelligible and clearly legible manner, a meaningful overview of the intended processing. Where the icons are presented electronically, they should be machine-readable.”

The European Data Protection Board highlights the importance of disclosing what consequences the processing of personal data will have, and of ensuring that the processing does not come as a surprise to those whose personal data are processed.

Although the use of personal data in this project does not trigger a legal obligation to explain the system’s underlying logic, it is good practice to act in such a transparent fashion that the user is able to understand how their data are used. Transparency about how Secure Practice’s tool works can help to engender trust in the AI system.

User involvement – how to build trust through transparency

What information is relevant to give employees who will use the tool, and when should that information be provided? Two focus groups were established to examine these questions. One comprised employees of a major Norwegian enterprise, while the other comprised representatives from a trade union organisation.

One of the questions discussed in the focus groups was whether employees should be given an explanation of why the algorithm presents them with a particular proposal. For example, why is an employee encouraged to complete a specific learning module (“because we see that you did not do so last week”) or take a quiz on cyberthreats (“because we see you have completed the module and it can be a good idea to check what you remember”)?

A specific example could be an employee receiving a suggestion to complete a certain type of training because they had been caught out by a phishing exercise. The focus group discussed whether such detailed information might make users feel they were being monitored, which could in turn lead to a loss of trust. The discussion, however, pointed in the opposite direction: most agreed that providing this type of information was a good idea, because it would help the employee understand how the information was used and because such transparency could engender trust in the solution.
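
One way to make such explanations a built-in part of every suggestion is to attach a plain-language reason to each recommendation, along the lines the focus groups discussed. The sketch below uses hypothetical trigger names and message texts; it is not taken from Secure Practice's implementation.

    # Each training suggestion carries the reason it was generated, so the user
    # is not left guessing how their data were used. Triggers and texts are
    # assumptions for the example.
    EXPLANATIONS = {
        "phishing_exercise_failed": "because you were caught out by a recent phishing exercise",
        "module_not_completed": "because we see that you did not complete last week's module",
        "module_completed": "because you finished the module and it can be a good idea to check what you remember",
    }

    def recommend(trigger: str, activity: str) -> dict:
        return {"suggested_activity": activity, "reason": EXPLANATIONS[trigger]}

    print(recommend("phishing_exercise_failed", "Complete the module on recognising phishing"))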

Focus group members also discussed which types of data were relevant to include in such a solution, and which data should not be included. The system can, potentially, analyse everything from which learning modules have been completed and the results of post-module quizzes, to how the employee deals with suspicious emails. It can also analyse more sensitive information from personality tests.

The focus group made up of employees from a major Norwegian enterprise was, in principle, prepared to use the solution and share fairly detailed data, provided that it made a constructive contribution to achieving the goal of better data security in the company. It emerged that they trusted their employer to safeguard their privacy and not to use their data for new purposes that could have a negative impact on them.

The focus group made up of trade union representatives emphasised the risk of the individual employees’ answers being traced back to them, and of the employer being able to use the data obtained through such a tool for new purposes. They were, for example, concerned that the employee could be penalised for a low score, either through the loss of a pay rise or other opportunities within the organisation. They pointed out that transparency is a precondition for employees being able to trust the solution.

The focus group participants emphasised the importance of clear and concise communication with the employees. Uncertainty surrounding how the data will be used increases the risk of the employees adapting their answers to what they believe is “correct” or being unwilling to share data. This is an interesting finding because the algorithm becomes less accurate if the data it is based on are inaccurate and do not represent the user's actual situation.

The focus group made up of trade union representatives felt it was important to clarify early in the process how the data would be stored and used in the company's work. They further argued that the contract between Secure Practice and the company should be framed in such a way as to protect employee privacy, and that it was important to involve the employees or their trade union representatives at an early stage in the procurement process. In their opinion, such a solution could be perceived differently by employees depending on the situation; the extent to which employees trusted their employer could, for example, have an impact.

To minimise these risks, the focus group warned against formulating questions in such a way that the answers could harm employees if their employer became aware of them. They called for questions about whether the employee had committed any security breaches to be omitted, and for communication with the individual to be framed positively, so that users feel supported and guided rather than profiled and criticised.

Secure Practice used the insights from this user involvement exercise to adjust the way the solution provides information to the end user. In addition to these transparency measures, the sandbox project gave Secure Practice input on how to protect the individual employee's statutory rights by ensuring that personal data from the solution cannot be used for new purposes. Read the sandbox report for further details.
