1. Introduction
In Summer 2021, Bloomberg published a striking series of stories from working people who had been ‘fired by a machine’. Stephen Normandin had worked for several years for Amazon Flex as a member of a fleet of ‘contract drivers’ who provide same-day delivery services for groceries and packages. Like other members of Amazon’s significant workforce, Stephen’s work was monitored through a system of real-time data collection and algorithmic analysis. Stephen’s performance rating plummeted after a series of unfortunate incidents made his job more difficult: inaccessible gated communities and lockers, unresponsive recipients and unhelpful responses from the company. Shortly after, he received an email stating that his contract had been terminated. Stephen took up the opportunity to appeal the decision but received a series of emails, each with a different name attached, that took him no further. The final email that Stephen received stated that the difficulties he had cited had already been taken into account. His termination stood, even though he was never able to interact ‘live’ with another human being.
In the European Union, workers have turned to the tools provided by data protection law to challenge these data-driven management practices. Uber drivers, for example, have launched claims to access their data and to challenge disciplinary practices that appear, from the individual’s perspective, to be undertaken entirely by an algorithm. Ola, another ride-hailing mobile app, has been ordered to explain to drivers how deductions from their wages are calculated by its algorithm. A crucial data protection right in these circumstances is Article 22 of the General Data Protection Regulation (GDPR), which heavily restricts when data controllers can make decisions based solely on automated processing.
Article 22 has also attracted the attention of the UK Government, however, as it seeks ‘a new direction’ for the data protection regime in the post-Brexit era. A Government consultation document published in September 2021 raises doubts regarding the future of Article 22. In the consultation document, the Government welcomes evidence on the operation of Article 22, but it also seeks views on the proposals made earlier in the year by the Taskforce on Innovation, Growth and Regulatory Reform (TIGRR). TIGRR made a strong recommendation that the right not to be subject to automated decision-making should be removed (see Proposal 7.2). For commentators in the labour sphere, the possibility that Article 22 may be removed or substantially reformed has raised the spectre of automated dismissal decisions. Under TIGRR’s proposals, a decision to dismiss a worker would have to comply with the remaining data protection regime, but no specific rights to challenge a decision or to human intervention would remain – a significant drop in the level of protection available to data subjects.
In this blog, I investigate what may happen when the traditional tools of labour law, here unfair dismissal law, step into the gap and regulate these cutting-edge management practices. I argue that the right not to be unfairly dismissed, with its strong requirements of procedural fairness, will render fully automated dismissals unlawful. The need for an investigation and for an impartial appeal against a decision to dismiss necessitates human interaction: a fair but fully automated decision to dismiss appears impossible to achieve. Unfortunately, however, the context in which data-driven dismissals occur most frequently draws our attention back to the serious deficiency that persists in the personal scope of some employment rights. Whilst the right not to be unfairly dismissed is an appropriate way to protect people from unjust automated decisions, it is unlikely to be available to those most vulnerable to these disciplinary processes.
2. Automated Dismissals and Data Protection: What Might We Lose?
Article 22 of the GDPR states that individuals have a right not to be subject to a decision that is ‘based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.’ Analysis of a person’s performance at work is expressly mentioned in Article 4(4) GDPR as an example of ‘profiling’, and a termination of employment is plainly a decision that produces legal effects for the individual data subject/employee. As a challenge in the Netherlands demonstrated, any meaningful human interaction within the decision-making process, such as Uber’s investigation into the potentially fraudulent activities reported by their software, will prevent reliance on Article 22. Only fully automated disciplinary or performance management procedures would be caught by Article 22.
The Information Commissioner’s Office (ICO) Guidance summarises when automated decision-making (ADM) is permitted and the associated obligations imposed upon the data controller.[1] This guidance draws on the GDPR (now the ‘UK GDPR’), which contains the bulk of the rights and obligations, and on the UK’s Data Protection Act 2018 (DPA), which adds detail in specific areas. From the data subject/employee’s perspective, they hold a number of rights that are useful in relation to ADM:
- If they make a subject access request regarding their data, a data controller must inform a data subject regarding the existence of automated decision-making and provide ‘meaningful information’ about the logic involved, as well as the significance and consequences of the processing for the data subject (see GDPR, Art.15(1)(h)).
- Under Article 22 and section 14 DPA, there are limited purposes for which ADM is permitted: either it is necessary to perform a contract between the parties, the data subject has given explicit consent, or the decision is authorised by law. Article 22 provides the right not to be subject to ADM outside of these purposes.
- Section 14 DPA sets out specific requirements where the processing is ‘authorised by law’, such as a decision to terminate a contract of employment that is authorised by common law and a statutory provision. In these cases, the data controllers must:
- Notify a data subject as soon as reasonably practicable that automated decision-making has taken place;
- Permit the data subject to request that the decision be reconsidered or that the data controller take a decision not based wholly on automated processing;
- Consider any request made, including evidence provided by the data subject, and inform the data subject of the outcome and the steps taken to comply with their request.
There are additional rights held by data subjects that go beyond a discussion with the data controller. If the data subject believes that Article 22 GDPR and/or section 14 DPA has not been complied with, they may lodge a complaint with the ICO. The ICO has the power to investigate complaints and to issue an enforcement notice that specifies steps that the data controller must take or refrain from taking. This notice can be backed up by a penalty notice for non-compliance (see DPA, sections 149 and 155). The High Court or a county court can also issue compliance orders, which have a similar purpose to enforcement notices. Under Article 82 GDPR, the individual has a right to compensation for material (financial) and non-material damage suffered as a result of a data protection breach. This compensation may be sought directly from the data controller or through a claim in the High Court or county court. Interestingly, section 168 DPA includes “distress” within the scope of non-material damage, a head of loss that unfair dismissal law excludes.
From an employment law perspective, it is a mixed picture. The range of remedies is extensive after the fact of the automated termination, particularly if the ICO has the resources needed to launch an investigation quickly. However, one might also question whether these safeguards for an employee/data subject are adequate, given the significant impact of a dismissal upon an individual’s finances and private life. Particularly in disciplinary matters, employees need to understand the expectations and processes ahead of time in order to comply with them. If these processes are fully automated, GDPR imposes a duty upon the data controller to inform a data subject about ADM at the point when their personal data is obtained. This information is likely to be given at the start of an employment relationship, along with huge amounts of literature about the organisation, and may be contained in a data protection policy that many employees do not ever read. If a data subject later comes to suspect that ADM is occurring, they must use a subject access request under Article 15 to seek information. As Lilian Edwards and Michael Veale argue, however, this right ‘places a primary and heavy onus on users to challenge bad decisions’:
‘Even ordinary DP subject access requests (SARs) demand an enormous amount of time and persistence and, in reality, are mainly used effectively only by journalists and insiders who know how the company in question organizes its data processing systems.’
It could be argued that the data protection regime will only be effective for the most tech-savvy employees who understand their legal rights and the data systems being deployed by their employer.
Nevertheless, the TIGRR report targets two key aspects of the protection currently available to data subjects: (1) the “right to human review” and (2) the circumstances in which ADM can be used. The Taskforce’s primary recommendation is that the “right to human review”, seen in section 14(4) DPA and Article 22(3) GDPR, should be removed. If complete removal is too extreme, it could be replaced with a basic explanation of the data controller’s process. According to the Taskforce, the focus should instead be on whether ADM meets a ‘legitimate or public interest test’. The Government consultation document makes it clear that the general principle of lawfulness would continue to apply to automated processing; however, the restrictions on the purposes for which ADM can be used (outlined in section 2 above) would be dramatically loosened.
Under TIGRR’s proposed regime, the purposes for which ADM could be used would be wider. For example, an employer could rely on their own ‘legitimate interests’ to justify automated data processing. The relevant legal basis, Article 6(1)(f) GDPR, relies upon three key stages:
- Purpose test: is the data controller/employer pursuing a legitimate interest in processing the data? A range of commercial interests, including the smooth running of the business by engaging in effective disciplinary practices, could be relied upon here.
- Necessity test: is the processing a targeted and proportionate way of achieving the purpose?
- Balancing test: do the data subject’s interests, fundamental rights or freedoms override the legitimate interest?
Both the necessity test and the balancing test should give employers pause before adopting automated processing. Under the necessity test, traditional management techniques with human input could be cited as a less risky way of disciplining workers, both in data protection terms and in achieving fair results. Under the balancing test, the employer’s interest in the smooth running of their business would have to be balanced against the employee’s right not to be unfairly dismissed, potentially alongside other rights such as the right to respect for one’s private and family life, which can be affected by a dismissal.
It is difficult to predict how a data protection authority such as the ICO would weigh up these questions of necessity and balancing. If an employer/data controller could convince the supervisory authority that their legitimate interests should prevail, the proposed amendments would give employers greater freedom to engage in automated dismissal practices but without the safeguard of a right to human review.
3. Regulated Automated Dismissals through Unfair Dismissal Law
Section 94 of the Employment Rights Act 1996 (ERA 1996) grants employees the right not to be unfairly dismissed. Once an individual’s eligibility to claim has been established (more on that issue below), the fairness assessment consists of the Employment Tribunal finding the reason for the dismissal and then testing whether the employer acted reasonably in dismissing for that reason. Across those two stages, there are points at which requirements of substantive or procedural fairness will present a serious challenge to the lawfulness of automated dismissal decisions. These requirements might be drawn from the case-law that interprets the question of whether an employer ‘acted reasonably’ in dismissing an employee or from the ACAS Code of Practice, which Employment Tribunals can consider in evidence in any dismissal for reasons of misconduct or poor performance.
i. Identifying the Reason and Explaining It
Under section 98 ERA 1996, it is for the employer to show the principal reason for the dismissal to the Employment Tribunal. Here, the employer must point to a “potentially fair reason” for dismissal, such as the employee’s conduct or capability, and avoid invoking any “automatically unfair reasons” that are listed in the statute. Given that algorithms might collate and process any number of data points in order to reach the recommendation that an employment relationship should be terminated, the need to identify a single, principal reason for dismissal is likely to present a challenge.
Some of the data points used may relate to the employee’s performance, such as metrics that track the time taken to perform assigned tasks, whereas others might be on the boundary between conduct and performance. An individual’s propensity to refuse to accept and perform particular tasks, for example, could be considered an issue of performance (are they able to perform the tasks necessary for their job?) or an issue of conduct (are they resisting a reasonable instruction by management?). Still other data points may reveal little or nothing about the way in which work is being performed. Customer ratings appear regularly in the list of data points used by employers. Whilst some poor ratings may be due to genuine performance issues, others may be entirely unrelated and simply circumstantial. Customer ratings may also be influenced by unconscious or conscious biases, a factor that may cast doubt on their value within a fair dismissal procedure more generally. The requirement to produce a principal reason for dismissal may in effect result in Employment Tribunals querying the algorithm itself, the weighting given to each data set and how those sets map on to the permissible reasons for dismissal such as poor performance (within capability) or misconduct.
An employer who is aware of this step of dismissal law may be able to configure an algorithm that relies on data points that clearly relate to a fair reason. In some cases, however, the employer may not have designed the algorithm being applied and would not be in a position to “unpack” the data analysis that has occurred and demonstrate to the Tribunal how the recommendation to terminate was based upon the employee’s conduct or capability. The process may also be designed to optimise over time, with results that are not easy for an Employment Judge to inspect and comprehend. Alternatively, an employer may be unwilling to disclose the process by which the decision was reached, as occurred in recent litigation before the Bologna Labour Court in a case brought against Deliveroo Italia. In these cases, the employer may find that they have failed at the first hurdle: to show to the Employment Tribunal that their decision to dismiss was based upon a fair reason. A finding of unfairness would follow.
If an employer can isolate a principal reason for dismissal, there is an equally important obligation to explain that reason to the employee. This stage is expressed clearly in the ACAS Code of Practice: the notification of a problem (such as misconduct or poor performance) must contain sufficient information that the employee can prepare to answer the employer’s case at a meeting. Where appropriate, employers should share the written evidence against the employee – perhaps translating to a requirement to share the data which underpins the notification and explain why it has created a concern about the employee’s performance. In straightforward cases where the problem is intelligible to an employee, this step will not create a challenge. A disclosure of the employee’s metrics or customer ratings compared to those of other workers or the average score could easily be automated. Where the algorithmic processes are more complex or the data points are incomprehensible to the employee, this creates a real stumbling block for the procedural fairness of an automated dismissal. If the employee cannot understand the issues that the employer is seeking to discipline them for, it renders the remaining procedural stages meaningless (answering the employer’s case, opportunities to improve etc).
ii. The Need for Meaningful Human Intervention
A key element of establishing that a dismissal was procedurally fair is to demonstrate that an employer conducted the investigations necessary to establish the facts of the case. Tribunals enquire whether the employer had a genuine belief regarding the employee’s misconduct or poor performance and whether they had reasonable grounds for that belief (see DB Schenker Rail (UK) Ltd v Doolan). In automated dismissals, the only ‘grounds’ underpinning the employer’s belief are the data that have been collated about the employee’s work. I would argue that the data should only be the start of the investigation, and that the employer must go further to “look behind” the data and the recommendation to dismiss: is the dataset tainted by bias? Could the problems have been caused by factors outside the employee’s control? Particularly where the employer themselves does not control or understand the operation of the algorithm, it must be outside the “range of reasonable responses” to rely solely on a recommendation to dismiss without further investigation (the test as stated in Sainsbury’s v Hitt). Such an investigation would require human intervention in the process.
Meaningful human interaction or intervention also appears to be necessary in the remaining stages of the disciplinary process. The ACAS Code of Practice states that the employee must have an opportunity to respond to the allegations or concerns, to query the evidence against them and to put their case forward. This may involve offering explanations as to why the data on their performance may deviate from the desired norm. For example, Lira – another Amazon Flex driver – sometimes spent an hour queuing for her packages at the depot, which would put her behind her delivery plan from the very beginning of the day. The employee’s ability to put forward these kinds of factors would be crucial in achieving procedural fairness. A fair process must always be followed, and factors in the employee’s favour taken into account, even in cases of gross misconduct (see ACAS Code, paragraph 23 and Trusthouse Forte v Adonis [1984] IRLR 382). It seems that automated processes are not well-suited to dealing with such personalised factors. A person must be involved to hear the employee’s concerns or explanation and to gauge whether the algorithmic recommendation should be followed in the specific circumstances presented.
The closest equivalent to the right to human review within the data protection regime is the right to an appeal contained in the ACAS Code of Practice. Here, an employee should be permitted to appeal any formal decision to an impartial person (such as a different manager within the organisation). Employment Tribunals take this requirement seriously: only in exceptional cases where the appeal would have been futile will a dismissal without a right to appeal be fair. Employers may be able to automate other processes, such as notifications, warnings and giving opportunities for improvement, but this stage in a fair procedure will prevent employers from automating the entire dismissal process.
Unfair dismissal law can thus go further than the combination of section 14 DPA and Article 22 GDPR in regulating automated terminations of employment. It provides a right of human review, in the form of an appeal against a disciplinary decision, as well as rights to be notified of the problem and its supporting evidence, to respond to that evidence and to a reasonable investigation, alongside the requirement to identify and explain a fair reason for the dismissal. Of course, it could always go further. Automated dismissals could be rendered automatically unfair in all circumstances, bolstering the need for human intervention. The ACAS Code of Practice could be enhanced by incorporating recommendations from a report by Patrick Briône, written for ACAS itself. Of particular relevance to ADM in discipline and termination, the report’s central recommendation is to limit the use of algorithms to offering advice: ‘A human manager should always have final responsibility for any workplace decisions.’ This could be supplemented by further guidance requiring employers to understand the problems they are seeking to solve and the processes they are applying, and to consider alternatives to algorithmic management before adopting it.
4. The Limitations of Unfair Dismissal Law
Just as the Government observed that Article 22 has a limited scope, unfair dismissal law has severe deficiencies that are thrown into focus by the context in which most automated dismissal decisions appear to be occurring. A scan of media reports from different jurisdictions reveals that it is those in casual, on-demand or platform work who are more likely to be subjected to extensive monitoring, algorithmic management and ultimately automated dismissals. Amazon Flex’s ‘contract drivers’, such as Stephen and Lira, and Uber drivers in the UK have experienced something akin to an automated dismissal. The latter have recently been classified as ‘workers’ under English law, but both of these groups would face a preliminary battle over their entitlement to claim the right not to be unfairly dismissed.
Only ‘employees’ working under a contract of employment are protected by unfair dismissal law. Of the variety of classifications, this particular legal status is the most difficult to attain: it is restricted by reference to a group of tests that an individual must satisfy in order to claim the relevant right. Casual staff have previously struggled to demonstrate their status as employees, and this would likely also apply to those who work for platforms or on an “on-demand” basis. Furthermore, in most cases, the right not to be unfairly dismissed only “kicks in” after two years of continuous service with one’s employer – removing new employees from the scope of its protection. New employees may be the most vulnerable to the vagaries of an automated dismissal process. More established employees may get a sense of the algorithmic expectations and adjust to them, an opportunity that new employees do not have if they are dismissed after a brief period of service.
Writing for this blog, Hugh Collins cited an array of anomalies created by the current tests for employment status. In investigating the lawfulness of automated dismissals, we appear to have found yet another anomaly. Unfair dismissal law may prove to be effective in regulating automated dismissal decisions, but those most likely to be subject to such decisions are also the group most likely to be excluded by restrictive rules of employment status. A move towards a unified (and more inclusive) employment status would tackle this anomaly, but in the meantime the loss of alternative rights such as Article 22 GDPR and section 14 DPA will be felt most keenly by those already marginalised and vulnerable to sharp disciplinary practices.
[1] For the purpose of this post, I will assume that the employer is the data controller as the party that ‘determines the purposes and means of the processing of personal data’ under Article 4(7) GDPR and has responsibility for implementing measures to comply with the GDPR under Article 24(1). The employer may also be the data processor if it collates and analyses the data itself.

Philippa Collins is a Lecturer in Law at the University of Bristol. Her research focuses on employment law, human rights and technology in the workplace. Philippa’s book, Putting Human Rights to Work, will be published by OUP at the end of 2021.
She benefitted from comments from the editors of the blog, as well as discussions with Sandy Gould and Joe Atkinson, in writing this blog.
Suggested citation: P Collins, ‘Automated Dismissal Decisions, Data Protection and The Law of Unfair Dismissal’, UK Labour Law Blog, 19 October 2021, available at https://uklabourlawblog.com