
Introduction

The recent A-level results fiasco and Boris Johnson's references to 'mutant algorithms' have thrust algorithmic decision-making, and its potential to create unfairness and intensify inequality, into the public consciousness.

Decisions relating to important aspects of our lives are increasingly made by computerised algorithms: determining the university we attend, the jobs we get, our access to financial and other services, and our treatment by the state. Automation of decision-making is seen by some as a way to increase the speed and efficiency of decisions and overcome human fallibility. At the same time, however, empirical research has consistently demonstrated that it can lead to injustice and discrimination (for an accessible introduction see O'Neil, Weapons of Math Destruction (2016)).

The use of algorithms has also hit the headlines in the context of immigration, policing and social security decision-making. But one significant aspect of algorithmic decision-making that has so far received insufficient attention is its growing use by employers to exercise their prerogative and automate managerial functions.

Decisions relating to the hiring, management and dismissal of workers are all now being automated by employers. Many of these practices were pioneered in the ‘gig economy’ but are now being adopted more widely across other sectors of the labour market. This trend is likely to have been accelerated by Covid-19, and the move to working from home, as employers adopt new technologies to help them monitor and exercise control over their workforces. Tech companies are now rushing to develop and sell tools to facilitate this move towards ‘algorithmic’ or ‘automated’ management.

This use of technology to automate management decisions raises critically important questions for employment lawyers. It adds a further dimension to the diffusion of employers' responsibility for workers, already familiar to labour lawyers as a result of the 'fissuring' or 'vertical disintegration' of the workplace, and raises the question of whether and how employers can be held responsible for decision-making processes over which they have limited control or understanding. It is also vital that we understand the various ways these technologies may harm workers or threaten their rights (for example, by infringing their privacy or being used in union-busting efforts).

Employment lawyers must respond to the rise of automated management practices by considering the extent to which current legal protections guard against the unfairness and injustices caused by algorithmic decision-making, and ask what new regulatory frameworks are needed, if any.

This article makes a small contribution to this broader agenda by considering employers’ liability for ‘digital discrimination’ under the Equality Act 2010 (EqA). The analysis is limited to direct and indirect discrimination, but there are other forms of liability and important questions that deserve attention in future. While there is yet to be case law directly addressing the issue, it is argued that existing anti-discrimination law is to a large extent capable of capturing digital discrimination at work, but that it may often be difficult in practice for claimants to bring successful claims.

First, however, it briefly introduces and explains the connected concepts of algorithmic decision-making, automated management, and digital discrimination.

Algorithms, automated management and digital discrimination

An algorithm is a series of steps or processes applied to achieve a certain goal, and ‘algorithmic decision-making’ is simply the application of an algorithm to make some decision. Algorithms may be applied by either humans or computers, but the term ‘algorithmic decision-making’ generally refers to computational decision-making systems. Algorithms can consist of pre-determined and programmed steps, but algorithmic decision-making often now involves advanced ‘machine learning’ techniques, which operate with minimal human supervision or instruction, and develop predictive models using patterns they identify in existing ‘training data’.  
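For readers less familiar with these techniques, the short Python sketch below (using entirely made-up data and the scikit-learn library) illustrates the basic idea: rather than following rules written out in advance, the system infers a decision rule from patterns in historical 'training data' and then applies it to new cases.

```python
# A minimal, purely illustrative sketch of machine-learning-based decision-making.
# The data and the hiring scenario are hypothetical.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a past applicant,
# [years_experience, test_score]; label 1 = hired, 0 = rejected.
X_train = [[1, 55], [2, 60], [5, 80], [7, 85], [3, 40], [8, 90], [4, 70], [0, 30]]
y_train = [0, 0, 1, 1, 0, 1, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # the model 'learns' a decision rule from past outcomes

new_applicant = [[6, 75]]            # a new case the employer wants a decision on
print(model.predict(new_applicant))  # automated decision, e.g. [1] = recommend for hire
```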

Employers are increasingly using algorithms to automate workplace decision-making, a practice sometimes known as ‘algorithmic management’ but labelled here as ‘automated management’.

At present the most common use of automated management by employers is in the recruitment process, for instance to identify strong applicants, analyse video interviews, or screen social media profiles. Increasingly, however, algorithms are being used to automate other managerial functions such as scheduling and allocating work, monitoring employees and evaluating performance, setting remuneration levels, selecting employees for promotion or other opportunities, and triggering dismissal or disciplinary procedures.

Employers see automated management technologies as a way of increasing the speed or quality of decisions, but algorithmic decision-making can also be discriminatory.

There are several potential causes of this 'digital discrimination'. It may result from the design process, if the biased assumptions and choices of the algorithm's developers become reflected in the model. At its most direct, an algorithm may rely on personal characteristics such as race, religion or gender as part of its decision-making process. But even where such characteristics do not feature directly, algorithmic decisions could be based on combinations of other factors that amount to close proxies – such as postcode and educational history acting together as a proxy for race.

More subtly, algorithmic models may discriminate if they 'learn' from, or are developed using, data that contains bias or historical discrimination. Such algorithms are likely to reproduce, and potentially amplify, inequalities and discrimination present in the training data. For instance, an algorithm that is trained to identify potential high-performing employees using data about a company's existing senior management team, which is overwhelmingly white and male, is likely to end up favouring individuals from these groups. It was this category of bias that forced Amazon to scrap its plans to automate recruitment; the lack of women working in the tech industry meant that its hiring algorithm 'learnt' to favour men.
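To make the mechanism concrete, the following sketch (in Python, with synthetic and entirely hypothetical data) shows how a screening model trained on historically biased hiring decisions can reproduce that bias even when the protected characteristic itself is excluded from its inputs, because a correlated feature acts as a proxy.

```python
# Illustrative sketch only: synthetic data showing how bias in the training data is
# reproduced via a proxy feature, even though sex is never given to the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, n)                  # 0 = female, 1 = male (never fed to the model)
proxy = sex * 0.8 + rng.normal(0, 0.3, n)    # e.g. a CV keyword feature correlated with sex
skill = rng.normal(0, 1, n)                  # a genuinely job-relevant feature

# Historical decisions were biased: men were favoured regardless of skill.
hired = (skill + 1.5 * sex + rng.normal(0, 0.5, n)) > 1.0

X = np.column_stack([skill, proxy])          # the protected characteristic is deliberately omitted
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
print("Predicted shortlisting rate, men:  ", pred[sex == 1].mean())
print("Predicted shortlisting rate, women:", pred[sex == 0].mean())
# The model picks up the proxy and so 'learns' to favour men, replicating the training bias.
```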

One frequently discussed issue with algorithmic decision-making is the lack of transparency about how decisions are being made. The complexity of some algorithms means that even those developing them may be unable to explain the process by which a particular decision has been reached. This ‘black box’ problem may initially appear to make it difficult to hold employers accountable for algorithmic management decisions, as the reasons underlying a decision are hidden from scrutiny. But, as we shall see, this is not necessarily a barrier to liability for digital discrimination under the Equality Act 2010.

Liability for direct digital discrimination

Direct discrimination occurs where a person is treated less favourably because of a protected characteristic (EqA, s.13). The protected characteristics are: age; disability; gender reassignment; marriage and civil partnership; pregnancy and maternity; race; religion or belief; sex; and sexual orientation (EqA, s.4). Direct discrimination is prohibited at recruitment, while the relationship is ongoing, and in dismissal (EqA, s.39). Except in cases of age discrimination, and subject to some limited statutory exceptions, it cannot be justified by employers.

Employers will therefore be liable for direct discrimination if they adopt automated management technologies that make decisions using algorithms which rely on data relating to protected characteristics. Such situations are analogous to Test-Achats (Case C-236/09), in which the Court of Justice of the European Union found it was unlawful discrimination for companies to use sex as part of their calculations for determining the price of insurance.

Cases where protected characteristics feature expressly in algorithmic decision-making will (hopefully) be rare. However, there will also be liability for direct discrimination where an algorithm is programmed to ignore protected characteristics but in practice fails to do so, because it relies on other data points that act as proxies with 'exact correspondence' to a protected characteristic (R (Coll) v Secretary of State for Justice [2017] UKSC 40, as discussed here). An example of this is James v Eastleigh Borough Council [1990] 2 AC 751, where the claimant was treated less favourably because he was under state pensionable age, but this was nevertheless found to be directly discriminatory because pensionable age acted as a proxy for the protected characteristic of sex, as a result of the different pension ages for men and women.

Employees who are treated less favourably by an algorithmic decision-making tool that relies on a protected characteristic, or a precise proxy, will therefore have a claim for direct discrimination.

The threshold for detrimental treatment that can ground direct discrimination claims is whether a reasonable employee would take the view that they have been disadvantaged (Shamoon v Chief Constable of the Royal Ulster Constabulary [2003] UKHL 11). A broad range of applications of automated management is therefore captured, rather than only those leading to economic loss. In addition, and crucially for victims of direct digital discrimination, there is no need for the discrimination to be intentional or malicious. As set out in R (E) v Governing Body of JFS [2009] UKSC 15, the question is whether a protected characteristic is the reason for the decision, and the guidance in Igen Ltd v Wong [2005] IRLR 258 makes clear the less favourable treatment must ‘in no sense whatsoever’ be based on the protected characteristic.

Despite this seemingly strong prohibition, claimants will often struggle to prove that automated management technologies are directly discriminatory, as they will generally lack the access to the algorithm’s inner reasoning needed to demonstrate that less favourable treatment was ‘because of’ a protected characteristic. While undoubtedly a significant obstacle, this need not be an insurmountable barrier to liability. Much will depend on the operation of the burden of proof.

In recognition of the difficulty claimants face in proving discrimination, section 136 of the Equality Act 2010 requires that courts find discrimination wherever there are ‘facts from which [they] could decide, in the absence of any other explanation,’ it has occurred. A two-stage approach to s.136 was confirmed in Royal Mail Group Ltd v Efobi [2019] EWCA Civ 18, whereby claimants must initially demonstrate facts from which discrimination can be inferred, and the burden then shifts to the respondent to demonstrate the treatment was on non-discriminatory grounds.

The threshold for claimants to satisfy the initial burden of proof, i.e. the facts from which courts are willing to infer discrimination, will be key in digital discrimination cases.

It is not usually enough for the claimant to show they have been treated less favourably than another person who does not share the protected characteristic. There must be some facts from which the court can infer that discrimination has occurred, absent some other explanation, rather than facts merely suggesting it might possibly have (Madarassy v Nomura International plc [2007] ICR 867). In cases involving alleged discrimination by automated management technologies, however, the empirical evidence of widespread bias in algorithmic decision-making systems more generally may provide the court with sufficient facts from which to infer discrimination in the specific case before it. Certainly, if the technology or process used by an employer has already been proven to be discriminatory in another context, this should be sufficient to meet the burden of proof. The availability of evidence of this kind will often be crucial for allowing successful claims to be brought.

Alternatively, claimants may be able to prove discrimination by seeking disclosure during litigation of information about the algorithm's outputs, training data, or internal reasoning processes. Discrimination can be inferred from a historical pattern of individuals with a protected characteristic being treated less favourably, as in Rihal v London Borough of Ealing [2004] IRLR 642, so the burden of proof would likely be met if the algorithm's output data revealed a pattern of this kind. Similarly, if the algorithm's training data is shown to be biased, this should be enough to infer discrimination, because the bias will likely be replicated in the resulting model. In addition, a refusal by employers to be transparent about the algorithm's output data or reasoning process may also lead to an inference of discrimination (see Danfoss Case C-109/88; Meister Case C-415/10).

Once a prima facie case of direct digital discrimination has been demonstrated, the burden shifts to employers to prove that treatment was not ‘because of’ a protected characteristic. At this stage the ‘black-box’ nature of automated management could be problematic for employers, as the complexity and opacity of algorithmic decision-making may make it difficult to show that automated management technologies do not rely on protected characteristics or close proxies.

Indirect digital discrimination

Indirect discrimination occurs where a seemingly neutral ‘provision, criterion or practice’ (PCP) is applied, which in fact puts a group sharing a protected characteristic at a ‘particular disadvantage’. Employers who apply indirectly discriminatory PCPs at any stage of the employment relationship will be liable to members of the disadvantaged group who suffer the disadvantage, unless the PCP can be justified as a proportionate means of pursuing a legitimate aim (EqA s.19).

Automated management will more commonly lead to indirect discrimination than direct, because it is easier to ensure algorithms ignore protected characteristics than to prevent them disadvantaging protected groups. It is therefore particularly important that the law protects against this form of digital discrimination.

An inclusive approach is taken to defining PCPs, as recently demonstrated in United First Partners Research v Carreras [2018] EWCA Civ 323, meaning an employer's use of automated decision-making technologies will undoubtedly be a 'practice' for the purposes of indirect discrimination. Any use of data by employers to train the algorithm, along with its internal reasoning processes, will also likely count as PCPs. Employers will therefore be liable if automated management tools put a protected group at a particular disadvantage and their use cannot be justified as a proportionate means of pursuing a legitimate aim.

To prove a 'particular disadvantage', claimants must show a disparity of impact between the group sharing a protected characteristic and the general population to which the PCP is applied. Automated management may create this disadvantage in a number of ways: for example, members of the protected group may be overrepresented among those detrimentally impacted by automated decision-making, statistically less likely to benefit from decisions, or subject to a higher rate of errors. As with direct discrimination, claimants can seek disclosure of information about the algorithm's decision-making during litigation in order to demonstrate a discriminatory impact, and a refusal by employers may lead to an inference of discriminatory impact.
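By way of illustration, the sketch below (in Python, with entirely hypothetical figures) shows the kind of disparity analysis that might be run on disclosed output data to evidence a 'particular disadvantage': comparing selection rates across groups, the ratio between them, and a simple check that the gap is unlikely to be the product of chance.

```python
# Hypothetical disparity analysis of an automated shortlisting tool's disclosed outputs.
from scipy.stats import chi2_contingency

outcomes = {
    "protected group": {"shortlisted": 45, "rejected": 155},
    "comparator group": {"shortlisted": 120, "rejected": 80},
}

rates = {}
for group, o in outcomes.items():
    rates[group] = o["shortlisted"] / (o["shortlisted"] + o["rejected"])
    print(f"{group}: shortlisting rate = {rates[group]:.0%}")

print(f"Disparity ratio: {rates['protected group'] / rates['comparator group']:.2f}")

# A simple significance check that the disparity is unlikely to be down to chance.
table = [[o["shortlisted"], o["rejected"]] for o in outcomes.values()]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"p-value: {p_value:.4f}")
```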

Importantly, however, claimants need not demonstrate why a PCP creates a particular disadvantage (Essop v Home Office [2017] UKSC 27), allowing them to sidestep the potentially problematic 'black box' nature of algorithmic decision-making.

Once an employer’s use of automated management is proven to have a discriminatory impact, the burden will shift to them to justify the practice. There are three stages of the justification test: the use of automated management must pursue a legitimate aim; it must be capable of achieving that aim; and it must be (reasonably) necessary and proportionate (for discussion see Homer v Chief Constable of West Yorkshire [2012] UKSC 15).

Employers will invariably pass the first stage because they will be able to point to a real business reason for adopting the automated management practice. The use of automated recruitment technologies, for example, will pursue the legitimate goal of hiring staff, and employers will usually be able to argue that automated technologies pursue the legitimate goal of increasing the speed or efficiency of decision-making. They will also often be able to pass the second stage, providing they can show the technology is functioning effectively and accurately in achieving its stated goal. However, the requirement of necessity and proportionality may prove more difficult.

At this final stage, the discriminatory impact of the PCP is balanced against the employer's need to achieve their aim; the greater the harm and the number of employees affected, the more difficult it will be to justify. The justification of indirectly discriminatory automated management will therefore frequently turn on tribunals' assessment of necessity and proportionality on the facts of each case. However, it is possible to make some comments on the approach that should be taken.

Significantly, if the employer's aim could be achieved by a less discriminatory means, this can lead to a finding that the PCP is not justified, because it will go beyond what is reasonably necessary (Homer v Chief Constable of West Yorkshire). Following this, it is suggested that if an algorithm could be adapted to operate in a non-discriminatory or less discriminatory manner, for instance by using non-biased training data or altering its internal reasoning, it should not be found proportionate.
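As a purely hypothetical illustration of what a 'less discriminatory means' might look like, the sketch below applies one standard pre-processing mitigation (reweighting biased training data, along the lines of Kamiran and Calders' 'reweighing' technique) and compares the resulting selection-rate gap with that of the unmitigated model; the data, model and figures are synthetic and illustrative only.

```python
# Illustrative sketch: retraining on reweighted data as a 'less discriminatory means'.
# Synthetic data; the mitigation shown is one standard technique, not the only one.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                # 1 = the group favoured in the historic data
proxy = group * 0.8 + rng.normal(0, 0.3, n)
skill = rng.normal(0, 1, n)
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0
X = np.column_stack([skill, proxy])

def selection_rate_gap(model):
    pred = model.predict(X)
    return pred[group == 1].mean() - pred[group == 0].mean()

baseline = LogisticRegression().fit(X, hired)

# Reweighing: weight each (group, outcome) cell so that group membership and
# the historic outcome are statistically independent in the weighted data.
weights = np.ones(n)
for g in (0, 1):
    for y in (False, True):
        cell = (group == g) & (hired == y)
        weights[cell] = ((group == g).mean() * (hired == y).mean()) / cell.mean()

mitigated = LogisticRegression().fit(X, hired, sample_weight=weights)

print(f"Selection-rate gap, original model:   {selection_rate_gap(baseline):.2f}")
print(f"Selection-rate gap, reweighted model: {selection_rate_gap(mitigated):.2f}")
```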

In addition, the nature of algorithmic decision-making means that courts should scrutinise the use of indirectly discriminatory automated management practices particularly closely. The potential for these technologies to be adopted widely by employers and applied to large populations of workers, their well-documented tendency to embed and perpetuate existing forms of bias and discrimination, and the opaque reasoning and decision-making processes that they apply, should all make courts wary of sanctioning the use of discriminatory automated management.

Remedying digital discrimination

Finally, it is worth briefly commenting on the remedies available for digital discrimination under the Equality Act 2010. Once a court finds that an employer's use of automated management technologies amounts to unlawful discrimination, the normal range of remedies will be available, namely declarations of rights, compensation, and recommendations (EqA s.124).

One question that will need to be addressed is the application of the ‘Vento bands’ of compensation for injury to feelings, established in Vento v Chief Constable of West Yorkshire Police [2002] EWCA Civ 1871, to instances of digital discrimination. On this point, it may be that the institutionalised nature of automated management, and its wide scope of application across the workforce, are factors which help elevate cases of digital discrimination to the upper bands of compensation reserved for more serious injury to feelings.

Another interesting issue will be the extent to which tribunal recommendations can be used to remedy discriminatory automated management. At their most direct, recommendations could be used to prevent the use of certain automated management technologies, but employers could also be required to reprogram the algorithmic decision-making system using unbiased data, or to modify it in some other way. Given that recommendations have been used to require that managers undertake equality training courses, as in London Borough of Southwark v Ayton [2003] UKEAT 0515_03_1809, it seems by analogy that it should be possible to recommend that an algorithmic decision-making system undergo retraining or modification. This means of regulating automated management technologies appears promising, but it is severely limited by the Deregulation Act 2015's removal of the general power to make recommendations. As a result, recommendations can only be given where they benefit the individual claimant rather than other employees.

Conclusion

The increasing use of automated management and algorithmic decision-making in the workplace is only likely to continue, and there is an urgent need for employment lawyers to address and respond to the challenges posed by these technologies.

While automated management may have the potential to improve the efficiency and quality of decision-making, it also comes with serious risks of discrimination. Although the analysis above is necessarily somewhat speculative, and many issues could not be fully explored, it indicates that the Equality Act 2010 has the potential to protect against these emerging forms of discrimination.

The extent to which anti-discrimination law will capture instances of digital discrimination in practice remains to be seen, however, and will largely depend on the courts' approach to the issues of burden of proof and justification. The effectiveness of protection from digital discrimination provided by the Equality Act will also be hampered by various shortcomings with the legislation familiar to employment lawyers, including its narrow personal scope after Jivraj v Hashwani [2011] UKSC 40 and Secretary of State for Justice v Windle [2016] EWCA Civ 459, and the limitations of individual litigation as a means of enforcement.


About the author: Dr Joe Atkinson is a lecturer in Law at the University of Sheffield, where he researches and teaches on labour law and human rights. He is an Associate Fellow of the Sheffield Political Economy Research Institute, and member of the Sheffield Institute of Corporate and Commercial Law. 

(Suggested citation: J Atkinson, ‘Automated management and liability for digital discrimination under the Equality Act 2010’, UK Labour Law Blog, 10 September 2020, available at https://uklabourlawblog.com)

An earlier version of this blog is published in W. Green Employment Law Bulletin, Issue 159, October 2020.