Facial Recognition Technology: the answer to social distancing or discriminatory?
Employers are making difficult choices at this time in situations which have never affected their workplaces before. Employment lawyers are having to advise in a context where the landscape is changing day by day. As fresh guidance is issued and new headlines emerge, new legal queries arise. This blog by Robin Allen QC and Dee Masters (who maintain www.ai-lawhub.com) is the fourth in a series which examines the interplay between the workplace and the coronavirus. Robin and Dee look at how Facial Recognition Technology (FRT) can be unlawfully discriminatory contrary to the Equality Act 2010 and what practical steps organisations should be taking to avoid disruptive and costly litigation.
The Government’s instruction to work from home and to limit travel is bound to mean that reliance on artificial intelligence (AI), machine learning (ML) and automated decision making (ADM) will expand even faster than before, and it has been expanding very fast already. Certainly, The Wall Street Journal is predicting that despite the economic downturn, investment in AI will continue. Indeed, The New York Times has just reported that the number of users on Teams has grown 37 percent in a week, with at least 900 million meeting and call minutes within the platform every day.
While these technologies can undoubtedly benefit everyone, they also have the potential to cause unlawful discrimination and breaches of human rights and GDPR law.
In what ways is technology being used in the workspace?
To date there has been no comprehensive review of the ways in which employers are deploying AI. In the UK, one of the most recent accounts of the way in which AI is being used in the workplace is a paper commissioned by ACAS entitled “My boss the algorithm: an ethical look at algorithms in the workplace”. This report and other online sources[i] reveal that AI is being used in at least the following ways by employers –
Assessment of candidates for roles, including through automated video analysis and assessment of an individual’s social media presence
Robot interviews
CV screening
Background checking
Reference checking
Task allocation
Performance assessments
Personality testing
What is FRT?
One area where a real understanding of the law is essential is in the use of FRT in the workspace. FRT is one form of AI technology, used for biometric identification, that analyses your face by measuring and recording the relationship of different points such as your eyes, ears, nose and mouth. It then compares that data with a known dataset and draws conclusions from that comparison. FRT is often used without any human involvement as a gatekeeper, for instance when you pass through the gates that read your passport on entry into the UK at an airport. In that context it is a form of ADM in that a machine takes a decision about you. It can also be used to supplement human decision making, as we discuss below.
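To make the mechanics concrete, here is a deliberately simplified sketch of that “measure, compare, threshold” logic. Real FRT systems use learned embeddings rather than raw landmark distances, and every name, coordinate and threshold below is purely illustrative, but the principle is the same: the system reduces a face to numbers, compares them against an enrolled template, and takes a decision with no human involved.

```python
import math

# A hypothetical landmark-based matching sketch: not any vendor's actual
# algorithm. All coordinates and the threshold are invented for illustration.

def feature_vector(landmarks):
    """Flatten pairwise distances between facial landmarks into a vector."""
    points = list(landmarks.values())
    return [math.dist(a, b) for i, a in enumerate(points) for b in points[i + 1:]]

def matches(candidate, template, threshold=1.0):
    """Decide whether a scanned face matches an enrolled template."""
    v1, v2 = feature_vector(candidate), feature_vector(template)
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
    return distance <= threshold  # the gate opens only if this is True

enrolled = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60), "mouth": (50, 80)}
seen = {"left_eye": (30.1, 40.0), "right_eye": (69.9, 40.1), "nose": (50.0, 60.2), "mouth": (50.1, 79.9)}
print(matches(seen, enrolled))  # a near-identical face should match
```

The legally significant point sits in the threshold line: the machine, not a person, takes the decision, and any systematic measurement error for a particular group is baked directly into who gets through the gate.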
How is FRT used in the workspace?
You can see one example of how this kind of technology is being deployed in the workspace on the website of a company called Causeway Donseed, which promotes a UK-developed product on the following basis –
…a Non-Contact Facial Recognition technology [that] provides a fast, highly accurate biometric solution way of identifying construction workers as they enter and leave your construction sites. Users simply enter a pin / present credential card and stand briefly in front of the unit whilst the unique, IR facial recognition algorithm quickly verifies the user’s identity and logs a clocking event in the cloud Causeway Donseed platform.
There are many other ways in which FRT is being used in the workspace. For instance, Causeway Donseed’s website also shows how its systems can be integrated more widely into a client’s business with the generic claim that it is “Your Reliable Facial Recognition Data in the Cloud” providing the “… biometric labour management solution … configurable to suit your needs.” It lists applications of FRT from labour tracking to online inductions.
How do I know that these forms of technology are lawful and do not discriminate?
We are clear that some of these new systems could potentially have huge benefits. For example, the ACAS report identified that Unilever cut its average recruitment time by 75% by using AI to screen CVs and to assess online tests algorithmically, creating shortlists for human review. At a time when social distancing is so important, gatekeeping without contact could be very useful.
But we need to take care; utility is not the test for lawfulness.
The Equality Act 2010 provides no exceptions to the rules that outlaw discrimination merely because a system is clever or has some practical benefits. For instance, if an FRT gate-keeping system did not allow a black female of Afro-Caribbean ethnicity through the gates as quickly as a white Caucasian male, or if it could not work at all with a disabled person, it should be obvious that it would be unlawfully discriminatory.[ii]
A look at HireVue
In this blog, we will focus on the use of FRT within recruitment processes to demonstrate how AI might be unlawful.[iii] One of the most talked about users of such technology is HireVue, a US-based company that has also launched in Europe. Its website explains how it can help business, saying that –
With HireVue, recruiters and hiring managers make better hiring decisions, faster. HireVue customers decrease time to fill up to 90%, increase quality of hire up to 88%, and increase new hire diversity up to 55%.
HireVue is the market leader in video interviewing and AI-driven video and game-based assessments. HireVue is available in over 30 languages and has hosted more than 10 million on-demand interviews and 1 million assessments.
HireVue’s use of FRT to determine who would be an “ideal” employee has been heavily criticised as discriminating against disabled individuals. Scholars at New York’s AI Now Institute wrote in November 2019 –
The example of the AI company HireVue is instructive. The company sells AI video-interviewing systems to large firms, marketing these systems as capable of determining which job candidates will be successful workers, and which won’t, based on a remote video interview. HireVue uses AI to analyze these videos, examining speech patterns, tone of voice, facial movements, and other indicators. Based on these factors, in combination with other assessments, the system makes recommendations about who should be scheduled for a follow-up interview, and who should not get the job. In a report examining HireVue and similar tools, authors Jim Fruchterman and Joan Mellea are blunt about the way in which HireVue centers on non-disabled people as the “norm,” and the implications for disabled people: “[HireVue’s] method massively discriminates against many people with disabilities that significantly affect facial expression and voice: disabilities such as deafness, blindness, speech disorders, and surviving a stroke.”
How does the Equality Act 2010 apply to discriminatory AI?
Here we will briefly outline how a disability indirect discrimination claim[iv] arising from an AI-powered video interview, as per the HireVue example above, would proceed –
The algorithm and/or the data set at the heart of the AI and ML system would be a provision, criterion or practice (PCP) under s.19(1) of the Equality Act 2010.
It would be uncontroversial that the PCP was applied neutrally under s.19(2)(a) of the Equality Act 2010.
Prospective employees would then be obliged to show particular disadvantage as per s.19(2)(b) of the Equality Act 2010. Whilst the “black box” problem is very real in the field of AI[v], there are numerous ways that disadvantage might be established by a claimant –
Relevant information might be provided to unsuccessful applicants in the recruitment process itself which suggests or implies disadvantage.
Organisations might be obliged to provide information (e.g. if they are in the public sector) or choose to do so voluntarily. Examples of how a claim can be formulated on the basis of publicly available information are set out in our open opinion for the TLEF.
Academic research by institutions like the AI Now Institute might help show disadvantage.
Litigation in other jurisdictions which focused on the same technology might lead to information being in the public domain.
The GDPR might allow individuals to find out information.
An organisation, perhaps with the benefit of crowdfunding or the backing of a litigation funder, might dedicate money and resources to identifying discrimination. This happened in the US when journalists at ProPublica analysed 7,000 “risk scores” and identified that a machine learning tool deployed in some states was nearly twice as likely to falsely predict that black defendants would commit future crimes as it was for white defendants. Indeed, in this way, AI discrimination claims might become the “new equal pay” litigation, with large numbers of claimants pooling information so as to show a pattern of possible discrimination.
The prospective employer would then be obliged to justify the use of the AI technology.
Employers might well be able to identify relevant legitimate aims (for example, the need to recruit a suitable candidate on a remote basis); however, we think that many organisations would struggle in relation to the rest of the justification test, as there is much evidence suggesting that FRT does not accurately identify the best candidates.
We have already mentioned the research from New York’s AI Now Institute, and this also states that –
… it is important to note that a meaningful connection between any person’s facial features, tone of voice, and speech patterns, on one hand, and their competence as a worker, on the other, is not backed by scientific evidence – nor is there evidence supporting the use of automation to meaningfully link a person’s facial expression to a particular emotional state …
If this analysis is correct, the employer would not be able to show that the FRT achieved the aim of identifying a suitable candidate and the justification defence would fail.
In any event, it might not be proportionate to use FRT if there are other means of identifying candidates remotely which are less discriminatory (for example, human interviews where suitable disability training has been offered).
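The kind of pooled-data analysis that journalists at ProPublica carried out on risk scores requires only simple arithmetic once outcome data has been assembled: compare how often the tool falsely flags members of each group. The sketch below illustrates that comparison; the records, group labels and numbers are entirely invented and bear no relation to ProPublica’s actual dataset.

```python
# Sketch of a group false-positive-rate comparison, of the kind used to
# evidence disparate error rates. All records below are invented.

def false_positive_rate(records):
    """Among people who did not reoffend, the share flagged as high-risk."""
    negatives = [r for r in records if not r["reoffended"]]
    return sum(r["flagged"] for r in negatives) / len(negatives)

records = (
    [{"group": "A", "flagged": True,  "reoffended": False}] * 2
    + [{"group": "A", "flagged": False, "reoffended": False}] * 2
    + [{"group": "A", "flagged": True,  "reoffended": True}] * 2
    + [{"group": "B", "flagged": True,  "reoffended": False}] * 1
    + [{"group": "B", "flagged": False, "reoffended": False}] * 3
    + [{"group": "B", "flagged": True,  "reoffended": True}] * 2
)

rates = {g: false_positive_rate([r for r in records if r["group"] == g])
         for g in ("A", "B")}
print(rates)  # {'A': 0.5, 'B': 0.25}: group A is falsely flagged twice as often
```

A disparity of this kind, shown across a pooled set of claimants, is exactly the sort of material that could support the “particular disadvantage” stage of an indirect discrimination claim.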
It is not that FRT can never have relevance to the workspace; our concern is that there are many real dangers in pushing its utility too far in the absence of a thorough legal review.
Are there analogies with non-AI recruitment processes?
In truth, there is really very little difference between the problems that arise from FRT recruitment technology and mechanical tests for recruitment where marking can be done by a computer without human intervention or judgment. Indeed, multiple-choice tests for recruitment run by computers have already been held to be indirectly discriminatory and a cause of disability discrimination by the EAT in Government Legal Service v. Brookes. In that case Ms Brookes, who had Asperger’s Syndrome, complained that the multiple-choice test used in the recruitment process did not allow her to give written answers, and this claim was upheld.
Who is taking notice?
It is not surprising that legislators across the globe are becoming increasingly concerned by the use of FRT. For example –
The State of Illinois has passed an Artificial Intelligence Video Interview Act which places limitations on the use of FRT in the recruitment process.
A leaked draft document from the European Commission mooted the possibility of banning FRT for a fixed period whilst the efficacy of the technology is explored.
Back in October 2019, a private member’s bill entitled the “Automated Facial Recognition Technology (Moratorium and Review) Bill” was proposed in the House of Lords.
Whilst we may yet see additional legislation in the UK, in our view FRT that discriminates against prospective employees on protected grounds such as disability is already in contravention of the Equality Act 2010.
Actions to take now
There are many practical steps that organisations can take to minimise the risk that their AI systems contravene the Equality Act 2010, and these could be part of a larger discussion. These include –
Auditing systems to check for all forms of discrimination.
Identifying legitimate aims now.
Considering proportionality now.
Creating internal AI ethics principles and review mechanisms.
Interrogating the third-party suppliers of technology.
Considering “future-proofing” technology by looking at the ways in which it is likely to be regulated going forward.
Ensuring compliance with the GDPR, especially Article 22, which restricts solely automated decision-making in certain circumstances.
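The first of the steps above, auditing for discrimination, can begin with very simple checks. One common screening heuristic is the US “four-fifths” rule of thumb: a selection rate for one group below 80% of the best-performing group’s rate is treated as a red flag. To be clear, this is not the Equality Act 2010 test (which turns on particular disadvantage and justification), and all names and figures below are invented, but it illustrates how little machinery a first-pass audit requires.

```python
# A first-pass selection-rate audit sketch using the "four-fifths" heuristic.
# All candidate data is invented for illustration.

def selection_rates(candidates):
    """Map each group to its share of selected candidates."""
    totals, selected = {}, {}
    for group, was_selected in candidates:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def flags(candidates, threshold=0.8):
    """Return groups whose selection rate is below threshold * the best rate."""
    rates = selection_rates(candidates)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

candidates = [("disabled", True)] * 2 + [("disabled", False)] * 8 \
           + [("non-disabled", True)] * 5 + [("non-disabled", False)] * 5
print(flags(candidates))  # ['disabled'] -- a 20% rate against 50% is below the 80% line
```

A flag from a check like this is a prompt for proper legal and technical review, not a finding of unlawfulness, but running it routinely is far cheaper than defending litigation.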
We have experience of advising in relation to these issues and are happy to discuss what can be done in more detail.
Conclusion
We have been predicting for some time that AI will be the next frontier in discrimination law. The home-working we are all now practising may bring these issues to the fore even more quickly. We have to remember that Equality Act 2010 claims are a significant part of the Employment Tribunal’s caseload and can attract hefty compensation, especially where an individual has suffered financial loss. In those circumstances it seems very likely that AI-based discrimination claims in relation to badly designed or inappropriately used FRT systems will be brought against both existing and prospective employers.
So, we advise businesses to think carefully about the use of these systems. The risk is that they rush into purchasing and using them and then find that they are unlawful. The time for both employers and employee representatives to be thinking about these issues is now, while at least there is a “pause” in business activity and a possibility for deeper reflection.
And another point…
FRT is not only being used across the private and public sector[vi] in the employment sphere but also in relation to the provision of goods, facilities and services (GFS). Historically, there have been comparatively few discrimination claims in the GFS field. We think that this could change as this new tech is developed and used in different areas. We have also set out in some detail on our website how we believe that the Equality Act 2010 can be used to challenge discriminatory technology in various different scenarios outside of the employment field.
More information
A blog can only say so much. Much more information about AI, equality law and also data protection and human rights law is available on our website www.ai-lawhub.com, dedicated to explaining how AI in the widest sense can be used lawfully and appropriately. If this is of interest to you, check it out and maybe get in touch. We would be delighted to talk through these points as they apply to you.