Out-Law News
31 Aug 2021, 1:40 pm
Fresh guidance on monitoring workers and on using artificial intelligence (AI) tools in recruitment is to be issued to UK employers under plans announced by two regulators.
The Information Commissioner’s Office (ICO) said it will update its existing employment practices guidance, confirming its intention to provide “a new, more user-friendly online resource with topic-specific areas”. The ICO is seeking views on the precise topics that employers want to see addressed in its new guidance, but has said it intends to address the processing of personal data in the context of recruitment, selection and verification, employment records, monitoring at work and workers’ health, as well as data processing in the context of the Transfer of Undertakings (Protection of Employment) regulations (TUPE).
Separately, the Equality and Human Rights Commission (EHRC) has said it will provide guidance on “how the Equality Act applies to the use of new technologies in automated decision-making” and added that it would work with employers “to make sure that using artificial intelligence (AI) in recruitment does not embed biased decision-making in practice”. Those plans were outlined by the EHRC in its draft strategic plan for 2022 to 2025.
Employment law expert Katy Docherty of Pinsent Masons, the law firm behind Out-Law, said: “The Covid-19 pandemic has brought into sharp focus the need for clarity on the processing of workers’ health data, so the ICO’s plans to look specifically at that topic are welcomed. The pandemic has also accelerated the trend of remote working and so further guidance on what employers can do within the parameters of data protection law in implementing employee monitoring technology would also be useful.”
Docherty said that the increased adoption of AI tools for processing data in the employment context raises ethical issues beyond the sphere of data protection, which employers will increasingly need to consider carefully. Though the ICO has not yet provided guidance on this topic, it is preparing a public consultation on the overlap between data ethics and data protection. Employers should look out for the outcome of that consultation, as it, along with the employment practices guidance, should inform their decisions on which AI technology to use, particularly when processing data in the context of recruitment, selection and verification.
Anne Sammon, also of Pinsent Masons, said that ethical issues arising from the use of AI in the employment setting include questions of compliance with UK equality law. She welcomed the EHRC’s plans to provide some guidance for employers with that focus.
Sammon said there is growing demand for HR departments to be able to glean insights from data about employees to improve the way the organisation operates. She cited a Harvard Business School article from October 2020 that highlighted how the move to a virtual world of work is generating a larger volume of data about employees, and with it opportunities for employers to use that data to understand and predict behaviours and to rely more on technology and data in managing their people.
However, Sammon said it is vital that employers carry out due diligence before implementing AI tools.
On using AI in the recruitment process specifically, Sammon said: “Employers need to ensure they have sufficient information about how the algorithm works to be comfortable that it does not include any bias that could result in unfair selection or rejection of candidates. This can be challenging given the complexity of this type of algorithm and it is important that the decision maker(s) implementing this have sufficient expertise in the area to properly make informed decisions. Employers should be mindful that should there be any bias within an AI system that results in a suitable candidate being rejected for a reason connected with a protected characteristic, they could face claims that they have breached the Equality Act.”
“It will be important to check that there is no adverse impact on those with a particular protected characteristic and therefore regular analysis of the candidates who are successfully passing the AI stage and those who are not is important. It is also important that where any adverse impact is identified that steps are taken to understand why this is and to address it,” she said.
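By way of illustration only, the kind of regular analysis Sammon describes could take the form of a simple comparison of AI-stage pass rates across groups. The Python sketch below applies the “four-fifths” rule of thumb used in some jurisdictions as a screening check for adverse impact; the group labels, data and threshold are all hypothetical, and any real-world analysis would need to be designed around the employer’s own data and legal advice.

```python
from collections import Counter

def selection_rates(candidates):
    """Compute the pass rate of the AI screening stage per group.

    `candidates` is a list of (group, passed) tuples, where `group` is a
    label for a protected characteristic (hypothetical data here) and
    `passed` is True if the candidate cleared the AI stage.
    """
    totals, passes = Counter(), Counter()
    for group, passed in candidates:
        totals[group] += 1
        if passed:
            passes[group] += 1
    return {g: passes[g] / totals[g] for g in totals}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag any group whose pass rate is below `threshold` times the
    highest group's pass rate (the "four-fifths" rule of thumb)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical monitoring data: (group, passed_ai_stage)
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
rates = selection_rates(data)
print(rates)                        # {'A': ~0.67, 'B': ~0.33}
print(adverse_impact_flags(rates))  # {'A': False, 'B': True} -> investigate B
```

A flag raised by a check like this is not itself proof of discrimination; as Sammon notes, the important step is then to understand why the disparity arises and to address it.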
Sammon said that employers using AI systems to process job applications will need to put processes in place to make reasonable adjustments requested by prospective new joiners who are disabled.
“If a candidate has a disability and contacts the employer seeking for the AI process to be disapplied by way of a reasonable adjustment, the employer should have a process for considering this and reaching a fair but consistent decision,” she said.
Sammon said that it will be important for employers to review the terms and conditions attached to implementing AI tools developed by third parties and to consider seeking indemnities for any equality law breaches that arise as a result of the use of the technology. Employers will also want to ensure they can exercise some control over any changes made to the underlying algorithm to check that it does not import bias into the process.
“One of the concerns about AI is the infiltration of bias and we have already seen historic examples of this occurring in the recruitment context,” Sammon said. “As early as 1988, the Commission for Racial Equality found that St George’s Hospital Medical School had engaged in race and sex discrimination in its admission policy by using a computer program for screening applicants that discriminated against women and those with non-European-sounding names. The flaw in that system was that it had been developed, to a high degree of accuracy, to match human admissions decisions which themselves clearly contained bias.”
“There have been other high-profile examples of AI systems applying biased algorithms that disadvantage people with particular characteristics or backgrounds. This does not mean that AI is inherently flawed, but any employer seeking to rely on it to make hiring decisions needs to engage with the risks of bias in using such tools and should ensure that it properly understands how the underlying algorithm works to make informed decisions on the use of AI in the recruitment process,” she said.