Out-Law Analysis
02 Mar 2022
But the introduction of AI requires a focus on governance, as highlighted recently by the publication in the UK of the Office for Artificial Intelligence's (OAI) National AI Strategy.
Although there is an increasing body of general guidance from the OAI, little of it relates explicitly to human resources – but HR professionals' insight should be embedded wherever a business's use of AI impacts on people.
AI is a challenging and specialised field, often complicated by technical vocabulary, so HR professionals could consider specialising in it in order to advise meaningfully on how AI affects the workforce in a variety of contexts.
Specialist HR input is crucial to the success of an AI proposition. Some AI may simply affect the way workers carry out tasks, or it may go further and replace the need for workers altogether. AI may also have a primary HR application, such as monitoring and setting targets to aid performance management, or automating decisions on work allocation, pay, performance and recruitment.
HR specialists also need to factor in the potential HR risks of using AI applications. Harms from AI are often unintentional but can include bias and discrimination, unfair treatment, an obscured ability to challenge decisions and seek recourse, misuse of employee data and invasion of privacy, and possible negative effects on wellbeing as AI applications reduce the need for human interaction.
All of these can result in employment law claims as well as significant reputational risks. Of many examples, one of the most recent is a trade union-backed claim against Uber alleging that its facial recognition algorithm failed to recognise darker-skinned workers, resulting in a disproportionate rate of terminations.
The challenges of AI governance cut across disciplines, requiring input from data specialists, commercial managers, procurement and HR. An AI governance team should be established, and HR should be part of that broader hub of expertise. A diverse team can increase the opportunity for internal challenge and help to reduce bias while also avoiding ‘group-think’.
HR should input into the business’ broader AI ethics policy development, helping to develop a set of values and principles which guide the development and use of AI systems.
The business will already have an HR strategy aligned with its broader business strategy, values and purpose. This means that the HR team needs to embed its clear proposition of how the business wants to treat its workers into a broader framework of ethical values.
For example, HR ethical values may include promoting well-being, diversity and inclusion (D&I), fairness at work and environmental initiatives. HR can ensure these do not get lost in complex AI propositions.
Principles underpinning workforce governance should also be embedded in AI strategy. While principles are closely related to values, they relate more to the design and use of an AI system.
They may include accountability, transparency and non-discrimination. These concepts cross over all aspects of AI governance but should be familiar territory for HR professionals who will have encountered them in other areas, such as D&I and data protection. HR should be clear as to how it wants to see proposed AI applications align with these principles.
Once an ethical framework has been established, the implementation of any AI solution needs to be assessed against it. For example, an AI impact assessment scoring tool may be used.
Workforce values and principles should form a significant part of any AI impact assessment. If a proposed AI application does not align with workforce principles or values then the business may question its use, but a re-design of the AI application might resolve concerns.
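For illustration, a simple weighted scoring approach might look like the following sketch in Python. The principle names, weights and approval threshold shown are purely illustrative assumptions, not a recognised standard, and any real assessment should be designed with legal and specialist input.

```python
# A minimal sketch of an AI impact assessment scoring tool. The principles,
# weights and threshold below are illustrative assumptions only.

WORKFORCE_PRINCIPLES = {
    "accountability": 0.25,      # clear human ownership of decisions
    "transparency": 0.25,        # workers can understand how decisions are made
    "non_discrimination": 0.35,  # no unfair bias against protected groups
    "wellbeing": 0.15,           # impact on human interaction and workload
}

APPROVAL_THRESHOLD = 0.7  # illustrative cut-off for governance sign-off


def assess_application(scores: dict[str, float]) -> tuple[float, bool]:
    """Weight each principle score (0.0-1.0) and flag whether the
    proposed AI application clears the approval threshold."""
    total = sum(
        weight * scores.get(principle, 0.0)
        for principle, weight in WORKFORCE_PRINCIPLES.items()
    )
    return total, total >= APPROVAL_THRESHOLD


# Hypothetical example: a recruitment-screening tool scored by the governance team.
score, approved = assess_application({
    "accountability": 0.9,
    "transparency": 0.4,   # low: supplier cannot explain how rankings are produced
    "non_discrimination": 0.8,
    "wellbeing": 0.7,
})
print(f"Weighted score: {score:.2f}, approved: {approved}")
```

A tool like this does not replace judgment; its value is in forcing each proposed application to be scored against the same workforce principles and making the trade-offs visible to the governance team.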
An impact assessment is currently voluntary. However, the All-Party Parliamentary Group on the Future of Work has recommended an Accountability for Algorithms Act (32-page / 1MB PDF) which would legally require algorithmic impact assessments. These would always include an equality impact assessment.
The EU has also proposed a legal framework for AI. Although this would not apply in the UK, global businesses might be required to meet the standards of the framework. The OAI, in its National AI Strategy, committed to developing its national position on governing and regulating AI, which will be set out in a White Paper in early 2022. OAI guidelines for AI procurement in the public sector also recommend an AI impact assessment.
External suppliers are likely to be involved in the design and ongoing supply of AI solutions. HR professionals should position themselves so that they are involved at an early stage in decision-making by commercial teams. This will ensure that third parties understand the business's AI governance models as they relate to workforce matters.
Data is the foundation of most AI systems, and without quality data the AI systems will not perform well. For example, biased data cannot be expected to produce unbiased automated decisions.
HR professionals should be well versed in workforce data protection.
However, workforce data collated for one purpose may not be suitable for a given AI purpose under data protection laws.
Even when an AI system is implemented after rigorous testing, continuous monitoring throughout its life cycle is needed. HR should ensure that checks are engaged at each stage in the AI life cycle.
For example, has discriminatory or other unfair bias crept in despite initial modelling? Is the system still being used within the parameters of its intended purpose or has misuse crept in?
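One simple form of ongoing check, assuming the business logs automated decision outcomes by demographic group, is sketched below in Python. It applies the 'four-fifths rule', a common heuristic for spotting adverse impact; the threshold and figures are illustrative and do not constitute a legal test.

```python
# A minimal sketch of a recurring bias check over logged decision outcomes.
# Assumes outcomes are recorded per demographic group; all figures are
# hypothetical and the 0.8 threshold is a heuristic, not a legal standard.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favourable decisions, total decisions)."""
    return {group: fav / total for group, (fav, total) in outcomes.items()}


def adverse_impact_flags(outcomes, threshold: float = 0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's rate (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}


# Illustrative monthly monitoring data (hypothetical figures).
monthly = {
    "group_a": (120, 200),  # 60% favourable decisions
    "group_b": (80, 200),   # 40% favourable -> flagged (ratio 0.67)
}
print(adverse_impact_flags(monthly))
```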
Measuring AI outputs for HR-related risks may well need input from auditing specialists. OAI guidance also suggests engaging ‘hackers’ to seek out and identify discriminatory elements.
‘Explainable’ AI can be used to support decision-making affecting the workforce. Explainable AI means that a human user can understand how an AI system came to a decision. This contrasts with an AI decision being produced by an algorithmic ‘black box’ where it is not clear how the decision was reached.
Although an AI system that is opaque in its decision-making process might be less expensive because it is less sophisticated in its programming, the HR risks of an opaque system are likely to be too significant if decisions adversely impact workers.
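As a minimal sketch of what 'explainable' output might look like, the hypothetical Python example below returns each automated decision together with the factor contributions that produced it, so a human reviewer or affected worker can interrogate the reasons rather than face a bare score. The factors and weights are invented for illustration.

```python
# A minimal sketch contrasting explainable output with a bare score.
# The factors and weights are hypothetical; the point is that each
# automated decision exposes the contributions behind it instead of
# hiding them in a 'black box'.

FACTORS = {
    "missed_shifts": -0.4,
    "customer_rating": 0.3,
    "completed_training": 0.3,
}


def explainable_decision(worker: dict[str, float]) -> dict:
    contributions = {f: w * worker.get(f, 0.0) for f, w in FACTORS.items()}
    score = sum(contributions.values())
    return {
        "score": round(score, 2),
        "outcome": "refer to human reviewer" if score < 0 else "no action",
        # Each factor's contribution is surfaced, most negative first.
        "reasons": sorted(contributions.items(), key=lambda kv: kv[1]),
    }


print(explainable_decision({
    "missed_shifts": 2.0,       # contributes -0.8
    "customer_rating": 1.0,     # contributes +0.3
    "completed_training": 1.0,  # contributes +0.3
}))
```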
Employers should engage worker representatives or trade unions in the development of AI solutions where there is a significant workforce impact. This can add a further level of scrutiny and challenge to the governance process.
Employers should also clearly communicate how AI is being used in relation to the workforce. Under data protection laws transparency is required, so plain English and non-technical explanations should be used.
Decisions affecting the workforce should also have clear human ownership and oversight. This facilitates workers being able to raise queries or challenges. There should be informal routes to a human contact for simple queries and formal routes for grievances and whistleblowing challenges.
Co-written by Gemma Herbertson of Pinsent Masons.