Out-Law Analysis
28 Apr 2022, 1:35 pm
Meaningful human oversight of the way artificial intelligence (AI) systems operate is considered essential by experts in the technology and is increasingly being demanded by policymakers and regulators across Europe.
A recent report by MEPs offers businesses a fresh perspective on the growing expectations around human oversight of AI. It adds to the resources businesses – in particular financial services firms – can draw on to determine the practical steps needed to deliver effective, meaningful oversight that safeguards against consumer harm and corresponds with emerging law and regulation.
Many businesses increasingly rely on AI systems to carry out functions traditionally performed by humans. As use grows and the technology continues to develop, businesses should consider how far they rely solely on the technology and how far they allow AI systems to run autonomously.
Many such businesses already operate against a heavy regulatory backdrop – in financial services, for example, there are particularly stringent requirements in relation to customer-facing operations. New EU laws on AI are set to introduce further requirements in respect of ‘high risk’ AI systems and specific AI use cases, such as credit checking. In the UK, the development of “additional cross-sector principles or rules, specific to AI” is also under consideration. The Office for AI is developing a “pro-innovation national position” on governing and regulating AI, which is expected to be articulated in a white paper “in early 2022”, according to the UK’s national AI strategy published last autumn.
In its report on trustworthy AI, the EU High-Level Expert Group on AI (EU HLEG) said that “any allocation of functions between humans and AI systems should follow human-centric design principles and leave meaningful opportunity for human choice”, which in turn requires implementing human oversight and controls over AI systems and processes. The concepts of human centricity and oversight were carried over into the European Commission’s draft AI regulation (EU AI Act).
In respect of high risk AI systems, the draft EU AI Act provides that such systems should “be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use”.
According to the EU HLEG, there are various ways and differing levels of oversight which can be used. These include:
- ‘human-in-the-loop’ (HITL) – the capability for human intervention in every decision cycle of the system;
- ‘human-on-the-loop’ (HOTL) – the capability for human intervention during the design cycle of the system and monitoring of the system’s operation;
- ‘human-in-command’ (HIC) – the capability to oversee the overall activity of the AI system, including its broader economic, societal, legal and ethical impact, and to decide when and how to use the system in any particular situation.
The level of oversight required will also depend on factors such as what the system is being used for and the safety, control, and security measures in place. The less oversight a human can exercise over an AI system, the more testing and governance will be required to ensure that the system is producing accurate and reliable outputs.
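As a purely illustrative sketch of how these oversight levels might translate into system design, the Python below routes a model output either for automatic application or to a human review queue. Every name here – Decision, OversightLevel, route and the 0.9 confidence threshold – is hypothetical, and real routing logic would reflect the use case and the safety, control and security measures described above.

```python
from dataclasses import dataclass
from enum import Enum, auto

class OversightLevel(Enum):
    HUMAN_IN_THE_LOOP = auto()  # a person approves every decision cycle
    HUMAN_ON_THE_LOOP = auto()  # a person monitors and can intervene
    HUMAN_IN_COMMAND = auto()   # a person decides whether the system is used at all

@dataclass
class Decision:
    subject_id: str
    recommendation: str
    confidence: float  # the model's own confidence estimate, 0.0-1.0

def route(decision: Decision, level: OversightLevel,
          review_threshold: float = 0.9) -> str:
    """Return how a model output should be handled under a given
    oversight level: queued for a human reviewer, or applied
    automatically while a person monitors."""
    if level is OversightLevel.HUMAN_IN_THE_LOOP:
        return "queue_for_human_review"   # every output is reviewed
    if decision.confidence < review_threshold:
        return "queue_for_human_review"   # low-confidence outputs escalate
    return "apply_with_monitoring"        # a person can still step in

print(route(Decision("c-881", "decline", 0.72),
            OversightLevel.HUMAN_ON_THE_LOOP))
# -> queue_for_human_review
```

Note that the less automatic escalation a design like this provides, the more the testing and governance burden shifts elsewhere, as the paragraph above observes.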
High levels of human involvement may not be possible, desirable or cost-effective in practice. This appears to be recognised by policymakers and regulators in both the EU and UK – the European Commission, the UK Information Commissioner’s Office (ICO), and the AI Public Private Forum (AIPPF) set up by the Financial Conduct Authority and the Bank of England all agree that the level of human oversight used must be “appropriate”.
Having the right people involved at the right stage of the AI lifecycle helps ensure that any human oversight or intervention is an effective safeguard.
The AIPPF in a recent report said that there is “a need to increase data skills across different business areas and teams” and that “board members and senior managers are not always aware of, or do not fully appreciate, the importance of issues like data quality”. It added that “there is a need to increase understanding and awareness at all levels of how critical data and issues like data quality are to the overall governance of AI in financial services firms”.
Similar views are shared by the ICO. It has said that organisations should ensure they decide upfront who will be responsible for reviewing AI systems and that AI developers understand the skills, experience and ability of human reviewers when designing AI systems. The ICO explains that organisations should “ensure human reviewers are adequately trained to interpret and challenge outputs” from the AI system, and “human reviewers should have meaningful influence on the decision, including the authority and competence to go against the recommendation”.
The ICO further explains in its guidance on AI and data protection that “the degree and quality of human review and intervention before a final decision is made about an individual are key factors” in relation to solely automated decision making. Human reviewers must be involved in checking an AI system’s decision or output and should not automatically apply the decision of the system. The review must be meaningful and active, not simply a “token gesture”, and should include the ability to override a system’s decision; reviewers “must ‘weigh up’ and ‘interpret’ the recommendation, consider all input data, and also take into account other additional factors”.
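One way to evidence that a review was meaningful rather than a token gesture is to record what the reviewer actually weighed. The sketch below is an assumption-laden illustration, not a prescribed format: ReviewedDecision and all of its fields are hypothetical names, chosen to reflect the ICO’s points that reviewers consider input data and additional factors and have the authority to go against the recommendation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewedDecision:
    """A record of one human review of an AI recommendation.

    Requiring a rationale and the factors weighed makes it harder
    for the review to be a rubber stamp, and the record shows
    whether the reviewer exercised their authority to override."""
    reviewer_id: str
    ai_recommendation: str
    final_decision: str
    input_data_considered: list[str]
    additional_factors: list[str]
    rationale: str
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        """True where the reviewer went against the AI recommendation."""
        return self.final_decision != self.ai_recommendation

record = ReviewedDecision(
    reviewer_id="r-102",
    ai_recommendation="decline",
    final_decision="approve",
    input_data_considered=["credit_history", "declared_income"],
    additional_factors=["updated payslips supplied after the model scored the case"],
    rationale="Income data used by the model was stale; new evidence supports approval.",
)
print(record.overridden)  # True
```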
Responsibility for meaningful human input around solely automated decision making lies throughout an organisation and not only with the individual using the AI system, according to the ICO. Senior leaders, data scientists, business owners, and those with oversight functions are cited as being “expected to play an active role in ensuring that AI applications are designed, built and used as intended”.
Both the ICO and EU HLEG have articulated steps that businesses can take to ensure they apply meaningful human oversight of AI systems in practice. A recent report by two European Parliament committees, which proposes amendments to the draft EU AI Act, suggests that some specific requirements in this regard will soon be stipulated in EU law.
The ICO notes that training of staff is important in controlling the level of automation of a system, and recommends that organisations train or retrain human reviewers accordingly.
Training is also endorsed in the MEPs’ report, which suggests stipulating in EU law that businesses using ‘high risk’ AI ensure that the people responsible for human oversight of those systems “are competent, properly qualified and trained and have the necessary resources in order to ensure the effective supervision of the system”. The MEPs also suggest that the law require providers of ‘high risk’ AI systems to “ensure that natural persons to whom human oversight of high-risk AI systems is assigned are specifically made aware and remain aware of the risk of automation bias”.
These requirements would complement Article 14 of the European Commission’s draft EU AI Act, which already lists proposed requirements on those tasked with providing human oversight. “As appropriate to the circumstances”, the Commission has said those individuals should:
- fully understand the capacities and limitations of the system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
- remain aware of the possible tendency of automatically relying or over-relying on the output produced by the system (‘automation bias’);
- be able to correctly interpret the system’s output;
- be able to decide, in any particular situation, not to use the system or to otherwise disregard, override or reverse its output;
- be able to intervene in the operation of the system or interrupt it through a “stop” button or a similar procedure.
Training will be a prerequisite to ensuring individuals can fulfil those expectations, and any others that are added as the draft EU AI Act continues to be scrutinised.
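For illustration only, a “stop” capability of the kind Article 14 contemplates can be modelled in software as a wrapper that lets an overseer halt further automated outputs. This is a minimal sketch under assumed names – OverseenSystem, predict, stop and resume are all hypothetical – and it says nothing about the organisational procedures that would need to sit around such a control.

```python
import threading

class OverseenSystem:
    """Wraps a model so that a human overseer can halt it: a software
    'stop button' in the spirit of draft Article 14, alongside the
    overseer's ability simply to disregard any individual output."""

    def __init__(self, model):
        self._model = model
        self._halted = threading.Event()

    def predict(self, features):
        if self._halted.is_set():
            raise RuntimeError("system halted by its human overseer")
        return self._model(features)

    def stop(self) -> None:
        """Interrupt the system; no further automated outputs."""
        self._halted.set()

    def resume(self) -> None:
        self._halted.clear()

system = OverseenSystem(
    lambda features: "approve" if features["score"] > 0.5 else "decline")
print(system.predict({"score": 0.8}))  # approve
system.stop()  # the overseer presses the 'stop button'
# any further call to system.predict(...) now raises RuntimeError
```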
Keeping records of human input into, and review of, decisions made by AI systems can help businesses assess and manage the risk arising from AI use. Noting how often human reviewers agree or disagree with AI decision making can also help with determining a system’s accuracy, quality and efficiency. This is particularly helpful where AI systems are used in customer-facing environments.
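As a hedged illustration of that record-keeping, the sketch below logs each review against the AI recommendation and computes a human-AI agreement rate. ReviewLogEntry and agreement_rate are hypothetical names; in practice such logs would also capture timestamps, reviewer identity and rationale, as in the earlier sketch.

```python
from dataclasses import dataclass

@dataclass
class ReviewLogEntry:
    ai_recommendation: str
    final_decision: str

def agreement_rate(log: list[ReviewLogEntry]) -> float:
    """Share of logged reviews in which the human accepted the AI
    recommendation. A falling rate can flag accuracy or data-quality
    problems worth investigating."""
    if not log:
        raise ValueError("no reviews logged yet")
    agreed = sum(e.final_decision == e.ai_recommendation for e in log)
    return agreed / len(log)

log = [
    ReviewLogEntry("approve", "approve"),
    ReviewLogEntry("decline", "decline"),
    ReviewLogEntry("approve", "decline"),  # reviewer overrode the system
    ReviewLogEntry("approve", "approve"),
]
print(f"human-AI agreement: {agreement_rate(log):.0%}")  # 75%
```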
The EU HLEG guidelines set out a number of considerations, framed as questions, to help organisations manage their human review and oversight processes, providing a form of checklist that businesses can assess themselves against.
Businesses should ensure that governance processes for AI include adequate and appropriate human review measures. Where personal data is processed, data protection rules on solely automated decision making must also be considered, and measures implemented to control the level of human input so that those requirements are met.
Any human oversight must be meaningful, and businesses should ensure that those reviewing AI decision making are suitably trained and skilled to do so, as well as empowered to override AI decisions where necessary.
Co-written by Priya Jhakra of Pinsent Masons.