The law and regulation impacting the use of artificial intelligence (AI) tools in financial services is set to change in 2022.
Anticipated developments, such as the bespoke new EU AI legislation and the fleshing out of the UK’s approach to AI governance and regulation, reflect policymakers’ desire to drive investment in AI and build public confidence in the use of the technology.
For financial services businesses, the regulatory and legal developments expected in the year ahead will shape the way they implement AI. Those developments will also support the necessary continued focus on building consumer trust in AI by ensuring that the technology is used ethically, transparently, and with human needs and rights at the forefront.
In April 2021, the European Commission published its draft regulation on AI, the first of its kind, with the aim of developing a bespoke regulatory framework for AI technology. The proposed regulation, if adopted in its current form, would introduce: a strict regime and mandatory requirements for "high-risk" AI, such as AI systems used to evaluate creditworthiness or establish credit scores; limited requirements for specific types of AI, such as chatbots; and a ban on certain uses of AI, such as AI systems that deploy subliminal techniques beyond a person's consciousness.
The regulation will apply to providers and users of AI based in the EU. It will also regulate providers and users of AI systems established in a third country, where the AI systems are located in the EU or to the extent that the output those systems produce is used in the EU.
The proposed regulation was open for consultation last year. Bodies such as the European Data Protection Board and the European Data Protection Supervisor have issued their views, in a joint opinion, on the effect the regulation is likely to have on various sectors, including financial services.
The AI Act, as the proposed legislation has been dubbed, is expected to be finalised and enter into force in the second half of 2022, although a transitional period is expected to apply before the regulation takes effect. During the transitional period, more detailed technical standards are likely to be developed and the governance structures established by the regulation will come into effect.
Full implementation of the AI Act is unlikely until the second half of 2024 at the earliest.
The UK government published its national AI strategy, which concerns implementation and use of AI in the UK, in autumn 2021. The strategy sets out the government’s proposed timeline for implementing actions and outcomes detailed in the strategy, including the governance and regulation of AI.
The strategy highlights the importance of having in place an effective UK regulatory and governance regime that supports innovation while also building consumer confidence and trust in AI technologies. The UK government has committed to a number of actions within six months of the strategy’s publication, and it is likely that we will see activity on those actions early this year.
The UK government has also highlighted the importance of standards. It is aiming to develop standards that will ensure that “the principles of trustworthy AI are translated into robust technical specifications and processes that are globally recognised and interoperable”.
The strategy also aims to establish a flexible and proportionate AI governance framework which looks at the challenges and opportunities of AI; support the development of AI assurance tools and services to provide meaningful information about AI systems to users and regulators; and help UK regulators with their capabilities to use and assess AI.
According to the strategy, “effective, pro-innovation governance of AI means that (i) the UK has a clear, proportionate and effective framework for regulating AI that supports innovation while addressing actual risks and harms, (ii) UK regulators have the flexibility and capabilities to respond effectively to the challenges of AI, and (iii) organisations can confidently innovate and adopt AI technologies with the right tools and infrastructure to address AI risks and harms”.
Until now, the UK government has endorsed a sector-led approach to AI regulation. It agreed, for example, with the House of Lords’ view in 2018 that general AI-specific regulation would be “inappropriate” and that “existing sector-specific regulators are best placed to consider the impact on their sector of any subsequent regulation which may be needed”.
However, the contents of the national AI strategy indicate that views may be changing, and that the UK government’s position could shift following an assessment of the challenges posed by AI, technology-specific issues, and possible regulatory concerns. The strategy acknowledges the potential for inconsistent approaches across sectors to create confusing and contradictory requirements, as well as the “potential for issues to fall between the gaps” and the risk of cutting across existing regulation such as data protection frameworks.
The first significant sign of whether there will be a change in tack is likely to come from the Office for AI in a white paper it is expected to publish in early 2022. That paper will set out the UK’s national position on the governance and regulation of AI, including the government’s view of the potential risks and harms of AI and its proposals for addressing those risks.
As well as setting out its position on the regulation of AI, the UK government has also committed to working with regulators such as the FCA to look at whether there is overlap in sector-specific regulatory approaches. Outcomes of this work are likely to be included within the white paper, or follow later in the year.
Regulators, such as the FCA, the Information Commissioner’s Office and the Competition and Markets Authority, are also working towards greater co-operation and a cohesive approach to digital regulation through initiatives such as the Digital Regulation Cooperation Forum.
Plans to reform UK data protection law were also published by the UK government in the second half of last year. The Department for Digital, Culture, Media and Sport (DCMS) paper looked at various potential amendments to the UK’s data and privacy laws and included discussion in respect of specific data protection provisions which have an impact on the development and deployment of AI.
Topics addressed included rules around the use and reuse of personal data under the ‘legitimate interests’ test, including for the purposes of bias detection and mitigation; the use of special category personal data for bias detection and mitigation in AI systems; and potential clarifications around “fairness” in a data protection context. The government is also assessing the suitability and operation of Article 22 of the UK GDPR in respect of data subject rights relating to automated decision making and profiling, as well as mandatory transparency requirements for the use of algorithmic decision making in the public sector.
The DCMS paper was open for consultation until 19 November 2021. Pinsent Masons submitted its views on the proposals in response. A summary of all the feedback DCMS received is expected to be published in the coming weeks, and further direction from DCMS on the reforms the government intends to pursue is also likely this year.
In 2021, the FCA and the Bank of England continued to work together on the use of AI in financial services, holding a series of meetings of the joint AI Public Private Forum (AIPPF). The AIPPF was set up to help the regulators better understand how AI is driving change in financial markets and its impact on "business models, products, services and consumer engagement".
At its last meeting in 2021, the AIPPF discussed governance and its importance when adopting AI in the UK financial services sector. The AIPPF considered the use of governance structures that support innovation and ethical and safe AI, as well as whether existing frameworks – such as cloud governance frameworks where AI is used on cloud platforms, and data governance frameworks – need to be adapted.
The roles and responsibilities of financial services firms, including lines of accountability, human oversight and compliance with responsibilities, and transparency in communicating with regulators, compliance teams, developers and consumers, were also discussed. The role of regulators, and the potential for additional regulatory standards and certified auditing regimes, were noted as key to governing AI use in the sector.
The AIPPF is expected to hold workshops in 2022 focusing on specific governance-related issues and is working towards publishing a final report at the end of its work. The Bank of England and the FCA also plan further engagement with the financial services sector more generally following the AIPPF’s discussions, including on how to take forward the AIPPF’s final findings and recommendations. However, the activities and outputs of the AIPPF are not concrete indications of the future policy of the Bank of England or the FCA.
While the UK’s national AI strategy indicates a possible change of direction in the UK’s approach to AI, it remains to be seen to what extent this will be influenced, following Brexit, by the European Commission’s proposed AI regulation and by the UK government’s proposed data reforms. Any significant changes to the UK’s data protection regime may create challenges in aligning the UK’s position with the EU regulation. A departure from the EU position may also create other regulatory hurdles for businesses in the UK, a factor the UK government will likely take into consideration when determining the UK’s approach.
While not all elements of the EU AI Act are intended to apply in full to financial services businesses, the Act includes proposed requirements that such businesses would need to follow, such as those relating to AI used to assess creditworthiness or generate credit scores, and mandatory transparency requirements for certain types of AI. The draft regulation also strongly encourages the adoption of voluntary codes of conduct reflecting the principles and requirements it sets out in respect of “high risk” AI.
Financial services businesses should therefore ensure that they are fully engaged with policy advocacy efforts to promote a practical approach to business processes and new regulatory requirements. They can prepare for upcoming regulation by reviewing existing practices and putting in place controls and processes specific to the regulation and governance of AI systems, while also maintaining compliance with existing principles and requirements in the financial services sector.
Co-written by Priya Jhakra of Pinsent Masons.