Out-Law Analysis

How financial services providers can prepare for new AI regulation in Europe


Fresh legislation specifically regulating the use of artificial intelligence (AI) by businesses in Europe is expected to be proposed in the coming months. Recent publications by EU policymakers and lawmakers offer clues as to what businesses can expect the new laws to contain.

Reform is also likely in the UK. Prime Minister Boris Johnson has said he believes Brexit provides the government with scope "to originate new frameworks for the sectors in which this country leads the world", citing artificial intelligence as a specific example.

Financial services providers should prepare for the forthcoming changes. Understanding how existing laws and regulatory requirements relate to AI, and reviewing internal processes and controls, is a good place to start.


Luke Scanlon

Head of Fintech Propositions

We expect that regulatory expectations on the use of AI by regulated entities will become more clearly defined over the coming months and years  

Timeframes for change

The European Commission is aiming to publish its proposals for a legal framework covering the safety, liability, fundamental rights and data aspects of AI, with an indicative timeframe of the first quarter of 2021 for proposals to be tabled. Those proposals are likely to be heavily scrutinised and debated by the EU's law-making bodies – the European Parliament and Council of Ministers – before the text is agreed, and a further transitional period will apply before the finalised law takes effect. New laws are therefore unlikely to apply to the use of AI in the EU for at least another couple of years. However, as we explain below, there are existing laws and regulations that businesses need to be aware of.

Clues as to what the new laws will address

Early last year, the Commission published a wide-ranging new digital strategy along with a white paper on AI that explored options for a legislative framework for trustworthy AI, and considered what further action may be needed to address issues of safety, liability, fundamental rights and data.

The Commission said it believes existing legislation in areas such as liability, product safety and cybersecurity "could be improved" to better address risks around the use of AI, and further suggested it is in favour of a new "risk-based approach" to regulation being set at EU level. 

Options for reform were also explored in the European Commission's inception impact assessment (IIA), which looked at a proposal for a new EU legal act laying down requirements for AI. The IIA, published in July 2020, offers different options for regulation:

  • Option 0 (the baseline) – No specific EU legislative requirements, and reliance on existing EU legislation and member state rules.


  • Option 1 (EU "soft law") – Focusing on promoting industry initiatives for AI, including ethical codes and industry guidance. This approach may build on existing guidelines and initiatives, and encourage monitoring and reporting on voluntary compliance with them. Self-reporting, encouraging industry-led coordination on a set of AI principles, raising awareness of existing initiatives, and monitoring and encouraging the development of standards are all listed as possible measures under this approach.


  • Option 2 (voluntary labelling scheme) – This option would involve the enactment of an EU legislative instrument setting up a voluntary labelling scheme to allow customers to identify trustworthy AI through the meeting of certain requirements. Participation would be voluntary, but those taking part would be required to comply with certain EU-wide requirements, in addition to existing EU law, in order to display a quality AI label signalling that the AI application is trustworthy.

  • Option 3 (mandatory requirements for all or certain types of AI) – Requires the enactment of EU legislation introducing mandatory requirements in relation to particular AI issues such as training data, record-keeping about datasets and algorithms, information to be disclosed, robustness and accuracy, and human oversight. The IIA suggests considering whether such legislation should be limited to a specific category of AI only, such as biometric identification systems; confined to "high-risk" AI identified on the basis of set criteria, such as those identified in the Commission's white paper on AI; or extended to cover all AI applications.

  • Option 4 (combination) – A combination of any of the options above, taking into account the different levels of risk that could be generated by a particular AI application.

The adoption of ethical and legal proposals relating to intellectual property, civil liability and ethics by MEPs in October last year indicates the direction of future legislation. The draft proposals focused on the issues to be addressed, including the need for human-centric AI controls, transparency, safeguards against bias and discrimination, privacy and data protection, liability, and intellectual property. The MEPs' proposals can be viewed as an attempt to shape the proposals the Commission is expected to put forward in the coming weeks.

Developments in the regulation of AI in the UK are also possible. The UK's Centre for Data Ethics and Innovation has proposed, in its report on bias in algorithmic decision-making, a roadmap for government, regulators and industry which balances increasing fairness and reducing bias with supporting "responsible innovation".

The report includes a number of recommendations to government, including a mandatory transparency requirement on public sector organisations, but concluded that there may not be a need for a new specialised regulator or primary legislation to address algorithmic bias. The report also suggested that existing regulators need to adapt their enforcement to algorithmic decision-making and "provide guidance on how regulated bodies can maintain compliance in an algorithmic age". It recognised that, in comparison to other sectors, the regulatory landscape in the financial services sector is clearer, with the Financial Conduct Authority (FCA) seen as taking the lead on working to understand the impact and opportunities involved in using data and AI.

The UK AI Council has also set out a roadmap and recommendations to help the UK government develop a national AI strategy, which may influence future UK regulation. A recent report to the government from the House of Lords Liaison Committee favours the approach of sector-specific regulatory guidance, and echoes the Select Committee on AI's call for implementation of "a cross-sector ethical code of conduct, or ‘AI code’, suitable for implementation across public and private sector organisations which are developing or adopting AI".

As regulation of AI will have a significant impact on the financial services sector, regulators will need to ensure that any requirements, mandatory or voluntary, are implemented following consultation with financial services providers, to strike the appropriate balance between regulating the use of AI within the sector and supporting the development of, and innovation in, the technology. Regulation should also not have a detrimental impact on a provider's ability to provide services effectively, and checks will be needed to ensure that consumer rights are upheld.

What financial services providers can do to prepare for future regulation

While there are clues as to the direction of travel, the detail of what future AI regulation may look like is still to emerge. However, it is clear that AI regulation in some form is coming and so financial services providers should begin to look at what they can do in the meantime to prepare for the new AI landscape.

Regulatory or industry standards, principles and guidance

Regulators and industry bodies have already taken steps to assist financial services providers, whether through existing standards and guidance or new AI-specific guidance. The FCA Principles for Businesses for the financial services sector continue to apply irrespective of whether services are being provided in the traditional sense or using AI, and provide useful guidance to keep firms on track when faced with key AI-related issues.

For example, providers must ensure that they are transparent and able to explain AI decision-making, and must monitor their use of AI to ensure fairness to customers and avoid breaching Principles 6 and 7, which respectively concern customers' interests and communications with customers.

The FCA and Bank of England have also published a useful report on the use of AI and machine learning in the financial services sector, including details on approaches taken within the sector in relation to performance monitoring of deployed models, validation of models to ensure systems are being used as intended, and processes used by firms to mitigate risks – such as human-in-the-loop review, back-up systems, guardrails and kill switches.
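By way of illustration, the sketch below shows how a guardrail, human-in-the-loop review and a kill switch might fit together around a scoring model. It is a minimal, hypothetical Python example: the GuardedModel class, the score() interface and the thresholds are our own assumptions for illustration, not a pattern prescribed by the FCA and Bank of England report.

from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    score: float
    needs_human_review: bool
    reason: str

class GuardedModel:
    """Wraps a scoring model with illustrative risk controls: a confidence
    guardrail that routes borderline cases to a human reviewer, and a kill
    switch that halts automated decisions entirely."""

    def __init__(self, model, low: float = 0.3, high: float = 0.7):
        self.model = model        # hypothetical object exposing a score() method
        self.low, self.high = low, high
        self.killed = False       # kill switch flag, settable by operations staff
        self._kill_reason = ""

    def kill(self, reason: str) -> None:
        # Disable all automated decisions, e.g. after model drift is detected.
        self.killed = True
        self._kill_reason = reason

    def decide(self, application: dict) -> Decision:
        if self.killed:
            # Fail safe: no automated outcome once the kill switch is thrown.
            return Decision(False, 0.0, True, "kill switch: " + self._kill_reason)
        score = self.model.score(application)
        if self.low < score < self.high:
            # Guardrail: borderline scores go to a human in the loop.
            return Decision(False, score, True, "borderline score - human review")
        return Decision(score >= self.high, score, False, "automated decision")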


Luke Scanlon

Head of Fintech Propositions

Financial services providers should assess whether current terms and conditions need to be updated in order to ensure that the terms used are fit for purpose

Existing legislation

While AI-specific regulation is currently limited, existing legislation in areas such as consumer rights, data protection and competition continues to be relevant and to apply in an AI context.

The EU General Data Protection Regulation and the UK's Data Protection Act 2018 include requirements in respect of automated processing, such as the requirement to provide individuals with meaningful information about the logic involved and the consequences of the processing, and, in certain circumstances, the right not to be subject to decisions based solely on automated processing. The legislation also requires fair and transparent processing.
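As a purely illustrative sketch of what this can mean in practice, the Python example below records, alongside an automated outcome, the main factors that drove it and the availability of human review. The weighted-rules model, field names and threshold are hypothetical assumptions, not a prescribed way of meeting these requirements.

def explain_decision(inputs: dict, weights: dict, threshold: float) -> dict:
    """Score a simple weighted-rules model and record, alongside the
    outcome, the factors behind it and the consequence for the individual."""
    contributions = {k: inputs[k] * weights[k] for k in weights}
    score = sum(contributions.values())
    top_factors = sorted(contributions, key=contributions.get, reverse=True)[:3]
    return {
        "outcome": "approved" if score >= threshold else "referred",
        "score": round(score, 2),
        "main_factors": top_factors,    # the "logic involved"
        "consequence": "credit limit applied" if score >= threshold
                       else "application referred to an adviser",
        "human_review_available": True, # supports the right to human review
    }

# Illustrative use with hypothetical, normalised inputs:
record = explain_decision(
    inputs={"income_band": 0.8, "repayment_history": 0.9, "utilisation": 0.4},
    weights={"income_band": 0.3, "repayment_history": 0.5, "utilisation": 0.2},
    threshold=0.6,
)

A record like this can be retained and surfaced to the individual on request, which is one way of evidencing fair and transparent processing.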

Rules in relation to unfair contract terms also continue to apply, and AI should not be used for anti-competitive purposes. Existing liability frameworks can also apply where AI gives rise to unintended consequences or where a provider faces claims for breach of contract. Financial services providers should assess whether current terms and conditions need to be updated in order to ensure that the terms used are fit for purpose.

Ethics

It is expected that any future EU regulation will be based on the ethical principles endorsed by the European Commission in recent years. Financial services providers should therefore take into account the Commission's associated guidance.

Internal controls and measures

Regularly reviewing internal processes and measures can help to identify gaps and to assess whether current processes remain fit for purpose in relation to AI use. Risk assessments, monitoring of datasets, governance processes, and clear complaints and dispute procedures are all measures which financial services providers should consider implementing to ensure readiness for AI regulation.
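As a simple illustration of dataset monitoring, the Python sketch below flags when the data a deployed model sees in production drifts away from its training data. The feature, figures and two-standard-deviation tolerance are hypothetical; real monitoring regimes will be more sophisticated.

from statistics import mean, stdev

def drift_alert(training_values: list[float],
                live_values: list[float],
                tolerance: float = 2.0) -> bool:
    """Flag when the live mean of a feature drifts more than `tolerance`
    training standard deviations away from the training mean."""
    baseline_mean = mean(training_values)
    baseline_sd = stdev(training_values)
    shift = abs(mean(live_values) - baseline_mean)
    return shift > tolerance * baseline_sd

# Example: incomes seen in production are markedly higher than in training,
# which should prompt a governance review before automated decisions continue.
if drift_alert([30_000, 32_000, 29_500, 31_000], [45_000, 47_500, 44_000]):
    print("Data drift detected - escalate under the model risk process")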

We expect that regulatory expectations on the use of AI by regulated entities will become more clearly defined over the coming months and years as legislation and guidance come into force in both the UK and the EU. Financial services providers therefore need to understand the extent to which they are equipped to meet the best practice standards already available in the market, so that they are ready to comply with more AI-specific regulatory requirements in the future.

Co-written by Priya Jhakra and Luke Scanlon of Pinsent Masons.
