Policy makers around the world are grappling with how to regulate the use of artificial intelligence (AI). Great care is needed to ensure that regulation protects consumers from harm but is not so tight as to stifle innovation.

The regulation of AI is at an early stage in most jurisdictions, but businesses should prepare for bespoke rules governing the use of AI to emerge in the near future. A recent workshop hosted by Pinsent Masons in partnership with Innovate Finance highlighted issues that financial services firms will want to consider when adopting AI systems.

A summary of developments to date

A global arms race is taking place as some of the world’s biggest economies compete to develop the best technology and make the most effective use of it. With this in mind, countries are taking different approaches to the development and regulation of AI. This is reflected in a new report published by Pinsent Masons, the law firm behind Out-Law.

The report explores how countries are approaching issues such as fairness, explainability, and bias, as well as data protection, liability and customer redress. Attention is focused on regulatory developments in select jurisdictions: France, Germany, Hong Kong, Ireland, Spain, Singapore, the UAE and the UK, as well as the EU more generally.

As the report highlights in some detail, moves to regulate AI have been tentative to date. Many countries, including Germany, Ireland, Spain and the UAE, have developed national AI strategies, while financial regulators such as BaFin, Banco de España and the French Prudential Control Authority have expressed growing interest in enabling fintech innovation, including AI-powered solutions, while also seeking to protect consumers from harm.

In many jurisdictions, including Hong Kong and Singapore, regulatory sandboxes provide scope for testing new fintech AI tools in a live but controlled environment. The Hong Kong Monetary Authority has also provided guidance on the development and use of AI in the financial services sector, while in Singapore a model AI governance framework is in place to help organisations design, develop and use AI responsibly.

Following Brexit, all eyes are now on the UK government to see how it approaches the question of regulating AI and how the approach might differ from that pursued by EU policymakers.

A significant recent UK development came from the Information Commissioner's Office (ICO), which has developed draft guidance on a new AI auditing framework. This reflects the fact that data protection compliance is a major consideration when using AI. Further guidance on AI for businesses in UK financial services is likely to emerge from the Centre for Data Ethics and Innovation's ongoing work on algorithmic bias, and from a joint project between the Financial Conduct Authority (FCA) and the Alan Turing Institute on the explainability of AI.

Perhaps the most significant steps towards regulating AI to date have been taken by the European Commission. It is on the verge of drawing up a bespoke regulatory framework for the use of 'high-risk' AI, and has suggested that reforms are likely to ensure more effective application and enforcement of existing EU and national legislation in relation to AI. The apparent move towards legislative reform comes after the Commission endorsed new guidelines on the ethical use of AI in 2019.

The forthcoming changes, according to recent publications by the Commission, could include alterations to the legal concept of safety – to ensure it addresses risks such as cyber threats, threats to personal security and those that may result from a loss of connectivity – as well as changes to the liability framework.

Insight from the workshop

The issue of liability for AI was addressed at our recent workshop by Angus McFadyen of Pinsent Masons. Much of the policy discussion to date has centred on a potential shift to strict liability for AI in some contexts, but McFadyen queried whether a strict liability regime would work in practice.

He said that while a strict liability regime might make sense from a consumer perspective, enabling consumers to obtain redress quickly when things go wrong, further thought is needed on how liability should be addressed in business-to-business contracts governing the supply and use of AI.

On data protection, Kathryn Wynn of Pinsent Masons said one of the big challenges businesses implementing AI systems face is in meeting the transparency obligations under UK data protection law. This, she said, requires businesses to go into a degree of detail about how their systems operate in respect of the processing of personal data.

According to Wynn, a further challenge lies in understanding whether data that appears to be anonymised, and therefore falls outside data protection law, is in fact personally identifiable, given the risk of re-identification in an age of pooled datasets and powerful algorithms.
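To illustrate the re-identification risk Wynn describes, the short Python sketch below shows a simple linkage attack: a dataset stripped of names is joined with an auxiliary dataset on shared quasi-identifiers, re-attaching identities to supposedly anonymised records. All datasets, column names and figures here are invented for illustration only.

```python
import pandas as pd

# "Anonymised" dataset: direct identifiers (names) removed, but
# quasi-identifiers (postcode, birth year, sex) retained.
claims = pd.DataFrame({
    "postcode":     ["EC2A 1AB", "SW1A 2CD", "M1 3EF"],
    "birth_year":   [1980, 1975, 1990],
    "sex":          ["F", "M", "F"],
    "claim_amount": [12000, 8500, 3100],
})

# Auxiliary dataset an attacker might hold, e.g. from a public register.
register = pd.DataFrame({
    "name":       ["A. Smith", "B. Jones"],
    "postcode":   ["SW1A 2CD", "EC2A 1AB"],
    "birth_year": [1975, 1980],
    "sex":        ["M", "F"],
})

# Joining on the shared quasi-identifiers re-attaches names to records
# that were supposedly anonymised.
reidentified = claims.merge(register, on=["postcode", "birth_year", "sex"])
print(reidentified[["name", "claim_amount"]])
```

The more quasi-identifiers two datasets share, the more likely a unique match, and with it a re-identification, becomes.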

Wynn recommended robust and meaningful data protection impact assessments (DPIAs) as a tool not just for delivering technical compliance under the General Data Protection Regulation (GDPR) – DPIAs are a legal requirement when planning to use AI – but also for helping businesses have the confidence to address the data risks that arise and to innovate with AI in a compliant way.

Future regulation

As policy makers and regulators look at how best to address growth in the development and use of AI, there is a need for care to be taken.

Particular attention needs to be given to how AI concepts are defined, from what is meant by AI to what is, and is not, subject to regulation. Loose wording can have unintended consequences: on the one hand, it can inadvertently tighten regulation where industry needs flexibility to innovate and support to enable economic growth; on the other, it can leave consumers exposed to risk.

Technology is there to help businesses and make manual processes more efficient. This was emphasised at our recent workshop by Martin Goodson, chief executive and chief scientist at Evolution AI and chair of the data science section at the Royal Statistical Society in the UK. He said Evolution AI had saved one financial services institution 100,000 hours in a single project concerning sanctions compliance by using AI solutions.

As specific AI regulation comes closer to reality, it is incumbent on policy makers and regulators to provide an environment that allows businesses in financial services to embrace the technology and use it as a force for good.
