Out-Law Analysis
08 Jul 2020
Addressing these considerations will help financial services businesses to effectively manage the well-recognised risks of unfair bias and discrimination, and to assess the level of contractual protection they should seek when engaging with AI suppliers.
There are due diligence steps firms can take, and contractual options open to them.
AI technologies are currently used across both front and back office operations, including monitoring user behaviour, recruitment, insurance decision making, credit referencing, loan underwriting, and anti-money laundering and fraud detection. AI is also being embraced in the capital markets: according to the IMF, two thirds of cash equity trading is now associated with automated trading.
Given the vast scope and scale of data entering financial services businesses, it is important to consider whether AI tools are making decisions that are not biased or skewed, in order to avoid legal claims, fines from regulators and deep reputational damage.
These are well-recognised risks. According to the European Banking Authority (EBA), "the use of AI in financial services will raise questions about whether it is socially beneficial [and] whether it creates or reinforces bias" and the Centre for Data Ethics and Innovation's AI barometer has reported that "bias in financial decisions was seen as the biggest risk arising from the use of data-driven technology".
Some firms have already faced scrutiny over AI-led decision making that has been perceived as biased against particular groups of people.
Definitions of bias differ and depend on the context in which they are used. The EBA has referred to bias as "an inclination of prejudice towards or against a person, object, or position". A European Commission technical definition describes bias as "an effect which deprives a statistical result of representativeness by systematically distorting it", while the Cambridge English dictionary defines bias as "the action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment". What these definitions have in common is the recognition that, in reality, bias can arise in AI systems inadvertently and unconsciously, as a downstream process.
AI systems are built on sets of algorithms that "learn" by reviewing large datasets to identify patterns, on which they are able to make decisions. In essence, they are only as good as the data they are fed, and there are a number of ways in which they can develop bias.
These forms of bias can lead to discrimination in financial services. The EBA has highlighted, as one example of such potential discrimination, the situation where a class of people who are less well represented in a training dataset receive less favourable, or more favourable, outcomes as a result of what an AI system has learned.
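As a simple illustration of how such disparities can be surfaced, the sketch below compares approval rates across demographic groups in a set of model decisions. The column names, the data and the tolerance used are illustrative assumptions rather than any regulatory standard; a large gap between groups may indicate that people who are under-represented in the training data are receiving systematically less favourable outcomes.

# Minimal sketch: comparing outcome rates across demographic groups in model decisions.
# The column names ("group", "approved") and the 20% tolerance are illustrative assumptions.
import pandas as pd

def approval_rate_gap(decisions: pd.DataFrame,
                      group_col: str = "group",
                      outcome_col: str = "approved") -> float:
    """Return the gap between the highest and lowest approval rates across groups."""
    rates = decisions.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Example with a small, made-up set of decisions in which group B is under-represented.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B"],
    "approved": [1,   1,   1,   0,   0,   0],
})

gap = approval_rate_gap(decisions)
if gap > 0.2:
    print(f"Approval rates differ by {gap:.0%} across groups - review for potential bias.")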
UK financial regulators have not yet provided detailed guidance on the steps they expect regulated firms to take but, along with other industry bodies, they have given an indication of what they would expect of firms engaging with AI solutions.
When procuring AI systems it is useful to assemble a team covering multiple disciplines. The UK's Office for Artificial Intelligence, for example, recommends requiring suppliers to assemble teams that could include individuals with domain expertise, commercial expertise, systems and data engineering capabilities, model development skills (for example in deep learning), data ethics expertise, and visualisation or information design skills.
A potential customer of an AI solution should also consider how diverse the supplier's programming team is and whether or not it undertakes relevant anti-bias and discrimination training. A diverse team draws on the perspectives of individuals of different genders, backgrounds and faiths, increasing the likelihood that decisions made on purchasing and operating AI solutions are inclusive and not biased.
Whether the supplier has an open and progressive culture that incentivises and encourages its developers to spot errors arising from the AI solution may also indicate that adequate processes to protect against bias are in place.
Regulators will also be keen to see that businesses have the appropriate oversight functions and controls in place. Firms should ask suppliers what controls and monitoring tools they have to ensure that new data entering the data pool is of high quality, and how this could be reported on and reviewed during governance meetings. Some businesses have developed tools aimed at determining whether a potential AI solution is biased or not.
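As an illustration of the kind of monitoring that could feed into such governance reporting, the hedged sketch below checks a new batch of data entering the pool for missing values and for how well each demographic group is represented. The "group" field name and both thresholds are hypothetical; a firm would need to set its own.

# Minimal sketch of a data-quality check on new data entering a training data pool.
# The "group" field and both thresholds are hypothetical; firms would set their own.
import pandas as pd

def data_quality_report(new_batch: pd.DataFrame,
                        group_col: str = "group",
                        max_missing: float = 0.05,
                        min_group_share: float = 0.10) -> dict:
    """Summarise missing values and group representation for governance review."""
    worst_missing = new_batch.isna().mean().max()                  # worst-affected column
    group_shares = new_batch[group_col].value_counts(normalize=True)
    return {
        "worst_missing_share": float(worst_missing),
        "missing_within_tolerance": bool(worst_missing <= max_missing),
        "smallest_group_share": float(group_shares.min()),
        "representation_within_tolerance": bool(group_shares.min() >= min_group_share),
    }

A report along these lines could be produced for each new batch of data and tabled at governance meetings for review.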
From a compliance perspective, organisations should document their approach to tackling bias and discrimination from the outset and keep this under review at each of the main stages of an algorithm's development and use.
Different levels of insight into the decision making process underpinning the solution will be required to reflect different needs. For example, the information reviewed by the board of directors would focus on assisting them to determine whether appropriate business outcomes are being achieved, whilst a technical analyst will need more detailed technical information to determine whether the coding and datasets are producing fair and accurate results. Guidance produced in the UK has stressed the importance of explainability of AI-driven decision making.
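To illustrate what these different levels of insight might look like in practice, the sketch below uses a hypothetical linear credit-scoring model: a board-level view reports only the overall score and outcome, while an analyst-level view breaks the score down into each feature's contribution. The model weights, feature names and applicant values are all assumptions made for the example.

# Minimal sketch: two levels of insight into a decision from a hypothetical linear credit model.
import numpy as np

feature_names = ["income", "existing_debt", "years_at_address"]
weights = np.array([0.8, -1.2, 0.3])       # illustrative trained coefficients
applicant = np.array([0.6, 0.9, 0.2])      # illustrative standardised applicant inputs

contributions = weights * applicant        # each feature's pull on the final score
score = contributions.sum()

# Board-level view: the business outcome being delivered.
print(f"Score {score:+.2f} -> {'approve' if score > 0 else 'refer for human review'}")

# Analyst-level view: which features drove the decision, and in which direction.
for name, value in sorted(zip(feature_names, contributions), key=lambda item: -abs(item[1])):
    print(f"  {name}: {value:+.2f}")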
Accurate, robust and contemporaneous record keeping is important to enable firms to prepare for potential disputes that could arise in the future.
Where AI tools make decisions based on customers' data, organisations will need to undertake an impact assessment of the AI technology and consider how the decision making process may impact their customers, particularly if they are vulnerable, and whether or not the decisions are transparent and explainable. The EBA for example has highlighted that "adequate scrutiny of and due diligence on data obtained from external sources" could be included in risk assessments.
The Financial Conduct Authority (FCA) has indicated that, on top of how transparent AI decision making is, directors need to consider "what the business outcome will be" when engaging with AI technologies.
Robo-advice, for example, has been seen as a low cost and highly efficient way of helping consumers who would benefit from financial advice, but are unwilling or unable to pay for it, to better manage their money and make more informed investment decisions. One consideration firms have had to address is that the customers who fall within this investment-advice gap include vulnerable people.
Suitability safeguards need to be applied to ensure that customers are protected and the right business outcomes are achieved.
More generally, a customer should review the supplier's AI experience in the market, and whether it has scaled AI models to meet customers' requirements while managing bias and discrimination risk at that larger scale.
While a significant amount of the risk presented by AI technologies cannot realistically be dealt with at a contractual level, some core issues can be addressed. Where a customer is buying development services for an AI solution there is a complex balance of risk factors which the customer and the supplier will need to negotiate. This is particularly the case where the AI technology is partially trained on customer or third party data.
There are some basic contractual options open to firms when considering bias risk.
While AI can assist in automating decision making processes and delivering cost savings, firms should carefully consider the AI tool being sourced and commit resources towards monitoring the solution to ensure that biased decisions are not being made.
Businesses will need to review and further understand who their customers are, what demographics they fall into and the social challenges they face in order to develop a transparent and accountable platform that drives good outcomes for customers.
Businesses should also consider engaging with collaborative industry initiatives to share best practice and knowledge on the development of AI. Given that the law in this area is likely to change in light of advances in AI technology, particularly around trust in AI, industry bodies such as UK Finance have advised that it is important to be aware of potential changes in legislation and to contribute to the dialogue in this area.
Bias in AI systems presents both a challenge and an opportunity for technology developers and business users: the challenge is to translate non-discriminatory human values into code; the opportunity is to develop AI tools which humans can trust. That trust should translate into greater uptake of AI tools by businesses across all sectors and provide opportunities for competitive advantage.
Hussein Valimahomed and Luke Scanlon are experts in AI in financial services at Pinsent Masons, the law firm behind Out-Law.