Last month, we hosted a panel event on the use of AI in Financial Services in conjunction with UK Fintech Week. Partner and leading AI expert Emma Wright led the panel in a varied and interesting discussion about the current state of play and the outlook for the future of AI in the financial services industry. Here we explore four key themes from that discussion in more detail.
1. High Risk AI Systems
The AI Act takes a broadly risk-based approach to the monitoring and governance of AI systems, with those deemed to be the highest risk banned completely, and those deemed to pose the least risk having the fewest additional compliance requirements.
Fintechs and other financial institutions should start reviewing and classifying the AI systems that they use ahead of the AI Act coming into force, as those categorised as “high risk” attract the most significant compliance burden. Whilst many of these requirements will be familiar to financial institutions used to dealing with a heavy regulatory load, there are requirements unique to the AI Act, such as informing the provider or distributor of risks and incidents, and keeping and monitoring automatically generated logs. Furthermore, for those companies operating within the financial services sector but falling outside the scope of much of the existing regulation and oversight, many of these requirements will be entirely new.
Fintechs and other financial institutions assessing creditworthiness, or carrying out pricing and risk assessments in life and health insurance, are the most likely to be utilising high risk AI systems. In addition, all companies should be mindful of deploying AI systems intended for use in recruitment or candidate selection, including placing targeted job adverts, evaluating candidates, and analysing and filtering job applications, as these may also be considered high risk.
2. Garbage in, garbage out
Whilst the AI Act focuses on the use case(s) of the relevant AI system when classifying risk, a point raised by all our panellists is that the use and output of any AI system is only as good as its input, in other words the data that has been used for its training and validation. An AI system can have an incredibly sophisticated algorithm, but if the input data is of poor quality then the output will likely be of poor quality as well.
Data can be of poor quality in a number of different ways: it may be inaccurate, inconsistent, outdated, irrelevant for the particular use, or incomplete. All of these factors can significantly impact the output of AI systems, leading to incorrect results, bias and discrimination. If the AI system is then further trained on the results it produces, this will serve to compound the problem.
This is obviously not unique to the financial services sector, but due to the significant regulatory framework already applicable to the sector, the sector’s size and importance, and the potential consequences of poor data input and therefore poor output, it is particularly pertinent.
3. Consumers
A key feature of much of the financial services industry is the vast number of consumers accessing and using key services every day and, consequently, the volume of consumer data that is stored and generated. For regulated firms, the FCA’s Consumer Duty is relevant when it comes to deploying AI systems. For example, the duty requires firms to take into account the different needs of their customer base, including vulnerable customers and those with protected characteristics. “One size fits all” AI systems will not necessarily allow for compliance with this, so firms will need to think carefully about how targeted use of AI can help them achieve this nuance – which also links to the issue of data quality discussed above.
When it comes to consumer personal data, firms are by now used to the requirements of data protection legislation. Fintechs and other financial institutions will need to think about how these requirements manifest when deploying AI systems – in particular, transparency obligations to consumers in terms of how their personal data is being used and for what purpose, allowing consumers to exercise their data subject rights, ensuring that impact assessments are completed where appropriate, and aligning the use of AI with the rules relating to decisions made by so-called “black box” algorithms.
4. Operational Resilience
As AI systems develop and mature, they are likely to become more embedded within the functions of the financial services sector, which will in turn become more reliant on those systems and on their creators and distributors.
The general trend with other IT products, services and systems, such as cloud, has been a movement away from developing in-house capabilities towards outsourcing to third parties, a trend which we are likely to see replicated with AI. In 2021, the Financial Policy Committee concluded that “the increasing reliance on a small number of [cloud and other critical providers] for vital services could increase financial stability risks in the absence of greater direct regulatory oversight of the resilience of the services they provide”, a statement which could equally be made in respect of the use of AI.
With outsourcing in financial services expected to increase, and regulators concerned that such outsourcing could become concentrated in the hands of a few big players in the market, a focus on operational resilience, not only within the financial services sector but also among the outsourced providers, is crucial. This is already playing out in a number of new regulatory proposals and pieces of legislation coming into effect, including the EU’s Digital Operational Resilience Act and the UK’s Critical Third Parties regime. Fintechs and financial institutions deploying and using AI systems, as well as those entities developing and distributing AI systems, will need to be cognisant of these requirements.
In order to receive updates from Harbottle & Lewis, and to receive invites for our remaining AI event series, please sign up here.