In January 2022, as part of the National AI Strategy, the UK Government launched the AI Standards Hub, a new initiative intended to position the UK as a leader in shaping global standards for Artificial Intelligence (AI). In response, the FCA states that it has advanced its regulatory techniques to both support the development of AI and maintain market integrity. The FCA has noted its intention to strike a balance between being sufficiently proportionate to allow firms to take fair advantage of developments in AI and suitably robust to ensure that innovations are deployed in a secure manner.
On 12 July 2023, Nikhil Rathi, the FCA's Chief Executive, delivered a speech at The Economist which made it clear that "AI can offer opportunity", but that it "must be clear where responsibility lies" when the use of AI services goes wrong. Given the FCA's increased focus on the use of AI by regulated firms, it is vital that firms are alive to (1) what the FCA expects of them and (2) how the FCA might take enforcement action in respect of breaches of regulatory rules resulting from the use of AI.
What the FCA expects of firms
In the wake of significant movements in the technology space, the FCA has raised its expectations of firms when it comes to adopting new ways of working through the use of AI. The FCA expects authorised firms to harness the opportunities that advanced technologies present in a safe and orderly way. The FCA is optimistic that AI can greatly benefit the financial industry by improving financial models, delivering accurate information to everyday investors, and tackling fraud and money laundering.
To assist firms with reaching this goal, the FCA has created two new tools which it claims support innovators by driving efficiency and reducing the regulatory burden on firms. The first is the Digital Sandbox, which provides access to, amongst other things, market data, solution development, academics and prototyping. The second is the Digital Front Door, which the FCA describes as a user-friendly experience aimed at simplifying the regulatory journey by digitising forms, evolving data processes and providing targeted innovation services.
The FCA reminds firms that accountability should be kept in mind when developing AI. When firms rely on critical third parties, for example cloud-based computing services, the FCA expects them to be clear where responsibility lies for protecting consumers. Firms are also expected to strengthen their defence mechanisms to keep pace with these advancements: for example, by accelerating investment in fraud prevention, minimising misinformation and acting as gatekeepers of data.
How might the FCA enforce this?
While the FCA does not regulate technology, it does regulate the effect and use of technology in financial services. The FCA says that it aims to maximise innovation and minimise risk, and that it has accordingly introduced new regulation and enhanced supervision tools to enforce its increased expectations.
Specifically, the FCA has stated that it will regulate firms designated as critical third parties (those that underpin financial services and can affect the stability of, and confidence in, the markets) by setting standards for their services, which include AI. Further, the FCA refers to both the Consumer Duty and the Senior Managers & Certification Regime as frameworks it can use to respond to innovations in AI. More generally, the FCA states that it has developed supervision technology to monitor portfolios and identify risky behaviours by investing in tech horizon scanning and synthetic data capabilities.
In response to the Government's call for the UK to be the global hub of AI regulation, the FCA will want to be seen to be actively regulating this new technological space. With that in mind, the FCA has produced a large amount of commentary regarding AI, including the following speeches: "Innovation, AI & the future of financial regulation"; "Building better foundations in AI"; and "AI: Moving from fear to trust". Whilst the FCA will want to be seen to be facilitating the use of new technology and to be proportionate in its oversight of it, it will need to balance that against its enforcement activities in order to mitigate the harms that can stem from the use of AI.