In its report Algorithmic Trading Compliance in Wholesale Markets (the 'Report'), published in February 2018, the FCA recognised that firms operating in wholesale markets were increasingly using algorithms for a range of purposes across their trading activity. In 2019, JPMorgan estimated that only about 10% of US equity trading was being carried out by traditional investors. Given the recent COVID-19 volatility in the markets and the widespread use of algorithms, firms would do well to consider the implications of their AI algorithm making the 'rational' choice to engage in market manipulation to maximise profits.
The FCA's concerns about the potential market abuse that an AI algorithm could cause can be seen in its recent Business Plans and communications. In light of COVID-19, the FCA has reiterated its expectation that 'firms should continue to take all steps to prevent market abuse risks…and [the FCA] will continue to monitor for market abuse and, if necessary, take action'. It is likely that the FCA has its eyes on certain actors in this area. Given current market conditions, firms should act now to avoid potentially being made an example of.
What is algorithmic trading?
The FCA defines algorithmic trading as 'trading in financial instruments which meets the following conditions: (a) where a computer algorithm automatically determines individual parameters of orders such as whether to initiate the order, the timing, price or quantity of the order or how to manage the order after its submission and (b) there is limited or no human intervention.'
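By way of illustration only, the toy sketch below (in Python, with entirely hypothetical names, signals and thresholds, none of which come from the FCA) shows what an algorithm meeting both limbs of this definition might look like: the algorithm alone determines whether to initiate an order and at what price and quantity, with no human intervention.

```python
# Illustrative sketch only. All names, signals and thresholds are
# hypothetical assumptions; this is not a real or recommended strategy.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Order:
    symbol: str
    side: str       # "buy" or "sell"
    price: float
    quantity: int


def decide_order(symbol: str, mid_price: float, signal: float) -> Optional[Order]:
    """Limb (a): the algorithm itself determines the order parameters.

    Limb (b): nothing here pauses for human review or approval.
    """
    if abs(signal) < 0.5:                                      # whether to initiate
        return None
    side = "buy" if signal > 0 else "sell"
    price = mid_price * (0.999 if side == "buy" else 1.001)    # price
    quantity = min(1000, int(abs(signal) * 100))               # quantity
    return Order(symbol, side, round(price, 2), quantity)
```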
The FCA's thinking
Data ethics, and specifically algorithmic decision-making, was a cross-sector priority for the FCA in its Business Plan 2019-2020 and continues to be a priority in its recently published Business Plan 2020-2021. In the latter Plan, the FCA re-emphasised its interest in AI and the importance of ensuring the safe, appropriate and ethical use of new technologies.
Readers may be interested in the words of Julia Hoggett, the FCA's Director of Market Oversight, who, speaking at the AFME 'Implementation of the Market Abuse Regulation' event in February 2019, said: "I can see a world where seemingly 'rational' AI, unconstrained and exposed to certain markets and data, would deem it entirely rational to commit market manipulation. Now, the FCA cannot prosecute a computer, but we can seek to prosecute the people who provided the governance over that computer."
Possible approaches to an FCA investigation
Who might be the target of an FCA investigation? Perhaps the people with algorithmic trading as their certified function? That certified function encompasses those involved in the deployment of the trading algorithm and those with significant responsibility for ensuring it complies with the firm's obligations. Have the appropriate people been certified? Should this include people from the first, second and third lines of defence? Or should responsibility 'roll up' to the top, to the designated senior manager, especially where that person's statement of responsibility spells out that they have ultimate regulatory responsibility? Could it be the firm as a whole, given the diffusion in decision-making?
One intriguing question concerns the AI aspect of the algorithm. If it is artificially intelligent and capable of learning on its own, does it not, to an extent, have a will of its own? If it is autonomously making decisions, does this affect its creators' responsibilities? Does it amplify the responsibility of those overseeing it in the market? If it is 'off the shelf', can the authorised firm that purchased it escape responsibility? And what if the authorised firm has modified it? These are some of the questions the FCA may have to untangle when, inevitably, it confronts such an investigation.
Consistent with its FCA Mission: Approach to Enforcement document dated April 2019, if the FCA were to detect 'serious misconduct' in a case of this kind, it seems likely that relevant senior individuals would be placed under investigation to determine where, if anywhere, responsibility lies. When investigating, the FCA may begin by considering statements of responsibility and management responsibilities maps.
Points for reflection
Building on the Report's findings and wider thinking, firms may wish to consider the following when reflecting on this topic:
- Development and testing (D&T)
  - D&T framework: Is there a clear methodology for D&T to ensure the algorithmic trading system behaves only as intended, complies with the firm's obligations, complies with the rules of the relevant trading venue(s), and does not contribute to disorderly trading? (A simple illustration of automated pre-deployment checks appears after this list.)
  - Sign-off: Has there been appropriate challenge by a range of objective, competent and informed parties prior to sign-off? What will the FCA read into this about a firm's culture?
  - Documentation and audit trail: Are the documentation and audit trail sufficient throughout the D&T process to illustrate why decisions were made and how they were tested?
- Governance and oversight
  - Senior management: Can senior management articulate the rationale for decisions made? Do they have suitable information, for example management information (MI), to assess the situation in an informed way? Has a senior manager been designated specific responsibilities in their statement of responsibility? If so, how are they evidencing their decision-making?
  - Role of compliance: Is compliance able to identify and reduce algorithmic trading risks? From a functional perspective, has compliance been involved at all stages? From a technical perspective, does compliance understand the technology well enough to challenge it constructively?
  - Other functions: Which other functions have been involved in governance and oversight? What will the FCA read into this about a firm's culture?
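As a purely illustrative example of the kind of automated pre-deployment check a D&T framework might include, the sketch below runs the hypothetical decide_order algorithm from the earlier sketch against a sweep of inputs and asserts that its orders stay within assumed venue limits. The module name, limits and thresholds are our own assumptions, not FCA or venue requirements.

```python
# Illustrative sketch only: simple pre-deployment checks. The limits and
# the trading_algo module (holding the decide_order sketch above) are
# hypothetical assumptions, not FCA or venue requirements.
from trading_algo import decide_order

MAX_ORDER_QUANTITY = 1000   # assumed venue size limit
PRICE_COLLAR = 0.05         # assumed +/-5% collar around mid price


def test_orders_respect_assumed_venue_limits():
    mid = 100.0
    for signal in (-2.0, -0.6, 0.0, 0.6, 2.0):   # sweep of market conditions
        order = decide_order("XYZ", mid, signal)
        if order is None:
            continue
        # "behaves only as intended": size and price stay within bounds
        assert 0 < order.quantity <= MAX_ORDER_QUANTITY
        assert abs(order.price - mid) / mid <= PRICE_COLLAR


def test_no_order_on_weak_signal():
    # the algorithm does not trade below its initiation threshold
    assert decide_order("XYZ", 100.0, 0.1) is None
```

Run under a test runner such as pytest, checks of this kind also help generate the documentation and audit trail contemplated by the questions above.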
Conclusion
The FCA's concerns about the potential market abuse that an AI algorithm could cause are apparent. Its expectations in this area are explicit. The FCA has placed emphasis on the interplay between technological developments and market abuse. It may not be too long before we see it taking action in this area.