In its initial report dated 18 September 2023, the UK's Competition and Markets Authority ('CMA') examined AI Foundation Models ('FM'/'FMs') and their potential effect on competition in the UK and worldwide, setting out its views on:
- How FMs are developed, the key inputs they require and how they are deployed today;
- The potential outcomes for competition in the development of FMs;
- The impact of FMs on competition in other markets and the potential outcomes for competition;
- The potential outcomes for consumers;
- The potential role for regulation in enabling positive development and outcomes;
- Proposed competition and consumer protection principles that will guide the development of the market; and
- The next steps for the CMA.
What are 'Foundation Models' and how are they developed?
FMs are large machine learning models trained on vast quantities of data at scale. They can be adapted to a wide range of tasks and can form the basis of AI products specialised for commercial and consumer use. OpenAI released GPT, one of the first large models built on the transformer architecture, in 2018, and models of this kind have since become known as FMs. The likes of Google, Meta and Microsoft have since gone on to develop and release their own FMs.
Developing an FM requires computing power, vast quantities of data, technical expertise and capital.
What effect could FMs have on consumers?
New AI services have the potential to enhance consumers' experience of products and services, but it is important for developers and businesses to ensure that the outputs consumers receive are reliable, accurate and fair. FM technology raises significant concerns relating to safety, security, copyright, privacy and human rights. These risks can be mitigated by focusing on model accuracy, developer accountability, informing consumers of risks, and clear plans for responding to harm caused by FMs.
To prevent harm from occurring, FM output accuracy must be monitored and standards must be imposed at a regulatory or governmental level. Currently, FMs can suffer from 'hallucinations': convincing but factually false outputs generated when a model relies on incorrect or insufficient information. According to the CMA report, competition in the market is the best way to ensure FM developers have an incentive to remove hallucinations and improve product quality.
From a regulatory perspective, existing mechanisms in the technology sector are not well suited to FM technologies. More complex, autonomous models create a tangled web of accountability, and developers could argue that the model, rather than its human creators, is at fault. The CMA believes consumer protections are most effective where accountability is at the forefront of policy.
What are the potential impacts of FM technologies on competition?
To maximise the potential of FMs in consumer and commercial contexts, continuous and effective competition among developers is needed to produce high-quality FMs for use in a variety of end-user products. The CMA considers the way to ensure this is to maintain multiple independent developers competing to produce exceptional models. In a competitive market, innovative firms have access to the resources they need to enter, expand and trade effectively, allowing experimentation with different business models and forms of monetisation. This includes providing FMs on an open-source basis, spurring further innovation even after the point of licence or sale.
However, if market entry or access to key inputs is restricted, fewer firms would be able to maintain competitive standards in their models, leading to market dominance and, eventually, oligopoly. This is likely to reduce incentives to innovate within AI and harm the UK's rich export market for emerging technologies.
The CMA also considers that existing firms of a certain size and experience (e.g. Amazon, Google, Microsoft) will have a head start over other developers due to lower input costs and brand recognition. Further, the need for funding and technical expertise to build complex models is significantly less of a hurdle for established multi-billion-dollar corporations with a history of working with emerging technologies.
The CMA's particular areas for concern are:
- the need for proprietary data to train FMs;
- the need for larger models;
- the need for a substantially innovative product;
- advantages held by established corporations; and
- challenges facing open-source models.
Regulation of FMs
The regulation of FMs will play an important role in protecting consumers. Such regulation will need to strike a balance between policy objectives, ensuring it is pro-innovation and achieves the best outcomes for consumers and the UK economy. According to the CMA, any form of regulation will need to be proportionate and targeted at specific risks in order to allow competition and innovation to grow.
The CMA already has a catalogue of existing powers to address any competition or consumer issues that may arise, such as taking action against businesses and individuals engaging in cartels or other anti-competitive behaviour, protecting consumers from unfair trading practices, and conducting investigations into entire markets where it considers there are competition or consumer issues.
Further protections will be available once the Digital Markets, Competition and Consumers Bill comes into force.
The CMA has yet to set out exactly what this regulation could look like; however, it has identified six competition and consumer protection principles that it considers can best guide the future development and deployment of FMs:
- Access – "ongoing ready access to key inputs"
- Diversity – "sustained diversity of business models, including both open and closed"
- Choice – "sufficient choice for businesses so they can decide how to use FMs"
- Flexibility – "flexibility to switch or use multiple FMs according to need"
- Fair dealing – "no anti-competitive conduct, including anti-competitive self-preferencing, tying or bundling"
- Transparency – "consumers and business are given information about the risk and limitations of FM-generated content so they can make informed choices"
Next steps for the CMA
The CMA plans to continue refining the principles laid out in the report and to take views from across industry. It hopes to shape these ideas into guidance that supports the full range of stakeholders in the AI space, and aims to publish an update in early 2024.
For more information on the key risks and mitigation steps which companies should consider before adopting generative AI solutions, take a look at our Report: Generative AI: Key risk and mitigation steps.