On 2 February 2025, Chapters I and II of the European Union’s AI Act took effect. The two key provisions now in force concern prohibited AI practices and AI literacy. This article explains their implications for both providers (developers) and deployers (users, other than in personal, non-professional contexts) of AI systems and suggests steps to take now to close any compliance gaps.
Businesses based outside the EU should note that the so-called ‘Brussels effect’ applies to the Act, meaning non-EU businesses implementing systems in the EU, serving EU-based users, or utilising AI outputs within the EU will still fall within its scope.
Prohibited AI practices under the EU AI Act
Article 5 of the EU AI Act lists AI practices deemed unacceptable due to their potential risks to EU values and fundamental rights. These should be read alongside the European Commission’s recently published draft guidelines. While not legally binding, the guidelines — which await formal adoption — will likely inform regulators’ and the courts’ interpretation of Article 5.
The legislation prohibits the placing on the market (by supplying a system in a commercial context, even if free of charge, for first distribution or first use in the EU), the putting into service (supplying a system for first use either by the relevant provider itself, including for in-house development, or by a deployer for its intended purpose) and the use of AI systems in the following contexts.
Harmful manipulation or deception
The Act bans AI systems from employing subliminal, manipulative or deceptive techniques that can or do distort human behaviour by impairing individuals’ informed decision-making, leading them to take a potentially harmful course of action they would not otherwise have taken. Provided there is a causal link between the AI’s subliminal techniques and the distortion of behaviour, the practice remains unlawful even if the user is made aware of the AI’s influence.
Banned practice example: Using AI-powered rapid image flashes to influence purchasing decisions.
Exploitation of individuals based on their age- or disability-related vulnerabilities or socioeconomic situation
This is banned if it distorts, or is likely to distort, people’s behaviour in a way that causes, or is reasonably likely to cause, them significant harm.
Banned practice example: Using an AI system that targets older individuals with unnecessary medical treatments or insurance policies.
Social scoring leading to unjustifiable detrimental treatment
The Act prohibits using AI to classify or evaluate people based on their social behaviour or personality traits if this results in their detrimental or unfavourable treatment in a disproportionate or unjustified way, or in contexts unrelated to that in which the relevant data was originally collected.
Banned practice example: A social welfare agency’s use of AI to estimate the likelihood of benefit fraud in a way that relies on characteristics inferred from irrelevant social contexts, such as the relevant beneficiary’s social media activity.
Predictive policing
The Act bans predicting the risk of someone committing a crime based solely on profiling or an assessment of their personality traits, except where the AI supports a human assessment that is already grounded in objective, verifiable facts linked to criminal activity.
Banned practice example: Deploying an AI system to predict tax offences based on profiles it builds by analysing characteristics such as number of children or place of birth, where no verifiable facts support the likelihood of criminal activity.
Facial image scraping
AI systems may not be used to create or expand a facial recognition database through untargeted scraping of facial images from CCTV footage or the Internet.
Banned practice example: Using AI to extract facial features from individuals’ photographs harvested from various social media platforms by an automated image scraper, and compiling these into mathematical representations for indexation and future comparison.
Emotion recognition in a workplace or educational setting
Using AI to identify or infer someone’s emotions in the workplace or an educational institution is prohibited, except for medical or safety reasons, such as improving accessibility for deaf or blind persons.
Banned practice example: A call centre’s AI-powered detection of anger in an employee’s voice during their interactions with customers.
Biometric categorisation to infer or deduce sensitive information
AI systems that categorise individuals on the basis of their biometric data in order to infer or deduce sensitive information, such as race, political or religious beliefs, or sexual orientation, are not permitted (except in certain law enforcement contexts), save that lawfully acquired biometric datasets may be labelled or filtered.
Banned practice example: Using an AI system to analyse photos posted on social media by an individual to infer their political beliefs so that they can be sent targeted political messages.
Real-time remote biometric identification for law enforcement in public spaces
The use of AI systems in this way for law enforcement purposes is only permitted if it is strictly necessary to:
- undertake a targeted search for missing persons or victims of certain crimes;
- prevent a specific, substantial and imminent threat to life or physical safety, or prevent a genuine terror threat; or
- find or identify a suspected criminal as part of an investigation or prosecution, or to enforce certain criminal penalties.

This prohibition applies only to the use of such systems; it does not extend to the placing on the market or putting into service of an AI system for this purpose.
Banned practice example: Using AI-based facial recognition to match the faces of individuals recorded on CCTV installed at a train station’s entrance against a general watchlist of individuals suspected of having committed a broad, non-specific range of historic crimes covering a decade.
Where a prohibition refers to using AI systems in a harmful way, it applies irrespective of whether the harmful effect was intended. Suppose, for instance, that a chatbot meant to promote healthier lifestyles began encouraging users to engage in excessive physical exertion without breaks. If a user were likely to follow that advice in a way that could lead to a heart attack (and would not otherwise have done so), this may well fall within the scope of the first prohibition, provided a causal link can be established between the chatbot’s manipulative techniques and the user’s altered behaviour. Stakeholders in the AI lifecycle should therefore keep in mind what could foreseeably happen, notwithstanding the original intention.
Each prohibition will apply only if specific conditions set out in the Act are fulfilled. In many cases, the Act’s recitals provide a helpful gloss. Recital 18 clarifies that physical states do not constitute emotions, so an AI system designed to detect pilots’ levels of fatigue to prevent aviation accidents would not, for example, fall within the scope of the emotion recognition ban. Businesses should not, therefore, assume the blanket application of a prohibition without examining the relevant conditions attached to it. At the same time, it is worth highlighting that an AI system which would have been prohibited were it not for an exception will most likely qualify as a ‘high-risk’ system and be subject to Chapter III of the Act, most of which takes effect on 2 August 2026.
It is noteworthy that these prohibitions took effect merely days before the Vice-President of the USA, J D Vance, cautioned against 'overly precautionary' regulation of AI, a sentiment that may have influenced the UK government's decision to follow the US's example in refraining from signing this year's AI Action Summit declaration. There are certainly grounds for criticising the EU AI Act's scope, given the restrictions it places on providers and deployers at a time when the full potential of AI-based technologies has yet to be realised. Nevertheless, the practices it prohibits are unlikely to prove controversial to the majority of EU citizens, insofar as they capture specific practices considered too invasive to be permissible. As such, the legislation aligns with the European Commission's broader objective of providing a consistently high level of protection for citizens' fundamental rights and gives businesses welcome clarity on what would contravene core values held across the Union.
Understanding AI literacy requirements in the EU AI Act
Article 4 of the Act requires providers and deployers to take measures to ensure, to their best extent, that their staff and other persons dealing with AI systems on their behalf (including consultants) have a 'sufficient level' of AI literacy.
This requirement applies to all AI systems, not just those considered ‘high risk’ under the Act, yet AI literacy remains a nebulous concept even now that Article 4 has entered into force. Recital 20 of the Act states that the aim is to enable stakeholders to make 'informed decisions', including by understanding how technical elements are applied during AI systems' development, which operational measures to implement when using them, how to interpret outputs appropriately and, for those affected by AI-based decisions, the ways in which they may be impacted.
This could prove an overwhelming task for businesses, since it entails not only ethical questions but also the societal, economic and political implications of the relevant AI systems. In addition, establishing what constitutes an objectively sufficient level of AI literacy for AI systems classified under the Act as limited or minimal risk (in other words, the majority of AI systems) is not straightforward, given the Act's omission of any clear methodology.
As such, the intended effect of Article 4 may well be hampered by this lack of clarity until, as is expected, guidance from the European Artificial Intelligence Board, potentially alongside codes of conduct drawn up by EU Member States, is published. In the meantime, businesses can consult a list of ongoing AI literacy practices collated (but not endorsed or approved) by the European Commission.
Enforcement and practical steps
The Act does not specify penalties for non-compliance with the AI literacy requirement, but false reporting to regulators on AI literacy measures could result in a fine of up to €7.5 million or 1% of annual worldwide turnover, whichever is greater.
Prohibited AI practices will incur the highest fines under the Act: up to the greater of €35 million or 7% of annual global turnover.
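By way of illustration only, the 'whichever is greater' mechanic can be expressed as a simple calculation. In the sketch below, the turnover figure is hypothetical; the fixed caps and percentages are those cited above.

```python
def max_fine(fixed_cap_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Return the applicable maximum fine: the greater of the fixed cap
    or the stated share of annual worldwide turnover."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Hypothetical business with EUR 600 million annual worldwide turnover
turnover = 600_000_000

# Prohibited practices: greater of EUR 35 million or 7% of turnover
print(max_fine(35_000_000, 0.07, turnover))  # 42,000,000 -> the turnover-based figure applies

# Incorrect or misleading reporting: greater of EUR 7.5 million or 1% of turnover
print(max_fine(7_500_000, 0.01, turnover))   # 7,500,000 -> the fixed cap applies
```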
Enforcement mechanisms under the Act take effect on 2 August 2025. Organisations should use the intervening period to take steps to ensure compliance with Articles 4 (AI Literacy) and 5 (Prohibited AI Practices). These might include the following:
- Making an inventory of all AI systems within the business, assessing their risks and benefits, and ensuring the ways they are used do not constitute prohibited practices. Remember to assess each use of an AI system, not just the system itself, as the same system could be used for distinct purposes (an illustrative inventory record is sketched after this list).
- Creating AI literacy resources such as training programmes and policies on responsible AI use. Implementing regular reminders (eg pop-ups before gaining access to an AI system) may improve compliance.
- Developing training programmes tailored to the relevant personnel. This may include base-level education for the whole organisation with more bespoke training for heavier users. Amending existing GDPR training infrastructure may help to minimise additional training time.
- Establishing AI governance policies to regulate AI system development and deployment, potentially assigning responsibility to an AI-focused compliance officer or committee.
- Contractualising critical legal requirements by requiring vendors of AI systems to warrant that they have undertaken the appropriate risk assessments or, if the business is supplying an AI system, by requiring an undertaking from the customer not to use the system in any way that would fall within the scope of Article 5.
- Ensuring transparency and accountability by documenting the purpose, data sources, use case and decision-making processes of each AI system used within the business including, where appropriate, informing the wider public about such use (eg by referring to any AI systems handling personal data in relevant privacy policies).
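For those who prefer a structured starting point, the following is a purely illustrative sketch of the kind of inventory record the steps above might produce. The field names, risk labels and vendor name are assumptions chosen for illustration; none is prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields only)."""
    name: str
    vendor: str
    purpose: str                      # the intended purpose of this particular use
    data_sources: list[str]
    deployed_by: str                  # the team using the system
    risk_category: str                # e.g. "prohibited", "high", "limited", "minimal"
    article_5_assessment: str         # why this use does not constitute a prohibited practice
    disclosed_in_privacy_policy: bool = False

inventory = [
    AISystemRecord(
        name="Customer-support chatbot",
        vendor="ExampleVendor Ltd",   # hypothetical supplier
        purpose="Answering routine customer queries",
        data_sources=["product FAQs", "order history"],
        deployed_by="Customer service team",
        risk_category="limited",
        article_5_assessment="No manipulative, exploitative, biometric or scoring functionality",
        disclosed_in_privacy_policy=True,
    ),
]
```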
As with the initial preparations for the entry into force of the GDPR, businesses that begin investing now in compliance with the EU AI Act will likely be well placed to navigate the burgeoning international landscape of AI regulation.