Mishcon de Reya page structure
Site header
Menu
Main content section
Artificial Intelligent

A review of the UK Government AI security guidance

Posted on 10 February 2025

The "Implementation Guide for the AI Cyber Security Code of Practice" is designed to assist stakeholders in the AI supply chain, including developers and system operators, in implementing the UK Government's voluntary Code of Practice for AI cyber security. It provides guidance on implementing security measures across various AI applications, such as chatbot apps, machine learning (ML) fraud detection systems, and large language model (LLM) platforms.  

We found it appropriate to review this guidance using traditional human expert analysis, complemented with the insights from our in-house AI tool, deReyAI. 

In short, the guidance is a very positive step towards improving and standardising the security of AI systems. Notably, it addresses aspects of responsible AI, such as copyright violations, bias, unethical or harmful use, and legal or reputational risks, which are often overlooked in wider cyber security approaches.  

AI security 

AI systems, particularly those involving ML, are subject to unique security vulnerabilities that are not typically present in traditional cyber security contexts. These vulnerabilities arise from the nature of AI systems which include components such as models that can recognise patterns in data without explicit programming. 

Sophisticated parties can exploit the vulnerabilities inherent in ML components. Doing so can cause the ML parts of larger AI systems to exhibit unintended behaviours in those AI systems. For example, it may affect the model's performance, cause it to allow unauthorised actions, or enable the extraction of sensitive information. 

This complex area needs specialised solutions. AI systems introduce new attack vectors, such as prompt injection attacks in LLMs or data poisoning, where attackers deliberately corrupt training data or user feedback. These methods are unique to AI systems and give rise to their own security considerations. 

However, although AI systems have unique vulnerabilities, the industry approach favours integrating AI software development into existing secure development and operations best practices. This is where the new guidance is likely to be most applicable.  

Applicability 

The guidance is a good starting point for implementing security in the AI lifecycle, and will provide a considerable 'leg up' to other teams. That said, there will now be a need to translate between this guidance, and more experience-based examples for what is possible in reality.  

From our experience of standards-based approaches, we believe there are some significant downsides to this form of guidance. It has been possible to be entirely standards compliant and still have poor security. It is also possible for a dogmatic focus on compliance to work against security outcomes, when being compliant becomes a focus in its own right. This is especially likely when compliance and security budgets are shared.  

AI security principles 

The guidance sets out 13 principles for AI security, mapped to the AI development lifecycle. Each principle explores related threats and risks, and provides example controls with a level of detail to allow implementation.  

These principles could fit well inside other frameworks, such as the Generative AI Framework for HM Government, with a single principle around security alongside guidance from the National Cyber Security Centre (NCSC) around secure AI system development.  

 

Principle 

Description 

Raise awareness  Educate stakeholders about AI-specific security threats and risks. 
Design for security  Ensure AI systems are designed with security considerations from the start - integrating security into the AI system's design phase. 
Threat evaluation  Evaluate threats and manage risks to AI systems. 
Human responsibility  Enable human oversight and responsibility for AI systems. 
Asset protection  Identify, track, and protect AI assets. 
Infrastructure security  Secure your infrastructure and access controls. 
Supply chain security  Secure your supply chain for AI systems. 
Documentation  Document your data, models, and prompts. 
Testing and evaluation  Conduct appropriate testing and evaluation of AI systems. 
10  Communication  Communicate with end-users and affected entities. 
11  Security updates  Maintain regular security updates, patches, and mitigations. 
12  System monitoring  Monitor your system’s behaviour for security compliance. 
13  Data and model disposal  Ensure proper data and model disposal. 

 

However, although a helpful starting point, it seems inevitable that the example controls set out in the current guidance will need further expansion, in particular, when they are implemented in reality, and feedback and effectiveness measurement is available. The controls currently proposed may also be difficult for smaller or less mature organisations, especially start-ups producing AI tools for consumption by clients.  

Implementation scenarios 

The guidance sets out four example scenarios. These are good examples of current use cases, and would be applicable to most supplier scenarios.  

Chatbot app 

This scenario involves an organisation using a publicly available LLM via APIs to develop a chatbot for both internal and customer use. Examples include a large enterprise using a chatbot for customer service, a small retail business handling online shopping queries, a hospital providing health advice, and a local council offering guidance on planning applications. 

ML fraud detection 

A mid-sized software company uses an open-access classification model, further trained with additional datasets, to develop a fraud detection system. This system focuses on identifying fraudulent financial transactions without linking decisions to personal characteristics. 

LLM provider 

A tech company develops a new multimodal LLM capable of understanding and generating text, audio, and images. This model is provided as a commercial API for applications such as virtual assistants and media generation. 

Open-access LLM 

A small organisation develops an LLM for specific use cases, such as legal research or providing localised advice to farmers. The organisation plans to release it as open-access and monetise through support agreements. 

These scenarios overlap with those in other guidance, such as those from the generative AI guidance, which may provide more detail. 

Wrapping up 

This guidance feels like a move from asking "what is needed" to the start of establishing "how it can be achieved". There is still a need for further practical resources, training and real-world examples, but the 13 principles provide a comprehensive framework for ensuring the security and ethical deployment of AI systems.  

By addressing a wide range of potential threats and offering detailed measures for mitigation, the document serves as a valuable resource for developers, system operators, and other stakeholders involved in the AI supply chain. The emphasis on continuous monitoring, regular updates, and proactive risk management underscores the dynamic nature of AI security and the need for ongoing vigilance. 

Moreover, the guidance aligns with international standards and regulatory requirements. As such, it offers a robust foundation for organisations seeking to comply with the evolving legal and ethical standards in this area. The inclusion of practical examples and scenarios further enhance the document's utility, allowing stakeholders to contextualise the principles within their specific operational environments. This approach not only facilitates better understanding, it also encourages the adoption of best practices across diverse sectors. 

Ultimately, the guidance on the 13 principles represents a significant step forward in promoting responsible AI development and deployment. By fostering a culture of security and accountability, it helps build trust in AI technologies and ensures they are used in ways that benefit society as a whole. As AI continues to evolve, adherence to these principles will be crucial in navigating the complex landscape of AI ethics and security. 

 

How can we help you?
Help

How can we help you?

Subscribe: I'd like to keep in touch

If your enquiry is urgent please call +44 20 3321 7000

Crisis Hotline

I'm a client

I'm looking for advice

Something else