Mishcon de Reya

Navigating the AI landscape: A guide for charities on opportunities, risks and compliance

Posted on 7 October 2024

Generative AI (GenAI), as popularised by ChatGPT, is becoming an ever more essential assistant in people's everyday lives. As the use of GenAI grows both at home and in the workplace, almost every organisation must assess the opportunities, evaluate its current usage, and ensure that this use is responsible and as low risk as possible. This includes charities. Indeed, the Charity Commission has published an article discussing the use of AI amongst charities and the importance of being aware of the opportunities and risks involved. GenAI is likely to present the biggest opportunities and risks to charities, and so it is the focus of this article. 

At the intersection of charity law and regulation and the evolving AI regulatory landscape, charity trustees must ensure that they comply with their legal duties when putting innovation into practice, as well as with other relevant laws and regulations. This is particularly challenging at present, given the uncertainty that a change in government brings. The King's Speech, which set out Labour's legislative plans for the current session of Parliament, noted that the Government would be introducing an AI Bill, but the detail of this is awaited. With the EU AI Act now in force (see our AI resource centre for details of the implementation timeline), it will be interesting to see how the UK approaches the question of AI regulation. 

This article, which has been jointly written by our charity lawyers and our AI lawyers, summarises the opportunities of GenAI adoption for charities, as well as the key risks and how charities can best avoid or mitigate them.   

Opportunities  

According to the Charity Commission, over 50% of charities are either already using AI tools for certain tasks, or have plans to do so in the future. This is unsurprising considering the myriad opportunities AI presents for charities, such as: 

  • Streamlining fundraising processes - GenAI could help with preparing a first draft of fundraising bids (although it should never be relied on to write such bids alone) and with generating ideas for fundraising strategies. 
  • Enhanced marketing - Charities could use AI to help draft and strategise communications in order to maximise engagement with their campaigns.  
  • Data analysis - GenAI could help with the analysis of data obtained during research or campaigns, which in turn could help charities efficiently measure their impact and help with taking data-driven decisions.  
  • Time saving - From automating low-risk written content to deploying chatbots to relieve pressure on staff, AI could take on basic but time-consuming tasks. In turn, this could alleviate pressure on busy staff and free up time for strategic work or more complex operational matters. 

However, when considering using GenAI tools for the above tasks, charities should be conscious of the potential risks that arise and ensure that they take appropriate steps to comply with existing laws as set out in the next section.  

Key risks 

Whilst GenAI provides exciting opportunities for charities, using it requires careful planning and assessment of risk.  

Even though risk is an everyday part of charitable activity, as covered in the Charity Commission's guidance on charities and risk management, charity trustees should regularly review and assess the risks faced by their charity in all areas of its work and plan for the management of those risks. Risks are likely to increase depending on the level of use of, and investment in, GenAI. 

In particular, charities should be mindful of:  

  • Furthering the charity's purposes - For each charity trustee, the overriding duty (or guiding principle) is to promote the charity's purposes. Trustees might choose to adopt GenAI either to directly further those purposes or to bring about efficiencies in certain services, ultimately freeing up more time and resource to do more of what the charity was set up to do. Charity trustees also have a duty to manage the charity's resources responsibly, which covers aspects like mitigating risks and only using resources to further the charitable purposes. Understanding how those duties would play out in practice should be central to the use of GenAI, with the decision kept under review to ensure that its adoption continues to further the charity's purposes. 
  • Compliance with data protection laws - Where personal data is processed by, or indeed in the creation of, a GenAI system, the person building and/or operating the system will be directly subject (in the UK) to the UK GDPR, including its core requirements of fairness, lawfulness and accountability. There are regulatory investigations and legal claims on foot in the EU and the US potentially striking at the heart of these issues, although – as yet – the regulatory and litigation risks in the UK seem less prominent. Furthermore, there is a general prohibition on solely automated processing which results in a decision that has legal effects on, or significantly affects, a data subject. This prohibition can be avoided in certain strictly prescribed situations, but there may still need to be provision for human intervention or review. In any case, the use of GenAI for such purposes would undoubtedly necessitate a "data protection impact assessment" (or "DPIA") under Article 35 of the UK GDPR. The Information Commissioner's Office ("ICO") has made clear that the types of "innovative technologies" which can present a high risk to data subjects (and thus require a DPIA) include "artificial intelligence, machine learning and deep learning". 

    Some of the other risks discussed below will also, inevitably, raise data protection issues, particularly around biased or discriminatory outcomes, and around accuracy. 
  • IP infringement risk - There is uncertainty about ownership of any IP rights that may arise in AI-generated works, and also about the potentially infringing use by the developers of GenAI tools of unlicensed content in the training of their models. For users of GenAI tools, such as charities, there are also risks in relation to potentially infringing outputs. The issues around training data are already being litigated in a number of jurisdictions including the US, the UK and Germany, and that litigation will also include an assessment of allegedly infringing outputs. Charities should be mindful of the intellectual property complexities which arise on using the outputs of AI-generated content and ensure that they put in place policies that take account of any potential infringement of third-party IP rights or breaches of the terms and conditions of use of third-party content. We discuss these issues in more detail in our Client Guide on GenAI and IP, and we are also closely monitoring developments in the case law and at a policy level in our GenAI and IP tracker. 
  • Bias and/or discrimination - GenAI systems may reflect human bias in several ways, for example in the data sets that the GenAI system is trained on, or in the design and development of the algorithm itself. Bias can occur at any point in a GenAI system's lifecycle, and could lead to GenAI systems producing potentially discriminatory outputs in real-world situations. For example: 
    • Charities' inadvertent use of biased and potentially discriminatory GenAI systems could risk breaching the Equality Act 2010 (if it results in unlawful discrimination against individuals based on their protected characteristics), and in turn give rise to trustees breaching their duty to comply with the law. Biased systems could also give rise to reputational damage and may result in regulatory scrutiny. See our article ("Addressing bias in AI systems through the AI Act") for more on this issue and a breakdown of how the EU AI Act incorporates steps to help mitigate the risk of AI bias. 
  • Hallucinations and accuracy - A significant risk associated with the use of GenAI is the potential for 'hallucinations' or the generation of false or misleading information. Unlike human judgement, which is capable of discerning nuance and broader context, GenAI systems may create content that sounds plausible but is actually inaccurate, an amalgamation of true and false elements, or entirely fabricated. This can be particularly problematic when charities rely on AI for data analysis, creating content, or making decisions that could impact their beneficiaries or reputation. To mitigate these risks, it is crucial for charities to implement robust validation processes, ensuring that any AI-generated content or decisions are subject to rigorous human review. This safeguarding step is essential to maintain the integrity of the charity's work and uphold the trust of stakeholders. 

How to mitigate risks  

Each charity using, or considering using, GenAI tools should think about the specific benefits and risks applicable to its operations and activities. Undertaking a comprehensive risk assessment, and reviewing it regularly, would be beneficial. It should also help trustees to understand how to take more informed risks in this dynamically changing area. 

Whilst charity trustees are not expected to be experts in GenAI, taking advice from experts can help to demonstrate compliance with their legal duties. Trustees should also remember that whilst AI-generated content could support more informed decision-making and facilitate the completion of certain services and tasks, trustees are ultimately responsible for all decisions that relate to the management of the charity. 

Some key ways to stay ahead of the curve, whilst helping to mitigate risks to your charity include: 

  • Instilling "AI compliance by design" through governance and accountability. Clearly define the use case for adopting GenAI in each specific instance and assess whether using GenAI is necessary, proportionate and in line with the charity's purposes, or whether an alternative should be sought. Implementing governance processes to this effect (such as a GenAI usage policy, see below) will help to ensure that the relevant factors are considered before decisions are made. Trustees are unlikely to be able to fully automate their systems whilst complying with their duties – governance structures should be in place to guide them as to which decisions can and cannot be delegated to GenAI, and proper records should be kept to show that they have complied with their duties, particularly their duty of prudence, their duty to act with reasonable skill and care and, of course, their duty to take into account all relevant factors when making decisions as trustees. 
  • Checking the terms of use of the GenAI solution. Before using a GenAI solution, you should read the terms and conditions that apply to its use. These contracts are often quite supplier-friendly and subject to broad disclaimers and wide carve-outs from supplier-provided indemnities, which should be carefully considered prior to accepting any terms.  
  • Checking the data protection, security and secondary use of "user data". Data provided by your charity to the GenAI model (for example, in prompts) could end up being used for training purposes. If this includes personal data or confidential information, there is a major concern: your charity's and/or employees' information could be used to respond to a third party's prompt, exposing it to the public. Internal operational measures should be taken to prevent personal data or confidential information from being accidentally disclosed. 
  • Putting a GenAI usage policy in place and keeping it under regular review. This would typically cover aspects such as: 
    • who may use GenAI in the charity, and when it may or may not be used;  
    • the requirement for human oversight and regular review of automated decisions, including checking for hallucinations, accuracy of outputs and plagiarism of third party content;  
    • adopting "data protection by design and default", involving DPIAs and refraining from inputting personal data into a GenAI system;  
    • confidentiality, including refraining from inputting charity confidential information;  
    • guidance on inputting third party material into the GenAI system; 
    • clear signposting if works generated by AI are being used; and  
    • some form of statement outlining how the charity proposes to minimise or eliminate any biases within its GenAI use. 

Charities may also wish to consider whether other policies or documents should make reference to the charity's use of GenAI.  

Whilst it is exciting to see charities benefit from the transformative potential of GenAI, it should only be used responsibly and in a way that furthers the charity's purposes. Ultimately, trustees are accountable for their decisions and can only delegate certain aspects of their roles and responsibilities, with relevant safeguards in place. The line between GenAI facilitating decision-making and substituting for it can be finely balanced, and trustees should think carefully about the risks involved and consider seeking specialist advice. 

If you would like details on how we might be able to support your charity with putting a GenAI usage policy in place and with mitigating the above risks, please contact Kieran John (from our Charities & Social Ventures team) and Anne Rose (an Artificial Intelligence and Machine Learning Lawyer).  

You might also like to visit our AI resource centre for our suite of key resources, practical handbooks and guides (such as "Generative AI: Key risk and mitigation steps for businesses") that cover the legal implications of using GenAI and the practical steps to make the most of the opportunities. 
