The AI Safety Summit held at Bletchley Park on 1 and 2 November 2023 has officially concluded, culminating in the signing of the world-first Bletchley Declaration by the 28 nations in attendance, and a general agreement that an international, cooperative approach should be adopted to manage AI risk. Four key risks were discussed (one being the challenges associated with AI misuse, which we discussed in our article 'Using Artificial Intelligence in Cybersecurity Technologies: Total Defence or Best Defence?') and four key areas were identified through which change could be effected. Read on to learn more about the risks discussed and the actions the delegates deemed necessary.
Risks
Risks to global safety from frontier AI misuse
The latest AI systems, such as GPT-4, can be used to cause harm, with the potential for disruptive activity extending far beyond traditional concerns such as phishing attacks. Given the potential for AI technologies to be used in the development of destructive instruments like biological weapons, they present a risk to physical safety as well as to data, economic and social safety. Current safeguards are unlikely to remain suitable given the speed of advancement. Improving the understanding of AI tools, engaging in large-scale collaboration between developers, Governments, academics and researchers, and conducting rigorous testing prior to release will all be required.
Risks from unpredictable advances in frontier AI capability
AI's success is largely a result of its ability to analyse large datasets at superhuman speed, using 'neural networks' to recognise patterns and behaviours in input data and perform specified tasks, such as identifying abnormalities in user behaviour to determine whether a threat exists. As these models grow with expanding datasets, the risk of deviation increases; where that deviation is significant, it could present risks that outweigh the benefits in industries such as science or healthcare. Monitoring AI tools will, therefore, be paramount in catching these deviations before they become a problem, as the sketch after this paragraph illustrates.
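As a minimal, purely illustrative sketch of the pattern recognition and monitoring described above, the example below trains a simple anomaly detector on synthetic 'user behaviour' data and checks how often sessions are flagged. The feature choices, thresholds and use of an isolation forest (a simpler model standing in for a neural network) are assumptions made for brevity, not anything specified at the Summit.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical "user behaviour" features: logins per day, MB downloaded.
normal_sessions = rng.normal(loc=[5.0, 50.0], scale=[1.0, 10.0], size=(1000, 2))

# Fit the detector on behaviour assumed to be normal.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

# predict() returns 1 for normal-looking sessions and -1 for anomalies.
new_sessions = np.array([[5.0, 52.0],      # typical activity
                         [40.0, 5000.0]])  # unusually heavy activity
print(detector.predict(new_sessions))      # expected: [ 1 -1]

# Simple monitoring: if the flag rate on fresh data drifts well above this
# baseline, escalate to human review rather than acting automatically.
baseline_rate = (detector.predict(normal_sessions) == -1).mean()
print(f"baseline flag rate: {baseline_rate:.2%}")
```

The monitoring step at the end is the point of the example: deviation is caught by comparing behaviour against an agreed baseline, not by trusting the model's output in isolation.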
Risks from loss of control over frontier AI
It is commonly, but incorrectly, assumed that AI tools can automate tasks without any need for human input and monitoring. Failure to put human controls in place increases the risk of weaknesses being exposed and hallucinations becoming the norm. Over time we may see these issues better mitigated; however, machines should not be relied upon to make certain decisions, and human controls should be a requirement, as illustrated in the sketch below. Compliance regimes should be implemented to monitor use cases and slow development to minimise the risk of malicious misuse.
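The following is a minimal sketch of one such human control: a gate that lets low-risk actions run automatically but requires a person to sign off on anything above a risk threshold. The threshold, action names and approval mechanism are all hypothetical, introduced only to make the idea concrete.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (high impact), assigned upstream

RISK_THRESHOLD = 0.3  # hypothetical cut-off above which a human must sign off

def execute(action: ProposedAction, approver) -> bool:
    """Run low-risk actions automatically; escalate the rest to a person."""
    if action.risk_score <= RISK_THRESHOLD:
        return True  # safe to automate without intervention
    # High-impact decisions are never taken by the machine alone.
    return approver(action)

def console_approver(action: ProposedAction) -> bool:
    reply = input(f"Approve '{action.description}'? (y/n) ")
    return reply.strip().lower() == "y"

print(execute(ProposedAction("rotate a log file", 0.05), console_approver))    # runs automatically
print(execute(ProposedAction("block a user account", 0.80), console_approver)) # waits for a human
```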
Risks from the integration of frontier AI into society
AI has become an integral part of daily life for many, on both a personal and a professional scale, which has compromised our privacy and online safety and exposed us to technological crime. Safety protocols need to be observed, and research and testing need to take place, to ensure that these tools strike an appropriate balance between risk and safety.
Actions
What should frontier AI developers do to scale capability responsibly?
Traditionally, experts have sat in two camps: (a) allow technology to scale at pace; or (b) take active steps to slow its advancement. Both approaches have merit, and delegates to the AI Safety Summit settled on a risk-based approach. Whilst development is encouraged, it should be performed responsibly, suitable compliance regimes should be in place, and secure-by-design principles should be adhered to. Current practices at developer level are not sufficient, so stringent protocols implemented at Government level will need to be observed.
What should national policymakers do in relation to the risks and opportunities of AI?
Focusing on current issues alone is insufficient; a forward-thinking approach is required to ensure that the benefits of AI tools can be balanced against the risks whilst keeping pace with the speed of development. Whilst regulation is often seen as a hindrance to innovation, policymakers will need to ensure that any legislative or regulatory governance takes into account AI's borderless nature, and the risk that siloed regulation impedes safe and responsible development by disproportionately balancing individuals' rights against progress. A global framework should assist individual nations in ensuring cohesion across the board at policy level.
What should the international community do in relation to the risks and opportunities of AI?
Inconsistent approaches to safeguarding against AI risk are likely to stifle development and could result in providers withdrawing services or local presence, with a knock-on effect on local economies and consumer choice. Over the course of the next 12 months, the challenge for the international community will be to:
- collaborate and develop a shared understanding of capability and risk;
- coordinate approaches to streamline the research and evaluation of AI;
- narrow the global inequality gap through international collaborations; and
- ensure national objectives and initiatives are complementary and focussed as opposed to adversarial.
What should the scientific community do in relation to the risks and opportunities of AI?
Manipulation of AI capabilities by bad actors remains a significant threat, prompting calls for a multi-disciplinary international effort to consider the potential impact. Features such as non-removable "off" switches, robust architectures, and input and output controls to limit deviations and corruption will need to be considered; a simple sketch of the off-switch idea follows below. Testing protocols will need to be designed and recommended so that policymakers can confidently apply their parameters when determining whether or not a product is safe to bring to market.
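As a final illustration, the sketch below shows the control-flow idea behind an "off" switch combined with an output control: the wrapper halts the model permanently the first time its output deviates beyond agreed bounds. A genuinely non-removable switch would require hardware or infrastructure enforcement; the bounds and the stub model here are hypothetical.

```python
class GuardedModel:
    def __init__(self, model, lower: float, upper: float):
        self.model = model
        self.lower, self.upper = lower, upper  # acceptable output range
        self.halted = False

    def halt(self) -> None:
        """The off switch: once tripped, the model gives no further output."""
        self.halted = True

    def __call__(self, x: float) -> float:
        if self.halted:
            raise RuntimeError("model halted by off switch")
        y = self.model(x)
        # Output control: trip the switch on out-of-bounds behaviour.
        if not (self.lower <= y <= self.upper):
            self.halt()
            raise RuntimeError(f"output {y} outside [{self.lower}, {self.upper}]")
        return y

guarded = GuardedModel(model=lambda x: x * 2, lower=0.0, upper=10.0)
print(guarded(3))    # 6 -- within bounds, passes through
try:
    guarded(50)      # 100 -- deviates, trips the switch for good
except RuntimeError as err:
    print(err)
```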