Mishcon de Reya

EU AI Act & UK AI Principles Navigator

Principle 4: Accountability and Governance

Technical standards to consider
  • ISO/IEC 38507:2022 – Governance implications of the use of AI by organizations
  • ISO/IEC 25059 – Quality model for AI systems
  • ISO/IEC DIS 42006 – Requirements for bodies providing audit and certification of artificial intelligence management systems
  • ISO/IEC CD 42005 – AI system impact assessment
Notes
  1. The developer/employer's internal accountability and governance framework should be detailed, specifying who has ultimate responsibility for the AI system. It should be clear who within the organisation will be accountable if the AI system fails or produces adverse outcomes for its users.
  2. The AI System should be consistent with the ethical principles, values, standards, policies, and/or code of conduct of the operator.
  3. The elements of the training and development "supply chain" that have been outsourced should be known. Third parties should be subject to the same levels of quality control as those of the operator.
  4. The relationship between the operator and end users once the developed AI system reaches the market should be defined.
  5. The extent to which the AI System relies on third-party data/systems input should be known, along with the accountability of those third-party dependencies.
  6. Mechanisms should be in place to enable end users to comment on the system, and those comments should be analysed and monitored.
  7. Standards of good governance should be met.
  8. There should be adequate human oversight. Oversight arrangements range from least to most involved:
    1. Humans are responsible for troubleshooting when system alerts are triggered, but do not otherwise oversee system operation.
    2. Humans configure settings that alert the operator when a certain failure threshold is reached; otherwise the system runs automatically for most uses.
    3. System outputs are open to human intervention; absent any intervention, the system proceeds.
    4. System outputs require active human sign-off before any action is taken based on them.
  9. It should be possible for outputs to be disregarded, overridden, or reversed.
  10. Developers and employers should assess whether these measures enable users to understand the relevant capacities of the AI system, to monitor its operation to identify and remedy anomalies and unexpected performance, and to be aware of the risks of over-relying on the AI system's outputs.
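The four oversight levels in note 8, together with the requirement in note 9 that outputs can be disregarded or overridden, can be sketched as a simple output gate. This is an illustrative sketch only: the `OversightMode` names, the `OutputGate` class, and the failure-threshold default are assumptions for the example, not terms drawn from the EU AI Act, the UK principles, or any of the ISO standards listed above.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class OversightMode(Enum):
    """Illustrative labels for the four oversight levels in note 8."""
    ALERT_TROUBLESHOOT = auto()   # 8.1: humans troubleshoot on alerts only
    THRESHOLD_ALERT = auto()      # 8.2: operator alerted past a failure threshold
    HUMAN_CAN_INTERVENE = auto()  # 8.3: output proceeds unless a human intervenes
    HUMAN_MUST_APPROVE = auto()   # 8.4: output needs active human sign-off


@dataclass
class OutputGate:
    mode: OversightMode
    failure_threshold: float = 0.1  # assumed default, purely illustrative
    failure_rate: float = 0.0
    alerts: list = field(default_factory=list)

    def release(self, output, approved=False, intervened=False):
        """Decide whether an output may proceed under the configured mode.

        Returns the output, or None when it is held back -- reflecting
        note 9, under which outputs can always be disregarded or overridden.
        """
        if (self.mode is OversightMode.THRESHOLD_ALERT
                and self.failure_rate > self.failure_threshold):
            self.alerts.append(
                f"failure rate {self.failure_rate:.0%} exceeds threshold")
        if self.mode is OversightMode.HUMAN_MUST_APPROVE:
            return output if approved else None
        if self.mode is OversightMode.HUMAN_CAN_INTERVENE and intervened:
            return None
        return output
```

Under `HUMAN_MUST_APPROVE`, an output is withheld until a human actively signs it off; under `HUMAN_CAN_INTERVENE`, it proceeds unless a human steps in. This mirrors the escalation from automated operation with alerting to mandatory human sign-off described in the notes.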