(65)
The risk-management system should consist of a continuous, iterative
process that is planned and run throughout the entire lifecycle of a high-risk AI system.
That process should be aimed at identifying and mitigating the relevant risks of AI systems on
health, safety and fundamental rights. The risk-management system should be regularly reviewed
and updated to ensure its continuing effectiveness, as well as the justification and
documentation of any significant decisions and actions taken subject to this Regulation. This process should
ensure that the provider identifies risks or adverse impacts and implements mitigation measures
for the known and reasonably foreseeable risks of AI systems to health, safety and
fundamental rights in light of their intended purpose and reasonably foreseeable misuse,
including the possible risks arising from the interaction between the AI system and the
environment within which it operates. The risk-management system should adopt the most
appropriate risk-management measures in light of the state of the art in AI. When identifying
the most appropriate risk-management measures, the provider should document and explain the
choices made and, when relevant, involve experts and external stakeholders. In identifying the
reasonably foreseeable misuse of high-risk AI systems, the provider should cover uses of AI
systems which, while not directly covered by the intended purpose and provided for in the
instructions for use, may nevertheless be reasonably expected to result from readily predictable
human behaviour in the context of the specific characteristics and use of a particular AI
system. Any known or foreseeable circumstances related to the use of the high-risk AI system in
accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which
may lead to risks to health and safety or fundamental rights, should be included in the
instructions for use provided by the provider. This is to ensure that the deployer is
aware of them and takes them into account when using the high-risk AI system. Identifying and
implementing risk mitigation measures for foreseeable misuse under this Regulation should not
require specific additional training for the high-risk AI system by the provider to address
foreseeable misuse. Providers are, however, encouraged to consider such additional training
measures to mitigate reasonably foreseeable misuse as necessary and appropriate.