(96)
In order to efficiently ensure that fundamental rights are protected,
deployers of high-risk AI systems that are bodies governed by public law or private entities
providing public services, as well as deployers of certain high-risk AI systems listed in an
annex to this Regulation, such as banking or insurance entities, should carry out a fundamental
rights impact assessment prior to putting those systems into use. Services that are important
for individuals and of a public nature may also be provided by private entities. Private
entities providing such public services are linked to tasks in the public interest, such as in
the areas of education, healthcare, social services, housing and the administration of justice.
The aim of the fundamental rights impact assessment is for the deployer to identify the
specific risks to the rights of individuals or groups of individuals likely to be affected and
to identify the measures to be taken should those risks materialise. The impact assessment
should be performed prior
to deploying the high-risk AI system, and should be updated when the deployer considers that any
of the relevant factors have changed. The impact assessment should identify the deployer’s
relevant processes in which the high-risk AI system will be used in line with its intended
purpose, and should include a description of the period of time within which, and the
frequency with which, the system is intended to be used, as well as of the specific categories
of natural persons and groups who are likely to be affected in the specific context of use.
The assessment should also include
the identification of specific risks of harm likely to have an impact on the fundamental rights
of those persons or groups. While performing this assessment, the deployer should take into
account information relevant to a proper assessment of the impact, including but not
limited to the information given by the provider of the high-risk AI system in the instructions
for use. In light of the risks identified, deployers should determine measures to be taken in
the case of a materialisation of those risks, including for example governance arrangements
in that specific context of use, such as arrangements for human oversight according to the
instructions of use or, complaint handling and redress procedures, as they could be instrumental
in mitigating risks to fundamental rights in concrete use-cases. After performing that impact
assessment, the deployer should notify the relevant market surveillance authority. Where
appropriate, to collect relevant information necessary to perform the impact assessment,
deployers of high-risk AI systems, in particular when AI systems are used in the public sector,
could involve relevant stakeholders, including the representatives of groups of persons likely
to be affected by the AI system, independent experts, and civil society organisations in
conducting such impact assessments and designing measures to be taken should the risks
materialise. The European Artificial Intelligence Office (AI Office) should
develop a template for a questionnaire in order to facilitate compliance and reduce
the administrative burden for deployers.