(61)
Certain AI systems intended for the administration of justice and democratic
processes should be classified as high-risk, considering their potentially significant impact on
democracy, the rule of law, individual freedoms as well as the right to an effective remedy and
to a fair trial. In particular, to address the risks of potential biases, errors and
opacity, it is appropriate to qualify as high-risk AI systems intended to be used by
a judicial authority or on its behalf to assist judicial authorities in researching and
interpreting facts and the law and in applying the law to a concrete set of facts. AI
systems intended to be used by alternative dispute resolution bodies for those purposes should
also be considered to be high-risk when the outcomes of the alternative dispute resolution
proceedings produce legal effects for the parties. The use of AI tools can support the
decision-making power of judges or judicial independence, but should not replace it: the final
decision-making must remain a human-driven activity. The classification of AI systems as
high-risk should not, however, extend to AI systems intended for purely ancillary administrative
activities that do not affect the actual administration of justice in individual cases, such as
anonymisation or pseudonymisation of judicial decisions, documents or data, communication
between personnel, or administrative tasks.