(54) As biometric data constitutes a special category of personal data, it is
appropriate to classify as high-risk several critical use cases of biometric systems, insofar as
their use is permitted under relevant Union and national law. Technical inaccuracies of AI
systems intended for the remote biometric identification of natural persons can lead to biased
results and entail discriminatory effects. The risk of such biased results and discriminatory
effects is particularly relevant with regard to age, ethnicity, race, sex or disabilities.
Remote biometric identification systems should therefore be classified as high-risk in view of
the risks that they pose. Such a classification excludes AI systems intended to be used for
biometric verification, including authentication, the sole purpose of which is to confirm that
a specific natural person is who that person claims to be and to confirm the identity of
a natural person for the sole purpose of having access to a service, unlocking
a device or having secure access to premises. In addition, AI systems intended to be used
for biometric categorisation according to sensitive attributes or characteristics protected
under Article 9(1) of Regulation (EU) 2016/679 on the basis of biometric data, in so far as
these are not prohibited under this Regulation, and emotion recognition systems that are not
prohibited under this Regulation, should be classified as high-risk. Biometric systems which are
intended to be used solely for the purpose of enabling cybersecurity and personal data
protection measures should not be considered to be high-risk AI systems.