(134)
Further to the technical solutions employed by the providers of the AI system,
deployers who use an AI system to generate or manipulate image, audio or video content that
appreciably resembles existing persons, objects, places, entities or events and would falsely
appear to a person to be authentic or truthful (deep fakes), should also clearly and
distinguishably disclose that the content has been artificially created or manipulated by
labelling the AI output accordingly and disclosing its artificial origin. Compliance with this
transparency obligation should not be interpreted as indicating that the use of the AI system or
its output impedes the right to freedom of expression and the right to freedom of the arts and
sciences guaranteed in the Charter, in particular where the content is part of an evidently
creative, satirical, artistic, fictional or analogous work or programme, subject to appropriate
safeguards for the rights and freedoms of third parties. In those cases, the transparency
obligation for deep fakes set out in this Regulation is limited to disclosure of the existence
of such generated or manipulated content in an appropriate manner that does not hamper the
display or enjoyment of the work, including its normal exploitation and use, while maintaining
the utility and quality of the work. In addition, it is appropriate to envisage
a similar disclosure obligation in relation to AI-generated or manipulated text to the
extent it is published with the purpose of informing the public on matters of public interest,
unless the AI-generated content has undergone a process of human review or editorial
control and a natural or legal person holds editorial responsibility for the publication of
the content.