Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications. (Article 52.3)
Users of a deepfake AI app or website are therefore required to disclose, in an appropriate, timely, clear and visible manner, that the content has been artificially generated or manipulated, and to label it accordingly. Wherever possible, they should also name the natural or legal person behind the generation or manipulation. The disclosure must be presented so that any recipient can readily see and understand that the content is not authentic, and the labelling should follow the generally acknowledged state of the art and relevant harmonised standards and specifications.
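What counts as adequate labelling will depend on the medium and on evolving standards, but the general idea can be illustrated. Below is a minimal sketch, assuming a Python pipeline built on the Pillow imaging library; the wording of the notice, its placement, and the metadata keys are illustrative choices, not requirements of the Act.

```python
# Minimal sketch of one way to implement the Article 52(3) labelling duty.
# The notice text, banner placement and metadata keys are illustrative.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_deepfake(in_path: str, out_path: str, generator: str) -> None:
    """Stamp a clearly visible disclosure onto the image and embed
    machine-readable provenance metadata alongside it."""
    img = Image.open(in_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    notice = f"AI-generated content - created by {generator}"

    # Visible banner: a filled strip at the bottom so the disclosure
    # remains legible regardless of the underlying content.
    banner_height = max(24, img.height // 20)
    draw.rectangle(
        [(0, img.height - banner_height), (img.width, img.height)],
        fill=(0, 0, 0),
    )
    draw.text((10, img.height - banner_height + 4), notice, fill=(255, 255, 255))

    # Machine-readable disclosure, so downstream platforms can detect the label.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(out_path, format="PNG", pnginfo=meta)
```

In practice, "state of the art" labelling may also come to involve emerging provenance standards such as C2PA content credentials, although the Act itself does not name any specific standard.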
Paragraph 3 shall not apply where the use of an AI system that generates or manipulates text, audio or visual content is authorized by law or if it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties. (Article 52.3a)
The disclosure obligation, however, does not apply where the use of the AI system is authorised by law or is necessary for the exercise of the freedom of expression or of the arts and sciences guaranteed by the EU Charter of Fundamental Rights. Even then, the exception remains subject, in any given case, to appropriate safeguards for the rights and freedoms of third parties.
The following artificial intelligence practices shall be prohibited: the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm; (Article 5.1(a))
Article 5 prohibits certain AI practices outright. Any AI system, including a deepfake app or website, that deploys subliminal techniques or purposefully manipulative or deceptive techniques to materially distort a person's or group's behaviour, impairing their ability to make an informed decision and causing, or being likely to cause, significant harm, is strictly prohibited.
While the high-risk classification of deepfakes under the EU AI Act is subject to debate, Annex III sets out no explicit user obligations for deepfake technologies. Should they be classified as high-risk, however, their providers would face stringent requirements concerning risk assessments, risk management systems, quality management, and transparency. As for users, the disclosure obligation applies in full, but the Act imposes no more specific obligations on individual users, particularly for non-commercial, personal use of deepfakes.
These obligations and prohibitions exist to protect individuals and groups from the potentially harmful effects of AI systems, particularly content-manipulating systems such as deepfake apps and websites. Their use must be transparent and clear, and must not undermine individual decision-making or lead to significant harm.
Such requirements on transparency and on the explicability of AI decision-making should also help to counter the deterrent effects of digital asymmetry and so-called ‘dark patterns’ targeting individuals and their informed consent. (Recital 47a)
This statement from Recital 47a indicates that the EU AI Act expects transparency and explicability in AI decision-making from users of AI applications such as deepfake apps and websites. These requirements are meant to counter 'digital asymmetry', the disproportionate power that AI developers and platforms hold over ordinary users, and 'dark patterns', manipulative design elements that deceive or pressure users into choices that are not in their best interest and that undermine informed consent.
Although the recital does not explicitly mention deepfake apps and websites, the general principle applies to them as well. In practice, this means users must prioritize transparency in their AI decision-making: they should be able to explain clearly how their applications function, make their operations visible, and justify or account for the decisions their AI produces.
Furthermore, users should ensure that they do not engage in activities that mislead individuals or obscure the fact that they are interacting with an AI system rather than a human. This includes not deploying dark patterns that trick individuals into unwittingly giving their consent.
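What avoiding a dark pattern means concretely is context-dependent, but two recurring rules of thumb, unchecked-by-default and unbundled consent, can be shown in code. The following is a hypothetical Python sketch that generates a consent form; the field names and wording are invented for illustration.

```python
# Hypothetical sketch of consent UI choices that avoid two common dark
# patterns: pre-ticked boxes and bundled consent. The HTML is generated
# inline only to keep the example self-contained.
def consent_form() -> str:
    return """
    <form method="post" action="/consent">
      <!-- Unchecked by default: consent must be an affirmative act. -->
      <label>
        <input type="checkbox" name="consent_likeness" value="yes">
        I consent to my likeness being used to generate synthetic media.
      </label>
      <!-- Separate, specific purpose rather than one bundled checkbox. -->
      <label>
        <input type="checkbox" name="consent_voice" value="yes">
        I consent to my voice being cloned for this project only.
      </label>
      <button type="submit">Submit choices</button>
      <!-- Declining is as easy and as visible as accepting. -->
      <button type="submit" name="decline" value="yes">Decline all</button>
    </form>
    """
```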
In a nutshell, users of deepfake apps and websites are expected to act transparently and responsibly, in a manner that respects individuals' informed consent.
Beyond the AI Act, obligations may also arise under the General Data Protection Regulation (GDPR) and other relevant laws; consulting a legal advisor is recommended to ensure full compliance with all applicable EU regulations.
The information referred to in paragraphs 1 to 3 shall be provided to the natural persons at the latest at the time of the first interaction or exposure. It shall be accessible to vulnerable persons, such as persons with disabilities or children, complete, where relevant and appropriate, with intervention or flagging procedures for the exposed natural person taking into account the generally acknowledged state of the art and relevant harmonised standards and common specifications. (Article 52)
Article 52 further specifies that the disclosure must be made at the latest at the time of the first interaction or exposure. It also requires the information to be accessible to vulnerable groups, such as persons with disabilities or children, complemented where appropriate by intervention or flagging procedures. Users therefore carry an additional layer of responsibility regarding the timing, form, and audience of the disclosure.
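How "at the latest at the time of the first interaction or exposure" might be honoured in a web context is left to implementers. As one possible reading, a handler could render the disclosure together with, and ahead of, the content itself. The sketch below is hypothetical; the role="alert" attribute is one simple way to have the notice announced immediately by screen readers, gesturing at the accessibility requirement.

```python
# Hypothetical sketch: the disclosure is emitted before the content in the
# same response, so no one is exposed to the deepfake without seeing it.
DISCLOSURE_HTML = (
    # role="alert" prompts assistive technologies to announce the notice
    # immediately, a simple step toward the Act's accessibility expectation.
    '<div role="alert">This video has been artificially generated or '
    "manipulated ('deep fake').</div>"
)

def render_deepfake_page(content_url: str) -> str:
    # Disclosure first, content second: the label is visible no later than
    # the recipient's first exposure to the manipulated content.
    return DISCLOSURE_HTML + f'\n<video src="{content_url}" controls></video>'
```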
The following artificial intelligence practices shall be prohibited: the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques…; the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces; the placing on the market, putting into service or use of an AI system for making risk assessments of natural persons or groups thereof in order to assess the risk of a natural person for offending or reoffending or for predicting the occurrence or reoccurrence of an actual or potential criminal or administrative offence based on profiling of a natural person or on assessing personality traits and characteristics, including the person’s location, or past criminal behaviour of natural persons or groups of natural persons. (Article 5)
Although it does not reference deepfakes directly, Article 5 signals that users of deepfake apps and websites are bound by prohibitions on certain deceptive and manipulative practices, as discussed above, as well as on real-time remote biometric identification in public spaces and on profiling-based criminal risk assessments. The EU AI Act takes a firm stance against AI uses that could harm personal rights or subvert a person's free will.
To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. (Recital 47)
Recital 47 suggests that users of high-risk AI systems, which could include some deepfake applications, have an implied obligation to use them correctly: to follow the accompanying documentation and instructions, interpret the system's output appropriately, and take the disclosed risks into account, especially those concerning fundamental rights and discrimination.
It is important to note that AI systems should make best efforts to respect general principles establishing a high-level framework that promotes a coherent human-centric approach to ethical and trustworthy AI in line with the Charter of Fundamental Rights of the European Union and the values on which the Union is founded, including the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, non-discrimination and fairness and societal and environmental wellbeing. (Recital 9a)
Recital 9a sets out general principles that all AI systems, including deepfake apps and websites, should make best efforts to respect. Their use should align with the protection of fundamental rights, human agency and oversight, technical robustness and safety, privacy and data governance, transparency, non-discrimination and fairness, and societal and environmental wellbeing. Whatever the purpose, use of these services must not violate these principles.
In accordance with Article 114(2) TFEU, this Regulation complements and should not undermine the rights and interests of employed persons. This Regulation should therefore not affect Union law on social policy and national labour law and practice, that is any legal and contractual provision concerning employment conditions, working conditions, including health and safety at work and the relationship between employers and workers, including information, consultation and participation. This Regulation should not affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or freedom to strike or to take other action covered by the specific industrial relations systems in Member States, in accordance with national law and/or practice. Nor should it affect concertation practices, the right to negotiate, to conclude and enforce collective agreement or to take collective action in accordance with national law and/or practice. It should in any event not prevent the Commission from proposing specific legislation on the rights and freedoms of workers affected by AI systems. (Recital 2d)
Although Recital 2d primarily concerns the rights of employed persons, it underlines more broadly that the operation of AI systems, including deepfake apps, should not undermine existing rights, including the right to information. Users therefore have an obligation, when sharing or using deepfakes, to observe the rights of the individuals involved, especially where the AI systems are applied in an employment context.
Where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic, video games visuals and analogous work or programme, transparency obligations set out in paragraph 3 are limited to disclosing of the existence of such generated or manipulated content in an appropriate clear and visible manner that does not hamper the display of the work and disclosing the applicable copyrights, where relevant. (Article 52)
Article 52 thus provides a lighter regime for evidently creative, satirical, artistic or fictional works: users need only disclose the existence of the generated or manipulated content, in an appropriately clear and visible manner that does not hamper the display of the work, together with any applicable copyrights.
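In engineering terms, this can be read as two disclosure modes: a prominent on-content label by default and an unobtrusive existence notice for evidently creative works. The sketch below assumes the application is told which regime applies; whether a work is 'evidently creative, satirical, artistic or fictional' is a legal judgment that code cannot make, and the names used here are hypothetical.

```python
from enum import Enum, auto

class DisclosureMode(Enum):
    FULL_LABEL = auto()      # prominent, clearly visible label on the content
    EXISTENCE_ONLY = auto()  # unobtrusive notice that does not hamper display

def pick_disclosure_mode(is_evidently_creative: bool) -> DisclosureMode:
    """Select the transparency regime under Article 52.

    `is_evidently_creative` must come from a human or policy decision;
    it reflects a legal classification, not a technical one.
    """
    if is_evidently_creative:
        # Limited duty: disclose the existence of generated or manipulated
        # content (e.g. in credits or an accompanying description), plus
        # applicable copyrights where relevant.
        return DisclosureMode.EXISTENCE_ONLY
    # Default regime: full, clearly visible labelling of the content itself.
    return DisclosureMode.FULL_LABEL
```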
‘deep fake’ means manipulated or synthetic audio, image or video content that would falsely appear to be authentic or truthful, and which features depictions of persons appearing to say or do things they did not say or do, produced using AI techniques, including machine learning and deep learning; (Article 3)
Article 3 defines 'deep fake' for the purposes of the EU AI Act: manipulated or synthetic audio, image or video content, produced using AI techniques including machine learning and deep learning, that would falsely appear authentic and depicts persons appearing to say or do things they did not say or do.
‘operator’ means the provider, the deployer, the authorised representative, the importer and the distributor; (Article 3)
Article 3 also defines 'operator' broadly, covering the provider, the deployer, the authorised representative, the importer and the distributor. Depending on its specific role within the system, each operator may bear different responsibilities.
AI systems intended to influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. (Annex III - 8(aa))
Annex III of the AI Act may also bear indirectly on deepfake use. Content created with the intent to influence the outcome of an election or referendum, or to alter voting behaviour, could fall under this high-risk category.
AI systems intended to be used by social media platforms that have been designated as very large online platforms within the meaning of Article 33 of Regulation EU 2022/2065, in their recommender systems to recommend to the recipient of the service user-generated content available on the platform. (Annex III - 8(ab))
Likewise, deepfake content distributed or recommended through the recommender systems of very large online platforms could fall under this high-risk category.
In conclusion, the EU AI Act requires users of deepfake apps and websites to disclose when content is artificially generated or manipulated and to label it clearly. Exceptions apply where the use is authorised by law or necessary for freedom of expression or of the arts and sciences, and a lighter, existence-only disclosure applies where the content forms part of an evidently creative, satirical or fictional work.