Is there an obligation to mark content prepared by AI?

Gist 1

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. (Article 52)

The above statement highlights an obligation for providers to inform individuals when they are interacting with an AI system. However, this does not necessarily extend to marking content prepared by AI, and the obligation does not apply where the interaction with an AI is obvious from the circumstances and the context of use.
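Article 52 does not prescribe how such a notice must be delivered. Purely as an illustrative sketch, and not anything mandated by the Act, a chat-style application might surface the information up front along the following lines; the `generate_reply` backend and the wording of the notice are assumptions made for the example.

```python
# Illustrative sketch only: the AI Act does not prescribe how the notice
# must be delivered. The backend and notice wording below are assumptions.
AI_NOTICE = "Please note: you are interacting with an AI system, not a human."

def generate_reply(user_message: str) -> str:
    """Placeholder for whatever model or API actually produces the answer."""
    return f"(model answer to: {user_message!r})"

def chat_turn(user_message: str, is_first_turn: bool) -> str:
    reply = generate_reply(user_message)
    if is_first_turn:
        # Deliver the notice in a timely manner: before or with the first reply.
        return f"{AI_NOTICE}\n\n{reply}"
    return reply

# Example usage:
# print(chat_turn("What are my obligations under Article 52?", is_first_turn=True))
```

In practice the notice could equally be conveyed by the interface itself, provided it reaches the person in a timely, clear and intelligible manner.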

Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. (Article 52)

Here we see a more direct reference to marking content. AI-generated or manipulated content that would falsely appear authentic or truthful, and that depicts people saying or doing things they did not say or do, must be clearly disclosed as artificially generated or manipulated. It is important to note that this obligation applies only to 'deep fake' content.

The following artificial intelligence practices shall be prohibited: the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm; (Article 5)

While Article 5 does not directly reference obligations around marking AI-generated content, it does prohibit certain deceptive or manipulative uses of AI, which could include undisclosed AI-generated content.

Annex III provides important context regarding high-risk AI systems, but it does not specify any direct obligation to mark content prepared by AI.

To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. (Recital 47)

Recital 47, while not explicit about content-marking requirements, suggests a broad need for transparency in high-risk AI systems, which could include disclosing that something has been prepared or generated by AI.

Access to data of high quality plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become a source of discrimination prohibited by Union law. (Recital 44)

While Recital 44 does not address content-marking obligations directly, it emphasizes the importance of safety and non-discrimination in AI systems, which bears indirectly on the responsible handling of AI-generated content.

In conclusion, there are specific requirements to mark 'deep fake' content, but the Act does not appear to contain an explicit, broad requirement to mark all content prepared by AI. Other obligations around transparency and the prohibition on using AI for deception could imply a broader requirement, but that would require interpretation and might depend on the context.

Gist 2

Users of an AI system that generates or manipulates content which would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications. (Article 52)

This excerpt indicates that if an AI system is used to create or manipulate content that falsely appears authentic (a 'deep fake'), the user of that AI system must disclose this fact and, whenever possible, the identity of the person or organization that generated or manipulated the content. The disclosure must be presented in a way that clearly conveys the content's inauthenticity to its recipient. When labelling such content, users must take into account the generally acknowledged state of the art and relevant harmonised standards and specifications.
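Article 52 does not prescribe a technical labelling format, and the sketch below does not implement any harmonised standard. It is only an illustration, assuming the Pillow library is available, of how a user might attach both a machine-readable disclosure note (as PNG metadata) and a human-readable caption to a generated image; the metadata keys and the `label_generated_png` helper are hypothetical.

```python
# Illustrative sketch only: the AI Act does not prescribe a labelling format.
# Assumes Pillow (PIL); the metadata keys and helper name are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

DISCLOSURE = "This content has been artificially generated or manipulated."

def label_generated_png(src_path: str, dst_path: str, generator: str) -> str:
    """Embed a machine-readable disclosure note in PNG metadata and return
    the human-readable caption that should be displayed next to the image."""
    metadata = PngInfo()
    metadata.add_text("Disclosure", DISCLOSURE)   # hypothetical key, not a standard
    metadata.add_text("GeneratedBy", generator)   # name of the generating person/entity
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=metadata)
    return f"{DISCLOSURE} Generated or manipulated by: {generator}."

# Example usage (paths and name are placeholders):
# caption = label_generated_png("output.png", "output_labelled.png", "Example Media Ltd")
# print(caption)
```

Whatever mechanism is used, the decisive point under Article 52 is that the disclosure remains clearly visible to the recipient of the content, not merely embedded in metadata.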

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. (Article 52)

The first paragraph of Article 52 places an obligation on providers of AI systems designed to interact with humans: such systems must be designed and developed so that the person is informed, in a timely, clear and intelligible manner, that they are interacting with an AI, unless this is obvious from the circumstances and the context of use.

Annex III primarily lists the use cases that constitute high-risk AI systems. While it covers many areas of application, none of them explicitly requires marking or distinguishing content prepared by AI.

To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. (Recital 47)

While Recital 47 stresses the need for transparency in high-risk AI systems, and in particular that users should be able to interpret their output, it does not specifically call for marking content created by AI.

High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. (Recital 47)

The second part of Recital 47 reiterates the importance of relevant documentation and instructions of use, which should include concise and clear information on possible risks to fundamental rights and discrimination. However, it does not directly address or mandate the marking of AI-generated content.

So, based on the analyzed parts of the EU AI Act, there is an obligation to disclose when content has been generated or manipulated by an AI system so that it falsely appears authentic, as in the case of 'deep fakes', and to disclose when a user is interacting directly with an AI system. However, there does not appear to be a universal requirement to mark all content generated by AI systems.

Gist 3

After a careful examination of the EU AI Act, the following interpretation can be given to answer your question:

Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. (Article 52 (3))

From this provision, we can observe that when an AI system generates or manipulates content such as 'deep fakes', there is a requirement to disclose that the content was artificially generated or manipulated. This disclosure should, whenever possible, include the identity of the natural or legal person that generated or manipulated the content, and must be made in a way that is clear and visible to the recipient of that content.

High-risk AI systems shall be accompanied by intelligible instructions for use in an appropriate digital format or made otherwise available in a durable medium that include concise, correct, clear and to the extent possible complete information that helps operating and maintaining the AI system as well as supporting informed decision-making by users and is reasonably relevant, accessible and comprehensible to users. (Article 13 (2))

Under Article 13, high-risk AI systems must be accompanied by clear and comprehensible instructions for use, with transparency as a guiding principle. Although not explicit, this transparency could extend to marking or differentiating AI-generated content, ensuring users have the information needed to make informed decisions.

Regarding the stipulations found in Annex IV:

(f) where the AI system is a component of products, photographs or illustrations showing external features, marking and internal layout of those products; (Annex IV)

This provision concerns technical documentation: where an AI system is a component of a product, the documentation must include photographs or illustrations showing the product's external features, marking and internal layout. It does not state a need to mark AI-generated content, but it does reflect a broader expectation of transparency about AI components in products.

In the Recitals:

High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate. (Recital 47)

This statement suggests a general expectation for transparency with high-risk AI systems, including providing appropriate documentation and clear instructions for use. While it does not explicitly state the need to mark AI-generated content, it frames a potential expectation for comprehensive and clear information about a system’s capabilities, which might include content generation.

In conclusion, while the EU AI Act does not explicitly mandate that all AI-generated content be labelled as such, several provisions highlight the importance of transparency and informed decision-making for users. This could suggest a best practice of marking AI-generated content even where it is not explicitly required. In the case of deep fakes, disclosure of AI generation or manipulation is clearly mandated. It is therefore advisable to consider both the specific context and the Act's general principles of transparency and user awareness when determining obligations around marking AI-generated content.

Gist 4

Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications. (Article 52)

From this extract, it’s clear that the EU AI Act does impose obligations to mark or label certain types of AI-generated content. Specifically, this obligation applies to AI-generated content that creates false depictions of individuals saying or doing things they did not actually say or do, known as ‘deep fakes’. Under these circumstances, the Act requires this type of AI-generated content to be clearly labelled as artificially generated or manipulated.

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. (Article 52)

In the context of AI systems designed to interact with people, the EU AI Act imposes an obligation to disclose that users are interacting with an AI system. This disclosure must be timely, clear, and intelligible. However, there is no explicit requirement in the Act that all content generated by AI systems be marked or labelled as such.

The obligation to mark content prepared by AI therefore appears to be specific rather than general. The regulation explicitly requires marking AI-generated 'deep fakes', but it does not explicitly cover other types of AI-generated content, and Annex III, which lists high-risk AI systems, does not include specific obligations to mark content generated by those systems. Accordingly, the obligation to mark content prepared by AI is not universally applicable and depends on the specific circumstances and context of use. Note, however, that EU Member States might impose additional requirements or guidelines on marking AI-generated content.