Transparency Requirements for Low-Risk AI Systems under the AI Act


How is transparency defined in the AI Act and what transparency requirements apply to low-risk AI systems?

Executive Summary

In response to the AI Act’s transparency requirements for low-risk AI systems, here’s a concise summary identifying the key legal considerations for entrepreneurs:

  • Definition of Transparency: The AI Act establishes transparency as a fundamental attribute across all AI system classifications, emphasizing the importance of clear user communication, particularly for systems impacting rights and well-being.
  • Transparency Requirements: All AI systems, including low-risk ones, must:
    • Reveal their AI nature during direct interaction with users.
    • Disclose when content is AI-generated or manipulated, ensuring users are aware of its artificial origin.
    • Adopt good practices in data management, enhancing trust and compliance.
  • Transparency as a Best Practice: While specific transparency obligations are mandated for high-risk AI systems, maintaining logs and transparent data governance are considered beneficial across all AI systems, contributing to overall accountability and ethical use.

By incorporating these transparency principles into your AI-driven business models, you can align with the AI Act’s expectations and foster responsible innovation in the EU market.


Assumptions

  1. ‘Low-risk’ Definition: For the purpose of this analysis, we will assume “low-risk” AI systems are those not explicitly classified as high-risk within Annex III of the AI Act or outlined in Article 5’s prohibitions.

  2. Scope of ‘Transparency’: We will assume ‘transparency’ refers to all aspects relevant to the regulation—development, data gathering, algorithmic decision-making, and user interaction.

  3. Applicability of EU AI Act: Given the context, we will assume that the AI systems in question fall within the scope of the EU AI Act and are intended for use within the European Union.

Legal trace

Understanding Transparency in the Context of the AI Act

(39) AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons… Recital 39

This recital emphasizes the essential role of transparency in AI systems, particularly those used in contexts that significantly affect individuals’ rights and livelihoods. It highlights that transparency is not merely a technical feature but a cornerstone principle for safeguarding fundamental rights. The same need for clarity and openness extends to low-risk AI systems, since fairness and the protection of users’ rights depend on it across all AI applications.

Artificial intelligence is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space for experimentation… Recital 71

While not explicitly mentioning transparency, Recital 71 implies the necessity of clear and responsible innovation under regulatory oversight, suggesting that transparency underpins the development and use of AI technologies, even those regarded as low-risk. This consideration for transparency is rooted in the need for a controlled environment where experimentation with AI can be monitored and evaluated safely, fostering a culture of responsible innovation.

The Classification of AI Systems and Low-risk Differentiation

  1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to…that AI system shall be considered high-risk where both of the following conditions are fulfilled… Article 6(1)

Article 6(1) classifies an AI system as high-risk where it is intended to be used as a safety component of a product (or is itself a product) covered by the Union harmonisation legislation listed in Annex II, and that product is required to undergo a third-party conformity assessment. Systems not fulfilling these cumulative conditions are lower-risk by exclusion, which implies a correspondingly lighter set of transparency requirements compared to high-risk systems.

  2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk… Article 6(2)

By detailing the types of systems that are considered high-risk due to their significant risk potential, Article 6 implicitly differentiates low-risk systems. These are systems not associated with high-risk areas or significant harm, thus suggesting a tiered approach to transparency obligations within the regulatory framework.

Transparency Obligations Across AI Systems

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system…informs the natural person…that they are interacting with an AI system… Article 52(1)

Article 52 sets the baseline for transparency, mandating that AI systems intended to interact with natural persons inform users of that fact, irrespective of the system’s risk classification. This includes low-risk systems, which must likewise provide clear, timely, and intelligible disclosure to users, making transparency an intrinsic aspect of AI system development.
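As an illustrative sketch only, not a prescribed compliance mechanism, the Article 52(1)-style duty to disclose the AI interaction could be implemented by prepending a one-time notice to a chat session’s first response. The function name, session structure, and disclosure wording below are assumptions for illustration:

```python
# Hypothetical sketch: disclose the AI nature of a chat interaction once per
# session, before the first generated answer. Names and wording are assumed,
# not mandated by the AI Act.

AI_DISCLOSURE = (
    "You are interacting with an AI system. "
    "Responses are generated automatically."
)

def build_reply(session_state: dict, model_answer: str) -> str:
    """Attach the disclosure once per session, ahead of the first answer."""
    if not session_state.get("disclosure_shown", False):
        session_state["disclosure_shown"] = True
        return f"{AI_DISCLOSURE}\n\n{model_answer}"
    return model_answer
```

Keeping the disclosure flag in per-session state ensures the notice appears at the start of the interaction without being repeated on every turn.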

Specific Transparency Considerations for Low-risk AI Systems

Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic… shall disclose… that the content has been artificially generated or manipulated… Article 52(3)

Article 52 extends specific transparency requirements to content-generating AI systems, which could include low-risk applications. These stipulations ensure users are informed about the artificial nature of such content, highlighting a particularized transparency directive that can apply to AI systems across the risk spectrum, including those with lower risk profiles.
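One hedged way to operationalize this Article 52(3)-style disclosure is to attach a machine-readable provenance record to generated content. The field names and JSON structure below are assumptions, not a regulatory schema:

```python
# Illustrative sketch: wrap AI-generated text in an explicit provenance record
# so downstream users can see the content was artificially generated.
# Field names are assumed for illustration only.

import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    """Return a JSON record that discloses the content's artificial origin."""
    record = {
        "content": text,
        "ai_generated": True,
        "generator": model_name,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)
```

A machine-readable label of this kind can travel with the content through downstream systems, whereas a purely visual watermark may be stripped on copy.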

Training, validation and testing data sets shall be subject to data governance appropriate for the context of use… Article 10(2)

While Article 10 concerns high-risk AI systems, the principles of data governance and transparency in data handling are good practices that apply broadly. Transparency regarding data collection purposes and processing methods enhances trust and ethical use, guiding the development and deployment of lower-risk AI systems as well.
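As a sketch of the kind of voluntary data-governance documentation suggested here, a provider could maintain a lightweight “datasheet”-style record for each training data set. The class name and keys below are assumptions, not an AI Act requirement:

```python
# Illustrative sketch: a minimal dataset documentation record capturing
# collection purpose and sources, in the spirit of Article 10's data-governance
# practices. Field names are assumed for illustration only.

from dataclasses import dataclass, field, asdict

@dataclass
class DatasetRecord:
    name: str
    collection_purpose: str
    sources: list
    processing_steps: list = field(default_factory=list)

    def summary(self) -> dict:
        """Return a plain-dict summary suitable for publication or audit."""
        return asdict(self)
```

Publishing or retaining such summaries supports the transparency-as-good-practice argument made above, even where Article 10 does not formally apply.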

Providers of high-risk AI systems shall…keep the logs automatically generated by their high-risk AI systems… Article 16(d)

The emphasis on record-keeping in Article 16, while specific to high-risk systems, reflects a broader principle of transparency within AI system operations suitable for all AI applications, including low-risk ones. Maintaining logs supports transparency by enabling accountability and auditability, which is advantageous for any AI deployment seeking to demonstrate responsible use and compliance with regulatory norms.
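For low-risk systems adopting this record-keeping principle voluntarily, a minimal structured audit log per inference could look like the sketch below. The event fields and logger name are assumptions for illustration, not requirements of Article 16(d):

```python
# Illustrative sketch: emit one structured, machine-parseable audit record per
# inference, in the spirit of Article 16(d)'s log-keeping duty for high-risk
# providers. Field names are assumed for illustration only.

import json
import logging

logger = logging.getLogger("ai_system.audit")

def log_inference(request_id: str, model_version: str, outcome: str) -> str:
    """Build a JSON audit record, log it, and return it for testing/archival."""
    record = json.dumps({
        "event": "inference",
        "request_id": request_id,
        "model_version": model_version,
        "outcome": outcome,
    })
    logger.info(record)
    return record
```

Structured JSON records, rather than free-text log lines, make later audits and automated compliance checks considerably easier.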

In conclusion, the AI Act treats transparency as an overarching theme running through its fundamental requirements and risk-based classifications. For low-risk systems, transparency is woven into development, user interaction, content dissemination, data governance, and operational practice, ensuring that users are well informed and protected in their AI engagements.