What requirements do I have to fulfill to launch a generative AI system under the AI Act?

Gist 1

Let’s begin by looking at the obligations you, as a provider, would have when dealing with what the AI Act classifies as a “foundation model”. This is quite likely the case for a generative AI system.

A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licences, as a service, as well as other distribution channels. (Article 28b)

This means that, prior to the launch of your generative AI system, you’re required to ensure its compliance with the stipulations outlined in the Act. These regulations apply regardless of the model’s distribution mode: standalone, embedded in another product or system, under a free and open-source licence, or offered as a service.

Article 28b goes on to provide details on the specific obligations you must fulfill:

For the purpose of paragraph 1, the provider of a foundation model shall… (Article 28b)

These requirements include the following (a simple tracking sketch follows the list):

  • The identification, reduction, and mitigation of any foreseeable risks, as demonstrated through testing, design, and analysis.
  • Only using datasets that have undergone proper data governance processes.
  • Ensuring that the foundation model achieves appropriate levels of performance, predictability, interpretability, safety, and cybersecurity.
  • Drawing up technical documentation and intelligible instructions for use.
  • Registering the model with the EU database.
  • Setting up a quality management system to ensure ongoing compliance.
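
None of these obligations is code in itself, but many teams track them programmatically alongside the release process. Below is a minimal sketch, assuming a purely hypothetical internal checklist format; the obligation names paraphrase Article 28b, and nothing about the structure is prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    """One Article 28b obligation and the evidence collected for it (hypothetical format)."""
    name: str
    evidence: list[str] = field(default_factory=list)

    @property
    def satisfied(self) -> bool:
        # Treat an obligation as satisfied once at least one evidence artifact is attached.
        return bool(self.evidence)

checklist = [
    Obligation("risk identification, reduction, and mitigation"),
    Obligation("dataset governance"),
    Obligation("performance, predictability, interpretability, safety levels"),
    Obligation("technical documentation and instructions for use"),
    Obligation("EU database registration"),
    Obligation("quality management system"),
]

checklist[0].evidence.append("risk_assessment_v3.pdf")  # hypothetical artifact
print("Outstanding:", [o.name for o in checklist if not o.satisfied])
```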

For our generative AI, there are additional obligations:

Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”) and providers who specialise a foundation model into a generative AI system, shall in addition… (Article 28b)

You would need to ensure:

  • Compliance with the transparency obligations outlined in Article 52(1).
  • The employment of safeguards against harmful content, where applicable, while training and designing the model.
  • Open documentation of the copyrighted training data used (one possible format is sketched below).
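
The Act does not prescribe a format for the copyright documentation. As one illustrative approach, assuming a hypothetical internal record of training sources (the schema is my own), you could generate a publishable summary:

```python
import json

# Hypothetical record of training data sources; the field names are assumptions.
training_sources = [
    {"name": "Common Crawl subset", "licence": "varied", "copyrighted": True},
    {"name": "Public-domain books", "licence": "public domain", "copyrighted": False},
]

def copyright_summary(sources):
    """List the copyrighted sources used in training, for public documentation."""
    return [s["name"] for s in sources if s["copyrighted"]]

with open("training_data_summary.json", "w") as f:
    json.dump({"copyrighted_sources": copyright_summary(training_sources)}, f, indent=2)
```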

Moving on to your transparency obligations:

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself, or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear, and intelligible manner unless this is obvious from the circumstances and the context of use. (Article 52)

This means that people interacting with your AI system must be informed that they are dealing with an AI system unless, given the context, this is evident.
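
In a chat-style product, this can be as simple as attaching a disclosure to the first reply of each session. A minimal sketch follows; the wording, function names, and the generate_reply placeholder are assumptions of mine, not anything mandated by Article 52.

```python
AI_DISCLOSURE = "You are interacting with an AI system, not a human."

def generate_reply(user_message: str) -> str:
    # Placeholder standing in for your actual model call.
    return f"(model reply to: {user_message!r})"

def respond(session: dict, user_message: str) -> str:
    """Attach the AI disclosure to the first reply of a session."""
    reply = generate_reply(user_message)
    if not session.get("disclosed", False):
        session["disclosed"] = True
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

session = {}
print(respond(session, "Hello!"))   # first reply carries the disclosure
print(respond(session, "Thanks."))  # later replies do not repeat it
```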

Next, let’s look at requirements based on potential use cases of your generative AI system, as outlined in Annex III.

AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III, Point 4a)

If you’re planning to use your generative AI for activities related to recruitment and selection, such as job ad placements, application screenings, or candidate evaluations, be aware that there are specific requirements, with a particular focus on transparency, accountability, and data governance.

AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships; (Annex III, Point 4b)

Furthermore, if your AI system is intended to make decisions or significantly influence areas such as contract management, task allocation based on individual behaviour or characteristics, or performance monitoring in a work environment, it will be subject to the same regulations.

AI systems intended to be used for the purpose of determining access or materially influence decisions on admission or assigning natural persons to educational and vocational training institutions; (Annex III, Point 3a)

Finally, if your generative AI system might be used to decide access or influence admission decisions for educational or vocational training institutions, the same transparency, accountability, and data governance principles apply.
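
One practical way to act on these Annex III points is a deployment-time check that flags declared intended uses as high-risk. The sketch below is a deliberate simplification: the category strings and their mapping to Annex III points are assumptions, not an authoritative classifier.

```python
# Simplified mapping from declared intended uses to Annex III points (an assumption,
# not a complete or authoritative reading of the Annex).
HIGH_RISK_USES = {
    "recruitment or candidate screening": "Annex III, 4(a)",
    "employment decisions or worker monitoring": "Annex III, 4(b)",
    "education or vocational admission decisions": "Annex III, 3(a)",
}

def high_risk_points(intended_uses: list[str]) -> list[str]:
    """Return the Annex III points triggered by the declared intended uses."""
    return [HIGH_RISK_USES[use] for use in intended_uses if use in HIGH_RISK_USES]

print(high_risk_points(["recruitment or candidate screening", "creative writing"]))
# -> ['Annex III, 4(a)']
```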

It’s clear that legal compliance for your generative AI system is nuanced and varies based on its intended usage. Consider conducting a risk assessment of the system and, wherever necessary, consult a legal expert specializing in EU legislation.

Gist 2

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. (Article 52)

If you’re launching a generative AI system, you should ensure that the individuals interacting with your system are informed that they are dealing with an AI. The information should be delivered clearly and in a timely manner, unless it’s obvious from the circumstances that the interaction is with an AI system.

Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. (Article 52)

If your system generates or manipulates content that could mislead people into believing the content is genuine, you must disclose that the content is artificially generated. Ideally, also provide the source that generated or manipulated the content.
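
Alongside the visible notice, generated media is often shipped with a machine-readable provenance label; standards such as C2PA exist for this, and the plain JSON sidecar below is only an illustrative stand-in with assumed field names.

```python
import json
from datetime import datetime, timezone

def disclosure_label(generator: str) -> dict:
    """Build a machine-readable label stating that content is AI-generated.
    Field names are illustrative assumptions, not a standardized schema."""
    return {
        "ai_generated": True,
        "generator": generator,  # the natural or legal person that generated it
        "created_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content has been artificially generated or manipulated.",
    }

with open("image_0001.label.json", "w") as f:
    json.dump(disclosure_label("ExampleCorp GenAI v2"), f, indent=2)
```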

A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open-source licenses, as a service, as well as other distribution channels. (Article 28b - paragraph 1)

Before deploying the generative AI system, confirm that it meets all requirements irrespective of its application, whether it’s standalone, embedded, or distributed as a service.

demonstrate through appropriate design, testing and analysis the identification, the reduction, and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment, and democracy and the rule of law prior and throughout development with appropriate methods such as with the involvement of independent experts, as well as the documentation of remaining non-mitigable risks after development (Article 28b - paragraph 2(a))

You should be able to show that reasonably foreseeable risks to health and safety are identified and mitigated. Respect for fundamental rights and avoidance of harm to the environment, democracy, and the rule of law should also be demonstrated, and any remaining non-mitigable risks documented.
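
One common way to keep this evidence is a risk register recording each identified risk, its mitigation, and any residual risk, mirroring the Article’s requirement to document remaining non-mitigable risks. A hypothetical sketch, with invented example rows:

```python
import csv

# Hypothetical risk register; categories echo Article 28b 2(a)
# (health, safety, fundamental rights, environment, democracy, rule of law).
risks = [
    {"risk": "toxic output causing harm", "category": "health/safety",
     "mitigation": "safety fine-tuning plus output filter", "residual": "low"},
    {"risk": "biased outputs in sensitive domains", "category": "fundamental rights",
     "mitigation": "bias evaluation suite before each release", "residual": "medium (documented)"},
]

with open("risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["risk", "category", "mitigation", "residual"])
    writer.writeheader()
    writer.writerows(risks)
```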

A detailed description of the elements of the AI system and of the process for its development, including: the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider; (Annex IV)

Your system’s development process and elements should be well documented. This should include an explanation of any pre-trained models or tools utilized.

A detailed description of the risk management system in accordance with Article 9; (Annex IV)

You should have a comprehensive risk management system and documentation based on Article 9’s stipulations.

The above requirements are key elements for launching a generative AI system under the proposed EU AI Act. Consult with a legal expert to ensure full compliance with all the stipulations in the regulation.

Gist 3

“‘general purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed;” (Article 3)

Given this definition, a generative AI system, which can generate new content across various applications, is likely to be classified as a ‘general purpose AI system’ under the AI Act.

“High-risk AI systems shall comply with the requirements established in this Chapter.” (Article 8)

This points out that if your generative AI system is classified as a high-risk system, it needs to comply with the requirements stated in this Chapter of the AI Act.

“A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system.” (Article 9)

In this context, if your generative AI system is classified as high-risk, a comprehensive risk management system covering the entire lifecycle of the system is necessary.

“High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 as far as this is technically feasible according to the specific market segment or scope of application.” (Article 10)

Since generative AI systems typically use machine learning methods and require training, validation, and testing datasets, this requirement is especially relevant. The datasets used must meet the prescribed quality criteria.
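
What exactly satisfies the quality criteria will depend on the final text, but automated checks over the training, validation, and test splits are a natural starting point. A minimal sketch, with checks and example records that are purely illustrative:

```python
def check_split(name: str, records: list[dict], required_fields: set[str]) -> list[str]:
    """Run basic quality checks on one data split and return a list of findings."""
    findings = []
    if not records:
        findings.append(f"{name}: split is empty")
    for i, record in enumerate(records):
        missing = required_fields - record.keys()
        if missing:
            findings.append(f"{name}[{i}]: missing fields {sorted(missing)}")
    return findings

train = [{"text": "example", "label": "ok"}]
validation = [{"text": "example"}]  # deliberately missing its label
for split_name, split in [("train", train), ("validation", validation), ("test", [])]:
    for finding in check_split(split_name, split, {"text", "label"}):
        print(finding)
```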

“The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to-date.” (Article 11)

This requirement indicates that you need up-to-date technical documentation before and during the operation of the AI system, should it be classified as high-risk.

“AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; […] AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships;” (Annex III: Sections 4a, 4b)

Annex III specifies that if you intend your generative AI system for recruitment, selection, or decision-making purposes affecting work-related contracts, you need to comply with the high-risk AI system requirements under the AI Act.

“A general description of the AI system […]; the nature of data likely or intended to be processed by the system […]; the description of the various configurations and variants of the AI system […]; a detailed and easily intelligible description of the system’s main optimization goal or goals; detailed and easily intelligible instructions for interpreting the system’s output; a detailed description of the elements of the AI system and of the process for its development, including the methods and steps performed for the development of the AI system; detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance; a detailed description of the risk management system in accordance with Article 9; a copy of the EU declaration of conformity; a detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 61, including the post-market monitoring plan referred to in Article 61(3).” (Annex IV)

According to Annex IV, launching your generative AI system would involve preparing detailed technical documentation. It must include a general description of the AI system, the nature of data intended to be processed, descriptions of the different configurations, detailed descriptions of the system’s goals, and instructions for interpreting the system’s output. The documentation should provide detailed descriptions of the system’s development steps, its monitoring, functioning and performance, the compliance of its risk management system in accordance with Article 9, a copy of the EU declaration of conformity, and a description of the system in place to evaluate the AI system performance in the post-market phase.
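
Because this documentation must also be kept up-to-date, one practical approach is to maintain it as structured data that renders into the final document. In the sketch below the keys paraphrase Annex IV headings, but the structure itself is an assumption, not a prescribed schema; "..." marks sections still to be drafted.

```python
technical_documentation = {
    "general_description": {
        "intended_purpose": "...",
        "data_processed": "...",
        "configurations_and_variants": "...",
        "optimisation_goals": "...",
        "output_interpretation_instructions": "...",
    },
    "development_process_and_elements": "...",
    "monitoring_functioning_and_control": "...",
    "risk_management_system": "see Article 9 documentation",
    "eu_declaration_of_conformity": "declaration_of_conformity.pdf",  # hypothetical file
    "post_market_evaluation_plan": "see Article 61 monitoring plan",
}

def incomplete_sections(doc: dict, prefix: str = "") -> list[str]:
    """List the sections that still contain the '...' placeholder."""
    missing = []
    for key, value in doc.items():
        if isinstance(value, dict):
            missing += incomplete_sections(value, prefix + key + ".")
        elif value == "...":
            missing.append(prefix + key)
    return missing

print("Still to draft:", incomplete_sections(technical_documentation))
```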

Please consult with legal experts to make sure your generative AI system meets all the requirements listed above, as per the EU AI Act.

Gist 4

“High-risk AI systems shall comply with the requirements established in this Chapter.” (Article 8)

This means that if your generative AI system falls under the category of ‘high-risk AI system’, it must comply with the requirements established in that chapter of the AI Act, covering safety, transparency, performance, risk management, and other specific obligations under the regulation.

“A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system.” (Article 9)

This implies that you are required to establish a risk management system active throughout the lifecycle of your generative AI system. This system must identify, estimate, and evaluate potential risks, adopt measures to manage these risks, and continuously review and update itself for effectiveness.

“Training, validation and testing data sets shall be subject to data governance appropriate for the context of use as well as the intended purpose of the AI system…” (Article 10)

This indicates that the data sets used in the development of your AI system must meet certain quality criteria and data governance practices. These cover data collection and processing operations, examination for possible biases, identification of relevant data gaps, and more.

“The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to-date.” (Article 11)

Prior to launching your AI system, you are required to prepare technical documentation demonstrating the system’s compliance with the regulation’s requirements. This documentation should provide enough information for authorities to assess the AI system’s compliance and should be updated when necessary.

“AI systems specifically referred to under points 1 to 8a stand for critical use cases and are each considered to be high-risk AI systems pursuant to Article 6(2), provided that they fulfil the criteria set out in that Article.” (Annex III)

This part of the Annex states that all AI systems listed under points 1 to 8a are considered high-risk when they meet the criteria detailed in Article 6(2).

“A general description of the AI system including: its intended purpose, the nature of data likely or intended to be processed by the system, the description of the various configurations and variants of the AI system which are intended to be placed on the market or put into service; a detailed and easily intelligible description of the system’s main optimisation goal or goals; a detailed and easily intelligible description of the system’s expected output and expected output quality.” (Annex IV Part 1)

To launch a generative AI system under the AI Act, you will need to clearly define and document these basic details of the system: its intended purpose, the data it will process, its configurations and variants, its optimisation goals, and its expected output and output quality.

“A detailed description of the elements of the AI system and of the process for its development.” (Annex IV Part 2)

In-depth details on the elements of the AI system and its development process should be provided, including the data that has been used for training the AI, its source, and the methodologies used for data processing and model training.

“Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to the human oversight measures needed in accordance with Article 14.” (Annex IV Part 3)

Appropriate measures for monitoring the functioning and control of the AI system are required, particularly with regard to human oversight.
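
In engineering terms, human oversight often takes the shape of a review queue for outputs the system is unsure about. A toy sketch follows; the confidence threshold and queue mechanics are assumptions of mine, not requirements stated in Article 14.

```python
import queue
from typing import Optional

review_queue: queue.Queue = queue.Queue()

def route_output(output: str, confidence: float, threshold: float = 0.8) -> Optional[str]:
    """Release confident outputs; hold uncertain ones for a human reviewer."""
    if confidence < threshold:
        review_queue.put({"output": output, "confidence": confidence})
        return None  # withheld pending human review
    return output

print(route_output("Candidate ranked 3rd of 40", confidence=0.55))  # None: queued
print("Awaiting human review:", review_queue.qsize())
```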

“A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 61.” (Annex IV Part 8)

To comply with the AI Act, you must have a structured plan to monitor and evaluate the performance of your AI system after it has been launched on the market.
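
At the code level, post-market monitoring usually begins with structured logging of live behaviour that periodic evaluations can draw on. A minimal sketch with illustrative fields; note that it deliberately avoids logging raw user content.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="post_market_monitoring.log", level=logging.INFO)

def log_interaction(prompt_length: int, output_flagged: bool, latency_ms: float) -> None:
    """Record one interaction for post-market performance evaluation."""
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt_length": prompt_length,  # length only, not the content itself
        "output_flagged": output_flagged,
        "latency_ms": latency_ms,
    }))

log_interaction(prompt_length=120, output_flagged=False, latency_ms=850.0)
```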

To summarize, launching a generative AI system involves meeting operational requirements: establishing a risk management system, following data governance practices, and maintaining transparent, up-to-date technical documentation. Additionally, you would need to fulfil the specific criteria for high-risk systems, clearly describe your system’s purpose, intended usage, and development process, and provide a framework for human oversight and post-launch performance evaluation.