According to the AI Act, what steps are required for AI systems that fall under the high-risk category?

Gist 1

High-risk AI systems shall comply with the requirements established in this Chapter. (Article 8)

This Article stresses that high-risk AI systems are required to abide by certain obligations under the AI Act. Compliance with these rules forms a fundamental part of their operation.

A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. (Article 9)

The Act emphasizes that a risk management system must be established and consistently maintained when deploying high-risk AI systems. The system must span the entire lifecycle of the AI system, indicating a need for continuous risk assessment and management.
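
Though the Act prescribes no particular tooling, a minimal Python sketch may make the lifecycle-wide duty concrete: a documented risk register that keeps accepting entries after deployment. All names here (`RiskRegister`, `LifecyclePhase`, the 1–5 severity scale) are hypothetical, not terminology from the Act.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class LifecyclePhase(Enum):
    DESIGN = auto()
    DEVELOPMENT = auto()
    DEPLOYMENT = auto()
    POST_MARKET = auto()

@dataclass
class Risk:
    description: str
    severity: int            # hypothetical scale: 1 (negligible) .. 5 (critical)
    mitigation: str
    phase: LifecyclePhase

@dataclass
class RiskRegister:
    """Documented, maintained record of risks, echoing the Article 9 duties."""
    risks: list[Risk] = field(default_factory=list)

    def log(self, risk: Risk) -> None:
        self.risks.append(risk)

    def for_phase(self, phase: LifecyclePhase) -> list[Risk]:
        return [r for r in self.risks if r.phase is phase]

# A risk identified after deployment is still logged: Article 9 requires the
# system to be maintained across the whole lifecycle, not just at design time.
register = RiskRegister()
register.log(Risk("Drift in input data distribution", 3,
                  "Scheduled re-validation of the model", LifecyclePhase.POST_MARKET))
```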

High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 as far as this is technically feasible according to the specific market segment or scope of application. (Article 10)

To ensure the safe and effective operation of high-risk AI systems, the training, validation, and testing data sets must meet the quality criteria laid down in Article 10. This implies that rigorous dataset management and scrutiny are integral to AI system development.
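
As a rough illustration only (Article 10(2)–(5) concerns matters such as relevance, representativeness, and completeness, which this sketch does not implement), a provider might automate basic dataset screening along these lines; the function name, checks, and example data are hypothetical.

```python
import math

def dataset_quality_report(records: list[dict], required_fields: set[str]) -> dict:
    """Hypothetical screening loosely inspired by Article 10's quality criteria
    (e.g. completeness, freedom from obvious errors); not the Act's own tests."""
    total = len(records)
    complete = sum(
        1 for r in records
        if required_fields <= r.keys() and all(r[f] is not None for f in required_fields)
    )
    numeric_errors = sum(
        1 for r in records for v in r.values()
        if isinstance(v, float) and (math.isnan(v) or math.isinf(v))
    )
    return {
        "records": total,
        "complete_fraction": complete / total if total else 0.0,
        "numeric_errors": numeric_errors,
    }

report = dataset_quality_report(
    [{"age": 34.0, "label": 1}, {"age": float("nan"), "label": 0}],
    required_fields={"age", "label"},
)
print(report)  # {'records': 2, 'complete_fraction': 1.0, 'numeric_errors': 1}
```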

Biometric and biometrics-based systems: AI systems intended to be used for biometric identification of natural persons, with the exception of those mentioned in Article 5; AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; (Annex III, 1)

This part of the Act identifies AI systems used for biometric identification or emotion recognition as high-risk. It implies that AI systems handling sensitive personal data must ensure top-tier regulatory compliance due to the risks they pose.

The provider verifies that the established quality management system is in compliance with the requirements of Article 17. (Annex VI)

This indicates that, as a provider of a high-risk AI system, it is crucial to ensure your quality management system aligns with the Act’s requirements as laid out in Article 17. This system forms the backbone of managing the AI system throughout its implementation and lifecycle.
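
A hedged sketch of how a provider might track coverage of its QMS documentation; the aspect strings below loosely paraphrase a few items Article 17 covers and are neither exhaustive nor verbatim.

```python
# Hypothetical checklist: the strings paraphrase a few of the aspects that
# Article 17 covers; they are neither exhaustive nor the Act's wording.
ARTICLE_17_ASPECTS = [
    "strategy for regulatory compliance",
    "design and development procedures",
    "data management procedures",
    "risk management system (Article 9)",
    "post-market monitoring system",
    "record keeping and documentation",
]

def qms_gaps(documented_aspects: set[str]) -> list[str]:
    """Return the checklist aspects the provider's QMS does not yet document."""
    return [a for a in ARTICLE_17_ASPECTS if a not in documented_aspects]

print(qms_gaps({"strategy for regulatory compliance", "data management procedures"}))
# -> the four aspects still missing from the documentation
```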

Conformity based on assessment of quality management system and assessment of the technical documentation is the conformity assessment procedure based on points 2 to 5. (Annex VII)

Conformity for high-risk AI systems is based on the evaluation of the quality management system and the technical documentation. Annex VII stresses the necessity for these two components in ensuring AI systems adhere to the standards stipulated in the AI Act.

In conclusion, developing and deploying a high-risk AI system under the AI Act requires one to understand and meet the obligations set forth in the Act. This involves comprehending what designates an AI system as high-risk, establishing a risk management system, ensuring the use of quality data, preparing accurate technical documentation, setting up a compliant quality management system, and conforming to the regulatory procedures outlined in Annexes VI and VII.

Gist 2

“Classification rules for high-risk AI systems.” (Article 6)

Article 6 specifies the classification rules for high-risk AI systems. According to this Article, an AI system will be considered high-risk if it is used as a safety component of a product, or the AI system is the product itself. The AI system or the product, where the safety component is the AI system, must undergo third-party conformity assessments before it can be placed on the market or used. Additionally, AI systems in critical areas and use cases referred to in the relevant Annex III will also be considered high-risk if they pose substantial threats to the health, safety, or fundamental rights of people, or the environment.
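
To show the two routes into the high-risk category side by side, here is a deliberately simplified decision function; the real Article 6 contains conditions and exceptions this toy logic does not capture, and every parameter name is our own.

```python
def is_high_risk(is_safety_component: bool,
                 is_annex_ii_product: bool,
                 needs_third_party_assessment: bool,
                 annex_iii_use_case: bool,
                 poses_significant_risk: bool) -> bool:
    """Simplified paraphrase of Article 6's two routes into the category."""
    # Route 1: a safety component of (or itself) an Annex II product that must
    # undergo third-party conformity assessment before market placement.
    product_route = ((is_safety_component or is_annex_ii_product)
                     and needs_third_party_assessment)
    # Route 2: an Annex III use case posing a substantial threat to health,
    # safety, fundamental rights, or the environment.
    annex_iii_route = annex_iii_use_case and poses_significant_risk
    return product_route or annex_iii_route

# Example: an Annex III use case judged to pose a substantial threat.
print(is_high_risk(False, False, False, True, True))  # True
```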

“The provider verifies that the established quality management system is in compliance with the requirements of Article 17.” (Annex VI: 2)

In regard to quality management, the provider of the high-risk AI system must ensure that their quality management system meets all the criteria laid out in Article 17. This implies that an established Quality Management System (QMS) must be in place, and it is the provider’s responsibility to ensure compliance.

“AI systems specifically referred to under points 1 to 8a stand for critical use cases and are each considered to be high-risk AI systems pursuant to Article 6(2), provided that they fulfil the criteria set out in that Article.” (Annex III)

Additionally, specific AI systems mentioned in Annex III are considered high-risk if they fulfill the specific criteria detailed for each use case in Article 6(2). This Annex elaborates on a broad spectrum of sectors such as biometrics, migration, asylum and border control management, and the administration of justice and democratic processes.

“The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 51(1).” (Annex VIII: Section A)

With regard to registration, vital information must be maintained and kept up to date for high-risk AI systems registered in accordance with Article 51(1). This includes the trade name of the AI system, the name and contact details of the provider, the AI system’s intended purpose, its components, functions, and logic, the system’s current status, and certification details where applicable.
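
Purely as an illustration of the kind of record Annex VIII, Section A describes, a provider could keep registration data in a structure like the following; the field names and example values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RegistrationRecord:
    """Illustrative subset of the information Annex VIII, Section A requires
    to be registered and kept up to date; field names are our own."""
    trade_name: str
    provider_name: str
    provider_contact: str
    intended_purpose: str
    status: str                           # e.g. "on the market", "withdrawn"
    certificate_id: Optional[str] = None  # where a certificate was issued

record = RegistrationRecord(
    trade_name="ExampleVision 2.0",
    provider_name="Example Provider B.V.",
    provider_contact="compliance@example.eu",
    intended_purpose="Access control at company premises",
    status="on the market",
)
```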

In conclusion, these steps ensure that high-risk AI systems comply with the rigorous requirements set out in the AI Act, which include classification, compliance through an established quality management system, critical use case evaluation, and mandatory registration. The goal is to maintain a high degree of transparency, accountability, and safety.

Gist 3

“The AI system shall be considered a high-risk AI system where it is intended to be used as a safety component of a product or is itself a product covered by the Union harmonisation legislation listed in Annex II…” (Article 6)

As stated in Article 6, an AI system is categorized as “high-risk” if it is either intended to be used as a safety component of a product, or if it is itself a product covered by the Union harmonisation legislation listed in Annex II of the Act.

“The AI system, including its programming, shall not pose risks to the health and safety of persons or of property that are unacceptable when used under the conditions laid down in the Union harmonisation legislation listed in Annex II.” (Article 8)

This quote from the Act obliges an AI system classified as high-risk not to pose unacceptable risks to personal health and safety or to property when operated under the conditions specified in the Union harmonisation legislation listed in Annex II.

“…high-risk AI systems, and to the conformity of such systems with the requirements of this Regulation, with the exception of any relevant subsequent requirements which might be imposed by individual Union harmonisation legislation.” (Article 9)

From Article 9, it is clear that the conformity of high-risk AI systems with the requirements of the AI Act is paramount. The Act also anticipates alignment with any relevant subsequent requirements imposed by individual Union harmonisation legislation.

“AI systems intended to be used as a safety component of a product, or which are themselves products, covered by the Union harmonisation legislation listed in Annex II.” (Article 10)

This excerpt specifically stipulates that any AI system intended to be used as a safety component of a product, or any AI system that itself constitutes a product, and which is covered under the legislation outlined by Annex II, falls under the purview of the Act.

“Member States shall lay down the rules on penalties applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are implemented.” (Article 71)

Article 71 requires Member States to lay down rules on penalties applicable to infringements of the AI Act. Furthermore, Member States must take all measures necessary to ensure that those rules are implemented.

“The provider verifies that the established quality management system is in compliance with the requirements of Article 17.” (Annex VI)

According to the Act’s Annex VI, it is the responsibility of the provider of the high-risk AI system to verify that the quality management system is in compliance with all requirements outlined in Article 17 of the Act.

“…ensure that the AI system’s design and development process, as well as its post-market monitoring, is in line with the stipulations of the referenced Article 61 and provided technical documentation.” (Annex VI)

This statement from Annex VI means that it is mandatory to verify that the AI system’s design and development process, as well as its post-market monitoring, is in line with Article 61 of the Act and with the technical documentation provided.

“The application of the provider shall include: … a list of AI systems covered under the same quality management system; the technical documentation for each AI system covered under the same quality management system; the documentation concerning the quality management system which shall cover all the aspects listed under Article 17; a description of the procedures in place to ensure that the quality management system remains adequate and effective.” (Annex VII, 3.1)

As Annex VII states, several components must be included in the provider’s application: a comprehensive list of AI systems covered by the same quality management system, the technical documentation for each of those AI systems, documentation of the quality management system covering all the aspects listed under Article 17, and a description of the procedures in place to ensure that the quality management system remains adequate and effective.
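
To visualise the four required components as one package, here is a hypothetical schema; Annex VII prescribes the content of the application, not any particular data structure, and the names below are ours.

```python
from dataclasses import dataclass, field

@dataclass
class QmsApplication:
    """Hypothetical schema for the Annex VII, point 3.1 application contents."""
    covered_ai_systems: list[str] = field(default_factory=list)
    technical_documentation: dict[str, str] = field(default_factory=dict)  # system -> doc reference
    qms_documentation: str = ""    # must cover all the aspects listed under Article 17
    adequacy_procedures: str = ""  # how the QMS remains adequate and effective

    def is_complete(self) -> bool:
        """Every covered system needs technical documentation, and both QMS
        documents must be present, before the application can be submitted."""
        return (bool(self.covered_ai_systems)
                and all(s in self.technical_documentation for s in self.covered_ai_systems)
                and bool(self.qms_documentation)
                and bool(self.adequacy_procedures))
```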

“In examining the technical documentation, the notified body may require that the provider supplies further evidence or carries out further tests so as to enable a proper assessment of conformity of the AI system with the requirements set out in Title III, Chapter 2.” (Annex VII, 4.4)

Annex VII also states that the notified body, which is responsible for examining the technical documentation, may ask for additional evidence from the provider, such as further testing, to ensure that the AI system complies with the requirements outlined in Title III, Chapter 2.

“Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended purpose shall be approved by the notified body which issued the EU technical documentation assessment certificate.” (Annex VII, 4.7)

Lastly, any update or change to the AI system that may impact its compliance with the requirements or its intended purpose has to be approved by the notified body that issued the AI system’s EU technical documentation assessment certificate.
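
A toy gate capturing this change-control rule; whether a change “could affect” compliance is a judgment the Act leaves to the provider and the notified body, and this function merely encodes the approval requirement once that judgment is made.

```python
def may_deploy_change(affects_compliance: bool,
                      affects_intended_purpose: bool,
                      notified_body_approved: bool) -> bool:
    """Toy gate mirroring Annex VII, point 4.7: a change that could affect
    compliance or the intended purpose needs approval from the notified body
    that issued the EU technical documentation assessment certificate."""
    if affects_compliance or affects_intended_purpose:
        return notified_body_approved
    return True  # other changes fall outside this particular approval duty

# Example: a change to the system's intended purpose without approval is blocked.
print(may_deploy_change(False, True, False))  # False
```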

Gist 4

High-risk AI systems shall comply with the requirements established in this Chapter. (Article 8)

This provision states that AI systems classified as ‘high-risk’ under the Act must adhere to the requirements established in that Chapter.

A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. (Article 9)

From this, we can see that one of the primary requirements is to establish a risk management system for high-risk AI systems. This system is not a one-off measure; it must span the entire lifecycle of the AI system.

The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date. (Article 11)

This quote underlines the obligation to prepare proper technical documentation before the system is placed on the market or put into service, and to keep it consistently updated thereafter.
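
As a minimal sketch of the “drawn up before, kept up to date after” duty, a provider could track documentation revisions like this; the class and field names are our own, not terminology from the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TechnicalDocumentation:
    """Hypothetical record for the Article 11 duty: drawn up before market
    placement, kept up to date afterwards. Field names are our own."""
    system_name: str
    versions: list[tuple[str, date]] = field(default_factory=list)

    def add_version(self, version: str, drawn_up: date) -> None:
        # Each revision is recorded rather than overwritten, so the
        # documentation stays both current and traceable.
        self.versions.append((version, drawn_up))

    def ready_for_market(self) -> bool:
        # At least one version must exist before placing the system on the market.
        return bool(self.versions)
```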

The AI systems specifically referred to under points 1 to 8a stand for critical use cases and are each considered to be high-risk AI systems pursuant to Article 6(2), provided that they fulfil the criteria set out in that Article: (Annex III)

Here, AI systems denoted under points 1 to 8a are regarded as high-risk under the act, given that they satisfy the criteria laid out in Article 6(2).

For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following procedures; (a) the conformity assessment procedure based on internal control referred to in Annex VI; or (b) the conformity assessment procedure based on assessment of the quality management system and of the technical documentation, with the involvement of a notified body, referred to in Annex VII; (Article 43)

We can grasp from this provision that, for high-risk AI systems listed in point 1 of Annex III, where the provider has applied harmonised standards or common specifications, the provider may opt either for the internal-control procedure of Annex VI or for the Annex VII procedure, in which a notified body assesses the quality management system and the technical documentation.
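
A simplified encoding of this choice, assuming the Annex III point 1 case discussed in the quote; it ignores the other cases Article 43 covers, and the enum and function names are ours.

```python
from enum import Enum

class ConformityRoute(Enum):
    INTERNAL_CONTROL = "Annex VI"  # conformity assessment based on internal control
    NOTIFIED_BODY = "Annex VII"    # QMS + technical documentation, notified body involved

def available_routes(applied_harmonised_standards: bool) -> list[ConformityRoute]:
    """Simplified reading of Article 43 for systems in point 1 of Annex III:
    a provider that has applied harmonised standards (Article 40) or common
    specifications (Article 41) may choose either route; otherwise, on our
    reading, the notified-body route of Annex VII applies."""
    if applied_harmonised_standards:
        return [ConformityRoute.INTERNAL_CONTROL, ConformityRoute.NOTIFIED_BODY]
    return [ConformityRoute.NOTIFIED_BODY]

print(available_routes(applied_harmonised_standards=True))
```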

The provider shall draw up a written machine readable, physical or electronic EU declaration of conformity for each high-risk AI system and keep it at the disposal of the national supervisory authority and the national competent authorities for 10 years after the AI high-risk system has been placed on the market or put into service. (Article 48)

In conjunction with this, providers must draw up a machine-readable EU declaration of conformity for each high-risk AI system and keep it at the disposal of the national supervisory authority and the national competent authorities for ten years after the system is placed on the market or put into service.
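
For illustration only, the ten-year retention window could be tracked as follows; this is a hypothetical record, and leap-day edge cases and the alternative “put into service” trigger are ignored for brevity.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeclarationOfConformity:
    """Hypothetical record for the Article 48 duty; field names are ours."""
    system_name: str
    placed_on_market: date

    def retain_until(self) -> date:
        # Article 48: keep the declaration at the authorities' disposal for
        # 10 years after the system is placed on the market.
        return self.placed_on_market.replace(year=self.placed_on_market.year + 10)

doc = DeclarationOfConformity("ExampleVision 2.0", date(2025, 3, 1))
print(doc.retain_until())  # 2035-03-01
```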

Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2) the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60, in accordance with Article 60(2); (Article 51)

Finally, any high-risk AI system referred to in Article 6(2) must be registered in the EU database under Article 60 before being placed on the market or put into service; this obligation falls on the provider or, where applicable, the authorised representative.

All these provisions collectively form the required steps for high-risk AI systems under the AI Act, from classification to market deployment: classification as high-risk, compliance with the mandatory requirements, maintenance of proper and consistently updated technical documentation, completion of a conformity assessment procedure, and registration in the EU database.