Can you describe the responsibilities of entities making use of AI systems in a professional capacity under the AI Act?

Gist 1

Providers of high-risk AI systems shall: (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service; (b) have a quality management system in place which complies with Article 17; (c) draw-up and keep the technical documentation of the high-risk AI system referred to in Article 11; … (e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service, in accordance with Article 43; (f) comply with the registration obligations referred to in Article 51; (g) take the necessary corrective actions as referred to in Article 21 and provide information in that regard;

(Article 16)

This Article defines the obligations of providers of high-risk AI systems, which include ensuring that systems comply with the requirements before they are placed on the market or put into service, having a quality management system in place (Article 17), drawing up and keeping technical documentation (Article 11), carrying out the relevant conformity assessment procedure before placing the system on the market or putting it into service (Article 43), registering the system (Article 51), and taking the necessary corrective actions (Article 21).
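For readers tracking these duties operationally, a provider could keep a simple pre-market checklist along the following lines. This is only an illustrative sketch: the class, field, and helper names are invented rather than drawn from the Act.

```python
from dataclasses import dataclass, fields

@dataclass
class ProviderChecklist:
    """Illustrative pre-market checklist mirroring the Article 16 obligations quoted above."""
    chapter2_requirements_met: bool = False     # (a) compliance with Chapter 2 requirements
    quality_management_system: bool = False     # (b) QMS in line with Article 17
    technical_documentation: bool = False       # (c) documentation referred to in Article 11
    conformity_assessment_done: bool = False    # (e) assessment in accordance with Article 43
    registered_in_eu_database: bool = False     # (f) registration per Article 51
    corrective_action_process: bool = False     # (g) corrective actions per Article 21

def ready_for_market(checklist: ProviderChecklist) -> bool:
    """Return True only when every tracked obligation has been satisfied."""
    return all(getattr(checklist, f.name) for f in fields(checklist))

# Example: a system missing its conformity assessment is not ready.
print(ready_for_market(ProviderChecklist(chapter2_requirements_met=True)))  # False
```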

Any distributor, importer, deployer or other third-party shall be considered a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: (a) they put their name or trademark on a high-risk AI system already placed on the market or put into service;

(Article 28)

This excerpt from Article 28 establishes that any distributor, importer, deployer, or other third party bears the same obligations as the provider of a high-risk AI system if they put their name or trademark on a high-risk AI system already placed on the market or put into service.

Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2) the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60, in accordance with Article 60(2);

(Article 51)

This Article mandates that providers (or, where applicable, their authorised representatives) must register high-risk AI systems in the EU database before those systems are placed on the market or put into service.

Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article.

(Article 29)

Article 29 requires deployers of high-risk AI systems to take appropriate technical and organizational measures to ensure these systems are used in accordance with the instructions of use that accompany them.

Gist 2

Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems. (Article 29)

This indicates that deployers are responsible for ensuring that high-risk AI systems are used in line with the instructions of use accompanying those systems. In practice, these entities must put in place technical and organizational measures that support appropriate and responsible use of the AI technology.

Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions of use and when relevant, inform providers in accordance with Article 61. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk… they shall, without undue delay, inform the provider or distributor and relevant national supervisory authorities and suspend the use of the system. (Article 29)

In addition to ensuring use in line with the instructions, deployers must monitor the operation of these high-risk AI systems. If they have reason to consider that use in accordance with the instructions may result in the system presenting a risk, they must promptly inform the relevant parties, including the provider or distributor and the national supervisory authorities, and suspend use of the system.
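Purely as an illustration of how this duty could be wired into a deployer's operations, the sketch below notifies the relevant parties and suspends the system when a risk is suspected. Every name in it (the `handle_suspected_risk` helper, the contact roles, the print-based notifications) is an assumption, not something prescribed by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class RiskReport:
    system_id: str
    description: str
    notified: list[str] = field(default_factory=list)

def handle_suspected_risk(system_id: str, description: str,
                          contacts: dict[str, str]) -> RiskReport:
    """Illustrative Article 29-style workflow: inform the provider or
    distributor and the national supervisory authority, then suspend use."""
    report = RiskReport(system_id, description)
    for role in ("provider", "national_supervisory_authority"):
        address = contacts.get(role)
        if address:
            # In a real deployment this would send a formal notification.
            print(f"Notifying {role} at {address}: {description}")
            report.notified.append(role)
    suspend_system(system_id)
    return report

def suspend_system(system_id: str) -> None:
    """Placeholder for whatever mechanism takes the system out of use."""
    print(f"System {system_id} suspended pending clarification.")
```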

Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent that such logs are under their control and are required for ensuring and demonstrating compliance with this Regulation, for ex-post audits of any reasonably foreseeable malfunction, incidents or misuses of the system, or for ensuring and monitoring for the proper functioning of the system throughout its lifecycle. Without prejudice to applicable Union or national law, the logs shall be kept for a period of at least six months. (Article 29)

Deployers are required to keep the logs automatically generated by high-risk AI systems, where those logs are under their control, for at least six months. These logs serve to demonstrate compliance with the Regulation, support ex-post audits of foreseeable malfunctions, incidents, or misuse, and enable monitoring of the system's proper functioning throughout its lifecycle. In essence, keeping these logs supports transparency and ongoing oversight over the lifecycle of these high-risk AI systems.
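As a minimal sketch of how a deployer might enforce the six-month retention floor, assuming logs are stored as timestamped records, the snippet below flags which entries have already been kept for the required period. The function and field names are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Article 29 sets a retention floor of at least six months (roughly 183 days here).
MINIMUM_RETENTION = timedelta(days=183)

def deletable_entries(log_entries: list[dict], now: datetime | None = None) -> list[dict]:
    """Return the entries that have already been kept past the retention floor.

    Each entry is assumed to carry a timezone-aware 'timestamp' field. Anything
    younger than six months must be kept; older entries *may* be removed,
    subject to other applicable Union or national law.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - MINIMUM_RETENTION
    return [entry for entry in log_entries if entry["timestamp"] < cutoff]
```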

Deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies or undertakings referred to in Article 51(1a)(b) shall comply with the registration obligations referred to in Article 51. (Article 29)

This provision stipulates that certain deployers also have a legal obligation to comply with the registration requirements set out in Article 51. It applies in particular to deployers that are public authorities or Union institutions, bodies, offices, and agencies, among others. Registration is part of the accountability measures under the AI Act intended to ensure the traceability and oversight of high-risk AI systems.

Prior to putting a high-risk AI system… into use, deployers shall conduct an assessment of the systems’ impact in the specific context of use. This assessment shall include, at a minimum, a clear outline of the intended purpose for which the system will be used; a clear outline of the intended geographic and temporal scope of the system’s use; categories of natural persons and groups likely to be affected by the use of the system… (Article 29a)

Before deploying a high-risk AI system, entities must carry out an assessment of the system's impact in the specific context of use. At a minimum, this assessment covers the system's intended purpose, the geographic and temporal scope of its use, and the categories of natural persons and groups likely to be affected. This reflects the EU AI Act's focus on ensuring that the use of AI does not compromise citizens' rights, health, and safety.
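For illustration only, the minimum content listed above could be captured in a small record like the one below; the class name, field names, and example values are invented, not terminology from the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """Illustrative record of the minimum Article 29a assessment content."""
    system_name: str
    intended_purpose: str               # what the system will be used for
    geographic_scope: str               # where it will be used
    temporal_scope: tuple[date, date]   # from/until when it will be used
    affected_groups: list[str]          # categories of natural persons and groups likely affected

assessment = ImpactAssessment(
    system_name="CV screening assistant",
    intended_purpose="Shortlisting applications for clerical roles",
    geographic_scope="Branch offices in one Member State",
    temporal_scope=(date(2025, 1, 1), date(2025, 12, 31)),
    affected_groups=["job applicants", "existing employees seeking transfer"],
)
```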

Any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Title III, Chapter 3 of this Regulation. (Article 63)

Article 63 makes clear that the operators identified in Title III, Chapter 3 of the AI Act are to be understood as "economic operators" within the meaning of Regulation (EU) 2019/1020. Entities making use of AI systems in a professional capacity are therefore covered by the market surveillance and control measures that apply to the EU market under that Regulation.

National supervisory authorities may: (a) carry out unannounced on-site and remote inspections of high-risk AI systems; (b) acquire samples related to high-risk AI systems, including through remote inspections, to reverse-engineer the AI systems and to acquire evidence to identify non-compliance. (Article 63)

Article 63 also specifies the powers of national supervisory authorities. They may carry out unannounced on-site and remote inspections of high-risk AI systems and acquire samples of those systems, including to reverse-engineer them and gather evidence of non-compliance. Entities using such systems in a professional capacity must accordingly accommodate these inspections and cooperate with the authorities, which calls for transparency in their operations.

Where, in the course of that evaluation, the national supervisory authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period. (Article 65)

Article 65 places a responsibility on operators to take corrective action when their AI systems are found not to comply with the AI Act's requirements and obligations. Such action may consist of bringing the AI system into compliance, withdrawing it from the market, or recalling it within a reasonable period.

The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the market throughout the Union. (Article 65)

Again in Article 65, it is highlighted that operators have a responsibility to ensure necessary corrective action is taken for all related AI systems they’ve made available on the market throughout the Union.

Where, having performed an evaluation under Article 65, the national supervisory authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a serious risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights, or the environment or the democracy and rule of law or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk. (Article 67)

Article 67 prescribes that even where an AI system is compliant with the Regulation, if it presents a serious risk to health, safety, fundamental rights, the environment, democracy and the rule of law, or other aspects of public interest protection, the operator must take all appropriate measures to ensure the system no longer presents that risk. Here, the operator's responsibility extends beyond mere compliance and involves ensuring the AI system does not present potential harm or risks.

Gist 3

Providers of high-risk AI systems shall: […] (Article 16)

Under Article 16 of the AI Act, providers of high-risk AI systems carry a number of responsibilities. These include ensuring their systems comply with the requirements of Chapter 2 before placing them on the market or putting them into service, having a quality management system in place, drawing up and keeping technical documentation and logs, and ensuring the relevant conformity assessment procedure is carried out before the system is launched. The Article also requires providers to take corrective action where necessary, demonstrate the system's conformity upon request, and comply with the registration obligations. Additional obligations cover human oversight measures and the handling of bias, clarity about the input data used and the risk of misuse, and adherence to accessibility requirements.

Where a high-risk AI system related to products to which the legal acts listed in Annex II, section A, apply, is placed on the market or put into service together with the product manufactured in accordance with those legal acts and under the name of the product manufacturer, the manufacturer of the product shall take the responsibility of the compliance of the AI system with this Regulation and, as far as the AI system is concerned, have the same obligations imposed by the present Regulation on the provider. (Article 24)

Article 24 specifies the duties of product manufacturers when a high-risk AI system related to their products is placed on the market or put into service together with the product and under the manufacturer's name. In that case, the manufacturer assumes responsibility for the AI system's compliance with the AI Act and bears the same obligations as a provider under Article 16.

Prior to making their systems available on the Union market, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union. (Article 25)

Article 25 sets out requirements for providers established outside the Union. Before making their systems available on the Union market, they must appoint, by written mandate, an authorised representative established in the Union. This representative is responsible for verifying that the necessary compliance procedures have been followed, that the technical documentation is available, and that the system's logs are accessible if a national supervisory authority requests them. The representative's mandate also requires cooperation with national supervisory authorities in any action taken to reduce and mitigate the risks posed by the system, as well as ensuring the accuracy of the information provided for registration obligations.

Before placing a high-risk AI system on the market, importers of such system shall ensure that such a system is in conformity with this Regulation […] (Article 26)

In Article 26, the AI Act identifies the responsibilities of importers of high-risk AI systems. Importers must verify that the system complies with the Regulation before it is placed on the market, including that the conformity assessment procedure has been carried out, the technical documentation has been drawn up, and the system bears the required conformity marking. Importers are responsible for ensuring that the system does not present a risk within the meaning of Article 65(1), and must report non-compliance or risk to the market surveillance authorities and the provider. They are also required to indicate their own contact details on the system and to ensure appropriate storage and transport conditions. Finally, they must cooperate with national competent authorities and provide all necessary information and documentation upon request.

Gist 4

In the EU AI Act, the responsibilities are clearly described for those entities classified as ‘providers’ and ‘deployers’.

‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. (Article 3(2))

This clarifies that a 'provider' is an entity that develops an AI system, or has one developed, with a view to placing it on the market or putting it into service under its own name or trademark. This can include a broad range of individuals and organizations, including public authorities and agencies, and applies whether the system is made available for payment or free of charge.

‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity. (Article 3(4))

The Act defines ‘deployers’ as entities that implement AI systems for any purpose other than personal non-professional activities. This highlights that any individual or organization, including public bodies, can be classified as a ‘deployer’ under the Act if they utilize an AI system within their operations.

Given the Act's definitions, entities falling under the 'provider' and 'deployer' categories bear responsibility for ensuring that the AI systems they develop, market, or use comply with the relevant legal requirements. This may include complying with transparency and documentation requirements, adhering to recognized principles such as fairness and non-discrimination, and more. However, the exact responsibilities vary depending on the specific rules applicable to the type of AI system concerned, its intended use, and other contextual factors.

Further responsibilities and obligations for providers and deployers are distributed throughout the Act. For a comprehensive understanding, a thorough review of the AI Act is recommended.