Providers of high-risk AI systems shall:
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service;
(b) have a quality management system in place which complies with Article 17;
(f) comply with the registration obligations referred to in Article 51; (Article 16)
Article 16 makes clear that providers of high-risk AI systems face rigorous obligations. Before their AI systems are placed on the market or put into service, providers must ensure compliance with the requirements set out in Chapter 2, which lays down the substantive standards high-risk AI systems must meet. Additionally, providers must have a quality management system that complies with Article 17, and they must comply with the registration obligations referred to in Article 51.
Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems. (Article 29)
Article 29 emphasizes the responsibilities of those who deploy high-risk AI systems. Deployers must take appropriate technical and organisational measures to ensure they use AI systems in line with the accompanying instructions of use, which makes those instructions a vital tool in the regulation of AI use.
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2) the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60, in accordance with Article 60(2); (Article 51)
Echoing Article 16, Article 51 reinforces the registration obligation: before a high-risk AI system is placed on the market or put into service, the provider (or, where applicable, the authorised representative) must register it in the EU database referred to in Article 60.
In conclusion, the AI Act contains fairly comprehensive regulation of the use and deployment of high-risk AI systems. Obligations fall on both providers and deployers, documentation and adherence to specified standards are required, and the system must be registered in the EU database before it can be put to use. Together these steps amount to clear regulatory measures.
This Regulation applies to: (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (b) deployers of AI systems that have their place of establishment or who are located within the Union; (c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of public international law or the output produced by the system is intended to be used in the Union; (Article 2)
The AI Act applies to the providers who create AI systems and to the deployers who implement them in their services, covering both the development (providers) and the use (deployers) of AI systems.
This Regulation shall not apply to research, testing and development activities regarding an AI system prior to this system being placed on the market or put into service, provided that these activities are conducted respecting fundamental rights and the applicable Union law. The testing in real-world conditions shall not be covered by this exemption. (Article 2)
While providers are regulated under the Act, research, testing and development activities conducted before a system is placed on the market or put into service are exempt, as long as they respect fundamental rights and applicable Union law. Testing in real-world conditions, however, is not covered by this exemption.
Providers and, where deployers have identified a serious incident, deployers of high-risk AI systems placed on the Union market shall report any serious incident of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the national supervisory authority of the Member States where that incident or breach occurred. (Article 62)
This part indicates that both providers and deployers of high-risk AI systems are required to report any serious incident involving a breach of the obligations under Union law protecting fundamental rights to the competent national supervisory authority. It signifies that the regulation extends into the deployment phase of AI systems.
National supervisory authorities shall on an annual basis notify the AI Office of the serious incidents reported to them in accordance with this Article. (Article 62)
National supervisory authorities must, in turn, notify the AI Office annually of the serious incidents reported to them, which further stresses the Act's focus on the deployment phase of AI systems.
After a comprehensive assessment of the AI Act, the conclusion is that the Act regulates both the use and the deployment of AI systems in a proportionate and balanced manner. The intensity of the regulation is tied to the level of risk the AI system poses. The Act appears to be drafted with an awareness of the entire lifecycle of AI systems, from development to deployment, and is generally concerned with limiting potential harm or abuse at all stages. Even so, some more specific elements of the Act do emphasize deployment, such as the requirements around reporting serious incidents once a system is in use.
This Regulation applies to: providers placing on the market or putting into service AI systems in the Union; deployers of AI systems that have their place of establishment or who are located within the Union; (Article 2)
This part of Article 2 specifies that the Act applies to both providers and deployers of AI systems within the European Union. The term 'providers' refers to those who develop AI systems and intend to place them on the market or put them into service. The term 'deployers' refers to those who use AI systems under their authority. This suggests that the Act regulates both the provision (creation, development, and release) and the deployment (use) of AI models in the European Union.
'provider' means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge; 'deployer' means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity; (Article 3)
In Article 3, the Act provides explicit definitions of 'provider' and 'deployer', which further emphasizes that both the development and the use of AI systems in the Union are regulated. It clarifies that any entity developing an AI system with a view to placing it on the EU market or putting it into service, whether for payment or free of charge, is subject to the provisions of the Act. Similarly, any entity deploying an AI system under its authority, for professional purposes and excluding personal non-professional activities, is also covered by the regulation.
A general description of the AI system including: Its intended purpose, the name of the provider and the version of the system reflecting its relation to previous and, where applicable, more recent, versions in the succession of revisions; the nature of data likely or intended to be processed by the system and, in the case of personal data, the categories of natural persons and groups likely or intended to be affected; how the AI system can interact or can be used to interact with hardware or software, including other AI systems, that are not part of the AI system itself, where applicable; (Annex IV, Section 1)
This part of Annex IV requires comprehensive documentation of the AI system's intended purpose, the data it is likely or intended to process, and its interaction with external hardware or software. All this suggests close regulation of deployed AI systems, ensuring clarity over their intended uses and over any updates or iterations following their initial deployment.
In conclusion, the Act does not explicitly regulate one aspect (provision or deployment) more than the other. Both providers and deployers have stated responsibilities and must adhere to the specified regulations. It has, however, put in place measures to ensure transparency and accountability, which suggests a keen focus on the deployment and use of AI systems.
The following artificial intelligence practices shall be prohibited: (a) the placing on the market, putting into service or use of an AI system… (Article 5)
This sets out clear prohibitions under the AI Act, which affect not only the providers but also the users or deployers of AI systems. The prohibitions restrict the use of certain types of AI systems, such as those deploying subliminal techniques or those used for social scoring. Users and deployers must therefore be vigilant about the kind of AI systems they are utilizing.
Providers of high-risk AI systems shall: (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service; (Article 16)
Article 16 imposes rigorous obligations on providers. Before placing high-risk AI systems on the market or putting them into service, providers are tasked with ensuring compliance with the specified requirements. This implies that providers bear substantial regulatory responsibility.
Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article. (Article 29)
Article 29 directly specifies some of the obligations that deployers of high-risk AI systems have. This suggests that the use of AI models, in particular high-risk ones, is also significantly regulated under the AI Act.
High-risk AI systems and foundation models which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union in accordance with Regulation (EU) 1025/2012 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title or Article 28b, to the extent those standards cover those requirements. (Article 40)
Article 40 establishes a presumption of conformity: high-risk AI systems and foundation models that conform to published harmonised standards are presumed to meet the requirements of Chapter 2 of that Title, to the extent those standards cover them. This shows how the AI Act anchors its robust regulation of high-risk AI models in concrete technical standards.
Notifying authorities shall notify the Commission and the other Member States of any subsequent relevant changes to the notification. (Article 32)
Under Article 32, notifying authorities must keep the Commission and the other Member States informed of any relevant changes to notifications. This indicates continuous oversight of the conformity-assessment infrastructure that supports the deployment and use of AI models.
The Commission shall make publicly available the list of the bodies notified under this Regulation, including the identification numbers that have been assigned to them and the activities for which they have been notified. The Commission shall ensure that the list is kept up to date. (Article 35)
Article 35 stresses the importance of transparency in the application of the Act: the Commission must publicly disclose, and keep up to date, the list of notified bodies. This public disclosure supports accountability in the conformity-assessment process that high-risk AI systems pass through before and during deployment.
When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account. (Article 81)
Article 81 highlights that deploying AI in certain contexts, such as where the system serves as a safety component, attracts further regulation. This implies that the usage of AI models in safety-critical contexts is regulated especially heavily under the AI Act.
The EU AI Act imposes rigorous regulation on both the providers and the deployers of AI models, especially of high-risk AI systems. The Act places substantial responsibility on providers to ensure their products' compliance before making them available, while deployers must adhere to the instructions of use, and notifying authorities must keep the Commission and Member States informed of relevant changes, adding to the regulation's comprehensiveness. Notably, public transparency is emphasized for all involved parties. Both providing and deploying AI models are therefore heavily regulated under the AI Act, implying a balanced approach to ensure accountability and safeguard the public interest.