Does the AI Act regulate ML models in embedded devices that process public data (e.g., from cameras)?

Gist 1

This Regulation applies to:

(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of a public international law or the output produced by the system is intended to be used in the Union;

(Article 2)

This article establishes that the AI Act applies to AI systems placed on the market or put into service in the EU, regardless of the provider’s location. Machine learning (ML) models embedded in devices that process data (e.g., from cameras) therefore fall under the Act, provided the devices operate within the EU or their output is intended to be used there.

‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments; (Article 3)

Article 3 defines AI systems broadly, and this definition appears to extend to ML models in embedded devices processing public data.

Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II;

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

(Article 6)

However, not all AI systems are regulated alike: those classified as “high-risk” must meet specific requirements. Whether an ML model in an embedded device processing public data counts as high-risk depends on two cumulative conditions: it must be used as a safety component of a product (or itself be a product) covered by the Union harmonisation legislation listed in Annex II, and that product must be required to undergo a third-party conformity assessment.
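To make the two-condition test concrete, here is a minimal Python sketch; the `Product` fields and names are illustrative assumptions, not terms defined in the Act, and real classification would require legal analysis of Annex II.

```python
from dataclasses import dataclass

@dataclass
class Product:
    """Hypothetical description of a product embedding an ML model."""
    is_safety_component: bool           # used as a safety component of a product?
    is_annex_ii_product: bool           # the AI system itself a product under Annex II law?
    needs_third_party_assessment: bool  # third-party conformity assessment required?

def is_high_risk_under_article_6(p: Product) -> bool:
    """Article 6(1): conditions (a) and (b) must BOTH be fulfilled."""
    condition_a = p.is_safety_component or p.is_annex_ii_product
    condition_b = p.needs_third_party_assessment
    return condition_a and condition_b

# Example: a camera whose ML model is a safety component of an Annex II
# product requiring third-party conformity assessment -> high-risk.
camera = Product(is_safety_component=True,
                 is_annex_ii_product=False,
                 needs_third_party_assessment=True)
print(is_high_risk_under_article_6(camera))  # True
```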

Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article. (Article 29)

To the extent deployers exercise control over the high-risk AI system, they shall implement human oversight according to the requirements laid down in this Regulation. (Article 29)

Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions of use and when relevant, inform providers in accordance with Article 61. (Article 29)

Deployers of high-risk AI systems, which may include devices running ML models on public data, are therefore obliged to use the systems in accordance with their instructions of use, implement human oversight, and monitor their operation.

Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent that such logs are under their control and are required for ensuring and demonstrating compliance with this Regulation. (Article 29)

Deployers must also retain the logs that the system generates automatically, to the extent those logs are under their control, since they are needed to demonstrate compliance with the AI Act.
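The snippet below is a minimal sketch of deployer-side log retention for an embedded camera, assuming a JSON-lines format, daily file rotation, and a 180-day window; none of these specifics (paths, format, retention period) are prescribed by the Act.

```python
import json
import time
from datetime import date
from pathlib import Path

LOG_DIR = Path("/var/log/ai-camera")   # hypothetical on-device location
RETENTION_DAYS = 180                   # assumed window; the Act does not fix one here

def record_inference(event: dict) -> None:
    """Append one automatically generated inference record per JSON line,
    rotating into a new file each day so old files can be pruned whole."""
    LOG_DIR.mkdir(parents=True, exist_ok=True)
    event["timestamp"] = time.time()
    with open(LOG_DIR / f"{date.today().isoformat()}.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")

def prune_expired_logs() -> None:
    """Remove daily log files older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 24 * 3600
    for path in LOG_DIR.glob("*.jsonl"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
```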

In sum, the AI Act clearly applies to ML models embedded in devices that process public data; which obligations attach, however, depends on whether those models meet the definition of high-risk, so individual cases require further analysis.

Gist 2

This Regulation applies to: (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (b) deployers of AI systems that have their place of establishment or who are located within the Union; (c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of a public international law or the output produced by the system is intended to be used in the Union; (Article 2)

This means that the AI Act applies to providers and deployers of AI systems regardless of their geographical location, as long as the AI systems are placed on the market, put into service, or intended to be used in the Union. ML models in embedded devices, such as cameras, fall under this jurisdiction if they are provided or deployed by organizations within the Union, or by organizations in third countries when the system’s output is intended for use in the Union.

‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments; (Article 3)

The AI Act’s definition of an AI system covers machine-based systems designed to operate with varying levels of autonomy, and so encompasses ML models in embedded devices that generate outputs influencing physical or virtual environments. Embedded ML systems that process public data therefore fall within the scope of the AI Act.

Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. (Article 6)

Article 6 further provides that an AI system, such as an embedded device processing public data, can be classified as high-risk based on its intended use and on whether it is subject to a third-party conformity assessment before being placed on the market or put into service. This classification subjects such systems to additional requirements under the AI Act.

In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union and on international level, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI systems established within the Union. (Recital 10)

Recital 10 asserts that the AI Act’s regulations apply to providers and deployers of AI systems, which would include ML models operating within embedded systems. This ensures that regardless of the geographical location of the providers, any model running on a device within the Union is subject to regulation under the Act.

(da) AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities, for the purpose of detecting, recognising or identifying natural persons; (Annex III: High-risk AI systems referred to in Article 6(2) Point 7 (da))

The AI Act explicitly covers machine learning models in embedded devices processing public data where those models are used by or on behalf of competent authorities or Union agencies to monitor, surveil, or process data in the context of border management activities.

(b) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union agencies, offices or bodies in support of law enforcement authorities as polygraphs and similar tools, insofar as their use is permitted under relevant Union and national law; (Annex III: High-risk AI systems referred to in Article 6(2) Point 6(b))

The use of ML models in the context of law enforcement, even when embedded in devices, is likewise regulated under the AI Act. Such devices may draw on diverse data sources, including public ones, insofar as their use is permitted under relevant Union and national law.

In conclusion, the AI Act does regulate ML models in embedded devices that process public data. The extent of that regulation turns on location, intended use, high-risk classification, and the legal context in which the system is applied.

Gist 3

This Regulation applies to: (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (b) deployers of AI systems that have their place of establishment or who are located within the Union; (Article 2)

This indicates that the EU’s AI Act applies to both providers and deployers of AI systems. Providers placing an ML-equipped device (for instance, a camera) on the Union market fall under the Act regardless of where they are established; deployers fall under it if they are established or located within the Union.

‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments; (Article 3)

The AI Act’s definition of an AI system readily encompasses machine learning (ML) models. ML models embedded in devices that process public data from cameras qualify as AI systems under the Act, since they are machine-based systems capable of autonomously generating predictions or decisions.

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. (Article 52)

This requires providers to ensure that AI systems intended to interact with natural persons are transparent. If an ML model embedded in a device such as a camera is intended to interact with individuals, it must make clear to them, in a timely and intelligible manner, that they are interacting with an AI system, unless this is obvious from the context.
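As a toy illustration of that duty, a device that converses with people might emit a one-time disclosure before its first AI-generated reply; all names and message text below are hypothetical, not wording mandated by the Act.

```python
class CameraAssistant:
    """Hypothetical interactive front-end for an embedded camera's ML model."""

    def __init__(self):
        self._disclosed = False

    def respond(self, user_input: str) -> str:
        reply = self._run_model(user_input)
        if not self._disclosed:
            # Article 52-style disclosure: tell the person, in a timely and
            # clear manner, that they are interacting with an AI system.
            self._disclosed = True
            return "Notice: you are interacting with an AI system.\n" + reply
        return reply

    def _run_model(self, user_input: str) -> str:
        # Placeholder for the embedded ML model's actual inference call.
        return f"[model output for: {user_input!r}]"

assistant = CameraAssistant()
print(assistant.respond("Who entered the lobby?"))  # includes the disclosure
print(assistant.respond("And after that?"))         # disclosure already given
```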

Prior to putting a high-risk AI system as defined in Article 6(2) into use, with the exception of AI systems intended to be used in area 2 of Annex III, deployers shall conduct an assessment of the systems’ impact in the specific context of use. (Article 29a)

For high-risk AI systems, the Act mandates an assessment of the system’s impact in the specific context of use, performed by the deployer, before the system is put into use. Whether an ML model embedded in a device such as a camera falls within this high-risk category depends on the specific factors detailed in Article 6 and Annex III.

From the analysis of Annex III:

AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities, for the purpose of detecting, recognising or identifying natural persons; (Annex III, 7(da))

(f) AI systems intended to be used by or on behalf of law enforcement authorities or by Union agencies, offices or bodies in support of law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences or, in the case of Union agencies, offices or bodies, as referred to in Article 3(5) of Regulation (EU) 2018/1725; (Annex III, 6(f))

These fragments underline the breadth of the Act: it regulates AI systems processing public data, particularly those used to detect, recognise, or identify persons, for surveillance, or in support of law enforcement. Whether a given ML model in a public-data-processing embedded device such as a camera is covered, and to what degree, depends on the specific use case and objective.

In conclusion, the AI Act does regulate ML models in embedded devices that process public data (e.g., from cameras). However, the degree of regulation and the exact requirements vary with factors such as the system’s risk level and its mode of interaction with users.

Gist 4

This Regulation applies to:

(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;

(b) deployers of AI systems that have their place of establishment or who are located within the Union;

(c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of a public international law or the output produced by the system is intended to be used in the Union;

(ca) providers placing on the market or putting into service AI systems referred to in Article 5 outside the Union where the provider or distributor of such systems is located within the Union;

(cb) importers and distributors of AI systems as well as authorised representatives of providers of AI systems, where such importers, distributors or authorised representatives have their establishment or are located in the Union;

(cc) affected persons as defined in Article 3(8a) that are located in the Union and whose health, safety or fundamental rights are adversely impacted by the use of an AI system that is placed on the market or put into service within the Union. (Article 2)

Article 2 provides a broad scope for the AI Act, covering providers and deployers of AI systems within and outside the EU. This can include embedded devices that process public data.

Artificial intelligence system (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments. (Article 3)

Embedded devices using machine learning to process public data fall under the AI Act’s definition of an AI system.

The following artificial intelligence practices shall be prohibited:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, …

(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces …

(Article 5)

Article 5 prohibits certain AI practices outright, including the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces. This can apply directly to embedded devices processing public data, for example when facial recognition is run on camera feeds in public spaces.
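One way a deployer might operationalize that prohibition is a configuration guard that refuses to enable real-time remote biometric identification in publicly accessible spaces; the configuration keys below are assumptions for illustration, and Article 5 contains narrowly drawn law-enforcement exceptions that a real check would need to model.

```python
def validate_deployment(config: dict) -> None:
    """Reject configurations enabling an Article 5-prohibited practice.

    The config keys are hypothetical. A real system would need legal
    review, since Article 5 carries narrowly drawn exceptions.
    """
    if (config.get("realtime_biometric_id")
            and config.get("location_type") == "publicly_accessible"):
        raise ValueError(
            "Real-time remote biometric identification in publicly "
            "accessible spaces is prohibited under Article 5 of the AI Act."
        )

# A camera deployment with biometric identification disabled passes the check.
validate_deployment({"realtime_biometric_id": False,
                     "location_type": "publicly_accessible"})
```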

AI systems intended to be used by or on behalf of competent public authorities or by Union agencies, offices or bodies in migration, asylum and border control management to monitor, surveil or process data in the context of border management activities, for the purpose of detecting, recognising or identifying natural persons; (Annex III 7da)

AI systems intended to be used for biometric identification of natural persons, with the exception of those mentioned in Article 5;

AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; (Annex III 1a, aa)

Annex III classifies certain AI systems as high-risk, including those used for biometric identification of natural persons and those making inferences about personal characteristics from biometric data. This can encompass embedded devices processing public data, especially devices that perform biometric identification or infer characteristics of individuals from that data.

Those legal acts provide the basis for sustainable and responsible data processing, including where datasets include a mix of personal and nonpersonal data. This Regulation does not seek to affect the application of existing Union law governing the processing of personal data, including the tasks and powers of the independent supervisory authorities competent to monitor compliance with those instruments. (Recital 2b)

Recital 2b clarifies that the AI Act operates alongside existing Union law on data processing, whether the datasets involved are personal, non-personal, or a mix of both. Embedded devices processing public data must therefore comply with those instruments in addition to the AI Act.

Taken together, these Articles, Annexes, and Recitals indicate that the AI Act does regulate ML models used in embedded devices processing public data. Given the Act’s wide scope and its explicit mention of AI systems used for monitoring, surveillance, or data processing, particularly systems involving biometric information, ML models on embedded devices processing publicly accessible data would likely fall under its purview.