(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge; (Article 3)
If an individual trains an AI model on personal data on their own device, they could potentially be classified as a provider under the AI Act; however, the definition ties provider status to developing a system with a view to placing it on the market or putting it into service under one's own name, so purely personal development may not, by itself, trigger that status.
(10) ‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge; (Article 3)
If a trained model is made available on the market (shared, sold, etc.), it would fall under the Act's regulations. Under the quoted definition, supply in the course of a commercial activity qualifies whether it is in return for payment or free of charge.
- The following artificial intelligence practices shall be prohibited: (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm; (Article 5)
The provisions highlighted in Article 5 suggest that a model trained on personal data and placed on the market or put into service would be prohibited if it deploys subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, that materially distort behaviour and cause or are likely to cause significant harm.
For clarity regarding the application of the AI Act to AI models trained on personal data for personal use on personal devices, further analysis of Title III Chapter 2, Title IV, and Title IX is necessary, as these Titles specifically address the requirements for high-risk AI systems, transparency obligations, provisions on codes of conduct, and other general principles related to the protection of personal data rights.
However, we can summarize that if a trained AI model is placed on the market, or uses manipulative techniques that can distort a person's behaviour or cause harm, it falls under the regulations of the AI Act. If it is used only for personal purposes and not made available on the market, the Act does not provide clear restrictions.
AI systems intended to be used for biometric identification of natural persons, with the exception of those mentioned in Article 5; (Annex III.1)
AI systems intended to identify individuals based on their biometrics (such as facial recognition systems or fingerprint scanners, which may be present on personal devices) are considered high-risk under the AI Act if they fulfill the conditions provided in Article 6(2).
AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; (Annex III.1aa)
AI models in personal devices that process personal or biometric data to infer individuals’ personal characteristics, such as emotion recognition systems, will be deemed high-risk if they fulfill the conditions under Article 6(2).
- Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. (Article 6)
Any AI system that is intended to be used as a safety component of a product, or that is itself a product, covered by the Union harmonisation legislation listed in Annex II and required to undergo a third-party conformity assessment, is considered high-risk. This applies regardless of whether the AI system is placed on the market or put into service independently of the product. This classification is crucial, as it may affect the level of scrutiny accorded to such systems.
High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI systems is operating. Those logging capabilities shall conform to the state of the art and recognised standards or common specifications. (Article 12)
Article 12 suggests that high-risk AI systems, which might include AI models on personal devices trained on personal data, must be designed and developed with auto-recording or logging capabilities. The data logged will be critical in maintaining a traceable record of the system’s functioning and ensuring conformity to standards or specifications.
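To illustrate what such a logging capability might look like in practice, the sketch below records each inference event as a structured, timestamped entry while the system operates. It is a minimal illustration only, and the class and field names (EventLogger, log_inference, input_ref) are assumptions, not terminology drawn from the Act or any particular library.

```python
# Minimal sketch (illustrative only): automatic event logging in the spirit of
# Article 12. All names here (EventLogger, log_inference) are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO)


class EventLogger:
    """Records each operating event of the system as a structured log entry."""

    def __init__(self, system_id: str, model_version: str):
        self.system_id = system_id
        self.model_version = model_version

    def log_inference(self, input_ref: str, output_summary: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": self.system_id,
            "model_version": self.model_version,
            "input_ref": input_ref,          # a reference, not raw personal data
            "output_summary": output_summary,
        }
        logging.info(json.dumps(entry))


# Example usage: log one inference event while the system is operating.
logger = EventLogger(system_id="on-device-model-01", model_version="1.2.0")
logger.log_inference(input_ref="sample-0001", output_summary="class=low_risk")
```

Keeping the log entries structured (rather than free text) makes it easier to demonstrate traceability of the system's functioning after the fact.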
Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. (Article 52)
Providers of AI systems, particularly the ones that intend to interact with natural persons, must ensure clarity and transparency. Timely, intelligible information should be provided to the person exposed to such an AI system, ensuring proper interaction and transparent communication about the AI systems’ operation.
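As an illustration of such a transparency measure, the sketch below surfaces a disclosure notice before a user interacts with an AI system, unless the context already makes this obvious. The function name and notice wording are assumptions for illustration; the Act does not prescribe specific text.

```python
# Minimal sketch (illustrative only) of an Article 52-style disclosure.
# The notice text and function names are hypothetical assumptions.

AI_INTERACTION_NOTICE = (
    "You are interacting with an AI system. Its responses are generated "
    "automatically and may be inaccurate."
)


def start_session(user_display, context_makes_ai_obvious: bool = False) -> None:
    """Show the disclosure unless it is obvious from the context of use."""
    if not context_makes_ai_obvious:
        user_display(AI_INTERACTION_NOTICE)


# Example usage: print the notice in a console-based interface.
start_session(print)
```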
“This Regulation applies to: providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country.” (Article 2)
This provision affirms that the AI Act applies to all AI system providers operating within the Union, even those based in third countries. The implication here is that the Act will also regulate AI models in personal devices trained on personal data, assuming these are considered “AI systems” under the Act.
“The following artificial intelligence practices shall be prohibited: the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision.” (Article 5)
Article 5 prohibits practices in which an AI system materially distorts a person’s behaviour or impairs their ability to make an informed decision. If an AI model trained on personal data from a personal device is found to manipulate or distort behaviour in this way, it would fall under this prohibition.
“Software and data that are openly shared and where users can freely access, use, modify, and redistribute them, or modified versions thereof, can contribute to research and innovation in the market… Users are allowed to run, copy, distribute, study, change and improve software and data, including models by way of free and open-source licenses.” (Recital 12a)
Recital 12a emphasizes the contribution of open-source software in promoting research and innovation. Therefore, AI models trained on personal data that are shared openly and retain open-source licensing remain largely free from AI Act regulation.
“To foster the development and deployment of AI, especially by SMEs, startups, academic research but also by individuals, this Regulation should not apply to such free and open-source AI components except to the extent…” (Recital 12a)
However, this exemption is qualified: the Act still applies to free and open-source AI components to the extent that they are used as part of a high-risk AI system, or where they fall under specific provisions of the Regulation when placed on the market or put into service by a provider.
In summary, AI models in personal devices trained on personal data are regulated under the AI Act if they are considered AI systems. Practices that materially distort user behaviour are prohibited, and models released under free and open-source licenses are largely outside these regulations, provided they do not form part of high-risk systems or fall under specific provisions of the AI Act.
This Regulation applies to: (a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (b) deployers of AI systems that have their place of establishment or who are located within the Union; (c) providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of a public international law or the output produced by the system is intended to be used in the Union; (Article 2)
The AI Act applies to the providers and deployers of AI systems, including those whose products are used in the European Union, whether they are based within the EU or outside. This means that AI models trained on personal data in personal devices could fall under the Act’s regulations when they are put to use in the EU.
High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 as far as this is technically feasible according to the specific market segment or scope of application. (Article 10)
This provision sets out that high-risk AI systems, which may include those trained on personal data, must meet specific quality criteria for their training, validation, and testing datasets. This signals how seriously the AI Act takes data quality and governance, especially in relation to personal data.
Datasets shall take into account, to the extent required by the intended purpose or reasonably foreseeable misuses of the AI system, the characteristics or elements that are particular to the specific geographical, contextual behavioural or functional setting within which the high-risk AI system is intended to be used. (Article 10)
This additional provision from Article 10 requires datasets to reflect factors that could affect the AI system’s usage, including the geographical, contextual, behavioural, and functional setting in which it is intended to be used. This means that high-risk AI models trained with personal data on personal devices must have their datasets designed with these considerations in mind.
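As a rough illustration of how the Article 10 data quality and contextual-coverage requirements could be checked in practice, the sketch below runs a few simple dataset checks (missing values, duplicates, label balance, and coverage of intended geographical settings). The column names ("region", "label") and expected settings are assumptions for illustration, not criteria taken from the Act.

```python
# Minimal sketch (illustrative only) of automated dataset checks in the spirit
# of Article 10. Column names and expected regions are hypothetical.
import pandas as pd


def check_dataset(df: pd.DataFrame, expected_regions: set[str]) -> dict:
    return {
        # Completeness: share of missing values per column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Error vetting: duplicated records that could distort training.
        "duplicate_rows": int(df.duplicated().sum()),
        # Label balance: how skewed the target labels are.
        "label_distribution": df["label"].value_counts(normalize=True).to_dict(),
        # Contextual coverage: intended geographical settings absent from the data.
        "missing_regions": sorted(expected_regions - set(df["region"].unique())),
    }


# Example usage with a tiny in-memory dataset.
data = pd.DataFrame({
    "region": ["DE", "DE", "FR"],
    "label": [0, 1, 0],
    "feature": [0.2, None, 0.9],
})
print(check_dataset(data, expected_regions={"DE", "FR", "ES"}))
```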
The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up-to-date. (Article 11)
In terms of AI governance, Article 11 obliges providers to compile comprehensive technical documentation before a high-risk AI system is placed on the market or put into service, and to keep it up to date. This would apply to AI models trained on personal data in personal devices that qualify as high-risk, providing both transparency and accountability.
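As an illustration, a provider might keep a machine-readable documentation record alongside the model and refresh it whenever the system changes. The sketch below is a simplified assumption of what such a record could contain; it is not the Annex IV checklist, and the field names are hypothetical.

```python
# Minimal sketch (illustrative only): a machine-readable technical
# documentation record in the spirit of Article 11. Fields are hypothetical.
import json
from dataclasses import dataclass, asdict, field
from datetime import date


@dataclass
class TechnicalDocumentation:
    system_name: str
    intended_purpose: str
    training_data_description: str
    risk_management_summary: str
    last_updated: str = field(default_factory=lambda: date.today().isoformat())

    def save(self, path: str) -> None:
        with open(path, "w", encoding="utf-8") as f:
            json.dump(asdict(self), f, indent=2)


# Example usage: create the record before placing the system on the market,
# and refresh `last_updated` whenever the system changes.
doc = TechnicalDocumentation(
    system_name="on-device-model-01",
    intended_purpose="On-device text suggestion trained on the user's own data",
    training_data_description="Personal messages stored locally on the device",
    risk_management_summary="See risk register v0.3",
)
doc.save("technical_documentation.json")
```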
Access to data of high quality plays a vital role in providing structure and in ensuring the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become a source of discrimination prohibited by Union law. (Recital 44)
Recital 44 adds further emphasis on the importance of high-quality data to the successful performance of AI systems. It also indicates a primary concern of the AI Act is to prevent discrimination, especially for individuals who contribute personal data to AI models.
High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, and where applicable, validation and testing data sets, including the labels, should be sufficiently relevant, representative, appropriately vetted for errors and as complete as possible in view of the intended purpose of the system. (Recital 44)
For AI models trained on personal data, the Act reinforces the importance of sound data governance: training, validation, and testing datasets, including labels, should be relevant, representative, vetted for errors, and as complete as possible in view of the system’s intended purpose.
Results provided by AI systems are influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing discrimination, in particular for persons belonging to certain vulnerable or ethnic groups, or racialised communities. (Recital 44)
The Act recognizes that inherent biases in AI can lead to discrimination, and that these biases can be especially harmful to certain groups. This is particularly relevant for AI systems deployed in personal devices that collect and process personal data: the Act expects training data and model design to be chosen so as not to entrench or perpetuate existing biases.
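One simple, illustrative way to look for such disparities is to compare a model’s error rate across demographic groups, as in the sketch below. The group labels and the 10-point disparity threshold are assumptions for illustration, not thresholds taken from the Act.

```python
# Minimal sketch (illustrative only): comparing error rates across groups to
# flag the kind of disparity Recital 44 warns about. Thresholds are assumed.
from collections import defaultdict


def error_rate_by_group(records: list[dict]) -> dict[str, float]:
    """records: each with 'group', 'prediction', 'actual'."""
    errors, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        errors[r["group"]] += int(r["prediction"] != r["actual"])
    return {g: errors[g] / totals[g] for g in totals}


# Example usage: flag groups whose error rate exceeds the best group's by >10 points.
records = [
    {"group": "A", "prediction": 1, "actual": 1},
    {"group": "A", "prediction": 0, "actual": 0},
    {"group": "B", "prediction": 1, "actual": 0},
    {"group": "B", "prediction": 1, "actual": 1},
]
rates = error_rate_by_group(records)
best = min(rates.values())
flagged = [g for g, r in rates.items() if r - best > 0.10]
print(rates, "flagged:", flagged)
```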
There were no applicable provisions found in Annex III regarding AI models trained on personal data in personal devices.
In conclusion, AI systems processing personal data can be classified as high-risk under the AI Act in certain cases. The Act provides rules to ensure data privacy and fairness, emphasizes robust data governance, sets out the quality of data required, and requires comprehensive technical documentation. It applies these rules to all providers and deployers of such AI systems in the EU, wherever they are based.
providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country; (Article 2)
The AI Act applies to providers placing AI systems on the market or putting them into service in the Union, regardless of their physical location. This clause means that models on personal devices must comply with the Regulation if they are placed on the market or put into service within the EU’s jurisdiction.
This Regulation applies to providers and deployers of AI systems that have their place of establishment or who are located in a third country, where either Member State law applies by virtue of a public international law or the output produced by the system is intended to be used in the Union; (Article 2)
In addition to the previous point, even when the provider or deployer is located outside the EU, if the model’s output (e.g., predictions based on personal data) is consumed within the EU, the system must comply with the Act. This inclusion further expands the applicability of the Act to AI systems deployed on personal devices, highlighting a broad scope of application wherever the AI system’s output is utilized.
Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect Regulations (EU) 2016/679 and (EU) 2018/1725 and Directives 2002/58/EC and (EU) 2016/680. (Article 2)
In the context of AI models on personal devices trained on personal data, the AI Act does not displace existing rules on the protection of personal data and privacy (notably the GDPR, Regulation (EU) 2016/679). Therefore, any processing of personal data by models on personal devices must also comply with existing privacy and data protection legislation such as the GDPR.
AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. (Article 6)
If a model, including one trained on a personal device, falls under one of the high-risk categories listed in Annex III and poses a significant risk to health, safety, or fundamental rights, it will be deemed a high-risk AI system and regulated under the provisions for high-risk AI systems, including stricter oversight and conformity assessment procedures.
Where providers falling under one or more of the critical areas and use cases referred to in Annex III consider that their AI system does not pose a significant risk as described in paragraph 2, they shall submit a reasoned notification to the national supervisory authority that they are not subject to the requirements of Title III Chapter 2 of this Regulation. (Article 6)
Where the model is used in one of the areas listed in Annex III but the provider considers that it does not pose a significant risk, the provider must submit a reasoned notification to the national supervisory authority explaining why the system should not be subject to the requirements of Title III Chapter 2. This would apply to models on personal devices if they fall under the listed areas and the provider asserts that the model does not pose a significant risk.
AI systems intended to be used to make inferences about personal characteristics of natural persons based on biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; (Annex III - Part 1(aa))
This point suggests that an AI system implemented in personal devices, that uses biometric data and has been trained on personal data, might be classified as high-risk if used for inferring personal characteristics.
In conclusion, the EU AI Act regulates models on personal devices trained with personal data based on where and how they are used, the risks they pose, and their interaction with existing EU data protection and privacy law. This includes possible classification as high-risk AI systems under specific conditions. Depending on the use case and the risks posed, providers may need to notify national supervisory authorities that the high-risk requirements do not apply, or otherwise adhere to stricter safety and transparency obligations.