(cc) affected persons as defined in Article 3(8a) that are located in the Union and whose health, safety or fundamental rights are adversely impacted by the use of an AI system that is placed on the market or put into service within the Union. (Article 2)
This quote designates “affected persons,” defined later in Article 3(8a) as any individual or group that is subject to or otherwise affected by an AI system, as potential end users. It implicitly extends the scope of the Regulation to consumer protection, since affected persons can be considered end users of AI systems.
(8a) “affected person” means any natural person or group of persons who are subject to or otherwise affected by an AI system; (Article 3)
This definition covers end users of AI systems, since it includes any person or group of persons who are subject to or otherwise affected by an AI system.
- This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection and product safety; (Article 2)
This quote underscores that the AI Act does not supersede existing regulations or laws that safeguard consumers’ rights and ensure product safety. It indicates that those rules continue to apply in the context of AI systems.
- Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner (Article 52)
This quote from Article 52 deals with transparency obligations for providers. It implies that end users have a right to know when they are interacting with an AI system. The notification must be given in a timely, clear, and intelligible manner, which shields consumers from deception or from unknowingly using AI systems.
3a. The information referred to in paragraphs 1 to 3 shall be provided to the natural persons at the latest at the time of the first interaction or exposure. It shall be accessible to vulnerable persons, such as persons with disabilities or children (Article 52)
This provision extends the transparency requirements to vulnerable individuals, such as children or persons with disabilities. This makes the AI Act inclusive and empowers all end users, regardless of their circumstances, to appropriately understand and interact with AI systems.
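To make this concrete, here is a minimal sketch of how a provider might surface such a disclosure at the first interaction with a chat-based system. All names (ChatSession, DISCLOSURE) are hypothetical; the Act prescribes the obligation, not any particular implementation.

```python
# Hypothetical sketch: showing an Article 52-style AI disclosure at the
# latest on first interaction. Names and structure are illustrative only.

DISCLOSURE = "You are chatting with an automated AI assistant, not a human."

class ChatSession:
    def __init__(self) -> None:
        self.disclosed = False  # whether the notice has been shown yet

    def respond(self, model_reply: str) -> str:
        # Prepend the notice no later than the first interaction (Art. 52(3a));
        # plain language keeps it intelligible to vulnerable users as well.
        if not self.disclosed:
            self.disclosed = True
            return f"{DISCLOSURE}\n\n{model_reply}"
        return model_reply

session = ChatSession()
print(session.respond("Hello, how can I help?"))  # includes the disclosure
print(session.respond("You're welcome."))         # subsequent replies do not
```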
A general description of the AI system including: … (aa) the nature of data likely or intended to be processed by the system and, in the case of personal data, the categories of natural persons and groups likely or intended to be affected; (Annex IV:1(aa))
The AI Act indeed addresses end users or consumers indirectly in its provisions. This section shows that providers must document the nature of the data their AI systems are likely to process, for instance personal data. Notably, they must also specify the categories of natural persons (that is, end users or consumers) likely to be affected by the system’s data processing. This implies that AI system providers have to take user data seriously and clearly communicate the type of user data they process.
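One way a provider might record this in machine-readable technical documentation is sketched below. Annex IV mandates the content, not a format, so every field name here is an assumption.

```python
# Hypothetical sketch of an Annex IV:1(aa)-style documentation record.
# The Act specifies what must be documented, not how; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class DataProcessingDescription:
    system_name: str
    data_nature: list[str]            # kinds of data the system processes
    processes_personal_data: bool
    affected_person_categories: list[str] = field(default_factory=list)

doc = DataProcessingDescription(
    system_name="CV screening assistant",
    data_nature=["free-text CVs", "application form fields"],
    processes_personal_data=True,
    affected_person_categories=["job applicants", "current employees"],
)
print(doc)
```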
(g) instructions of use for the deployer in accordance with Article 13(2) and (3) as well as 14(4)(e) and, where applicable, installation instructions (Annex IV:1(g))
This provision outlines the need for proper instructions of use for the deployer, including installation instructions where needed. This is critical because it ensures that any interaction between end users (via deployers) and the AI system happens smoothly and effectively, indicating that the EU AI regulation is consumer-centric.
Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used… (Annex IV:3)
This clause requires detailed documentation of the monitoring, functioning, and control of the AI system, particularly its capabilities and limitations in performance. It further specifies that the documentation must state the degree of accuracy for the specific persons or groups of persons (consumers or end users) on which the system is intended to be used. This provision again indirectly highlights the importance of transparency and fairness toward end users.
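As a concrete illustration, per-group accuracy of the kind Annex IV:3 asks providers to document could be computed as in the minimal sketch below; the group labels and data are invented for the example.

```python
# Hypothetical sketch: computing accuracy per group of persons, the kind of
# figure Annex IV:3 expects in the technical documentation. Data is invented.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

records = [
    ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 1), ("group_b", 0, 0),
]
print(accuracy_by_group(records))  # {'group_a': 0.5, 'group_b': 1.0}
```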
In conclusion, while the AI Act does not contain a single dedicated provision addressing end users or consumers, it indirectly emphasizes their protection and rights through provisions aimed at transparency, fairness, and accountability in the operation and deployment of AI systems. The general theme is undoubtedly focused on enhancing the transparency of AI systems, ensuring consumer protection, and making the actors behind AI systems more accountable.
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm; (Article 5)
This part of Article 5 shows that the AI Act seeks to protect consumers from AI systems that could manipulate or deceive them by impairing their ability to make informed decisions. It specifically addresses the use of AI for subliminal or manipulative techniques, which could lead to significant harm.
Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. (Article 52)
The provision from Article 52 stipulates that providers have to ensure proper notification to end-users about their interactions with an AI system. This is important for end user transparency, and it helps ensure that consumers are aware when they are interacting with an AI and not a human.
Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent that such logs are under their control and are required for ensuring and demonstrating compliance with this Regulation, for ex-post audits of any reasonably foreseeable malfunction, incidents or misuses of the system, or for ensuring and monitoring for the proper functioning of the system throughout its lifecycle. (Article 29)
Article 29 mandates that deployers of high-risk AI systems maintain the logs these systems automatically generate. Keeping these logs ensures reliability and accountability, which in turn strengthens consumer protection.
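A minimal sketch of the kind of append-only, structured logging a deployer might use to retain such records is shown below; the field names and file format are assumptions, since Article 29 does not prescribe any particular mechanism.

```python
# Hypothetical sketch: append-only JSON Lines logging that a deployer could
# use to retain records for ex-post audits under Article 29. Fields are
# illustrative; the Act does not mandate a specific log format.
import json
import time

def log_event(path: str, event: str, **details) -> None:
    """Append one timestamped record per line for later audit."""
    record = {"timestamp": time.time(), "event": event, **details}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_event("audit.jsonl", "inference", model_version="1.2.0",
          input_id="req-42", decision="flagged-for-human-review")
```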
In conclusion, the EU AI Act does have provisions that address the rights and protections of end users or consumers. The general theme of these provisions is to protect end users from manipulative or harmful uses of AI systems and to maintain transparency about how these systems operate and when people are interacting with them.
(cc) affected persons as defined in Article 3(8a) that are located in the Union and whose health, safety or fundamental rights are adversely impacted by the use of an AI system that is placed on the market or put into service within the Union. (Article 2)
This part of Article 2 indicates that the provisions of the AI Act are applicable to those impacted by AI systems within the EU (who are defined as “affected persons” in Article 3(8a)). These could include end users or consumers of AI systems whose health, safety, or fundamental rights might be affected negatively by AI systems in the Union.
“affected person” means any natural person or group of persons who are subject to or otherwise affected by an AI system; (Article 3)
This is the definition of the term “affected person” referred to in Article 2. An affected person is any individual or group that is subject to or affected by an AI system. This could potentially include end users or consumers who interact with or are subject to decisions made by AI systems.
High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning. (Article 13)
From Article 13, it’s clear that the AI Act calls for transparency in the operation of high-risk AI systems. This requirement would affect both providers and users of such systems, which includes end users and consumers. This provision seeks to ensure that both these groups reasonably understand the system’s functioning, indicating an intent to protect their interests by promoting transparency.
High-risk AI systems shall be accompanied by intelligible instructions for use in an appropriate digital format or made otherwise available in a durable medium that include concise, correct, clear and to the extent possible complete information that helps operating and maintaining the AI system as well as supporting informed decision-making by users and is reasonably relevant, accessible and comprehensible to users (Article 13)
The latter part of Article 13 mandates that high-risk AI systems be accompanied by clear instructions, informing end users or consumers about operating and maintaining the AI system and supporting informed decision making.
A general description of the AI system including: (aa) the nature of data likely or intended to be processed by the system and, in the case of personal data, the categories of natural persons and groups likely or intended to be affected; (Annex IV.1 (aa))
This provision specifies what the technical documentation must say about the nature of the data the AI system is intended to process. In particular, it directly concerns end users or consumers by requiring providers to identify the categories of natural persons likely to be affected.
Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; (Annex IV.3)
This provision aims to give end users or consumers a clear understanding of the performance capabilities and limitations of the AI system. Because specific degrees of accuracy must be stated for different groups, end users or consumers can be better informed about the system’s functionality.
In summary, the AI Act does seem to have several provisions that directly or indirectly concern end users or consumers of AI systems, especially those categorized as high-risk. The overall theme across these provisions appears to be ensuring protections for these individuals in the context of their interaction with AI systems, with a strong emphasis on transparency and accessibility of information.
AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III - 4(a))
This provision discusses high-risk AI systems used for recruitment and selection processes of individuals. The AI Act acknowledges that such systems may have a significant impact on individuals and their opportunity for employment, as these systems can influence hiring decisions by targeting job advertisements and screening applications.
AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships; (Annex III - 4(b))
The Act also considers the impact of AI systems on work-related decisions and the management of workers. AI systems that materially influence decisions on the initiation, promotion, and termination of employment contracts, that allocate tasks based on personal traits or characteristics, or that monitor and evaluate performance and behaviour fall under this provision, highlighting the Act’s emphasis on the rights and welfare of employees in the face of AI technology.
AI systems intended to be used by or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, including healthcare services and essential services, including but not limited to housing, electricity, heating/cooling and internet, as well as to grant, reduce, revoke, increase or reclaim such benefits and services; (Annex III - 5(a))
In the public sector, the Act recognizes the significant role AI plays in assisting authorities with evaluating individuals’ eligibility for specific benefits or services. This provision acknowledges the increasing use of AI systems in the public sector and their high-risk nature, given their potential to impact individuals’ access to basic services such as healthcare, housing, and utilities.
AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; (Annex III - 5(b))
AI systems are also used in the financial sector to evaluate an individual’s creditworthiness or establish their credit score, except when used for detecting financial fraud. Such decisions can significantly impact individuals’ financial situation and, in turn, their overall wellbeing, underscoring the critical effect these systems can have on individuals and the need for robust regulation.