If the algorithm used in the AI system showed a tendency to discriminate based on race, gender or sexual orientation, would it be classified as a high-risk system within the meaning of the AI Act?

Gist 1

(1c) ‘foundation model’ means an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks; (Article 3)

Foundation models, as defined here, derive their tendencies, including potential biases, from the varied data on which they’re trained. Discrimination based on race, gender, or sexual orientation within an AI system could emerge from this data.
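
To make this concrete, here is a minimal, purely illustrative sketch (synthetic data; the names `group`, `skill`, and `hired` are invented for the example) of how a model fitted to historically skewed outcomes reproduces that skew in its own predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical hiring" data: `group` is a protected attribute,
# and past outcomes were skewed in favour of group 0.
group = rng.integers(0, 2, size=n)            # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, size=n)          # legitimate qualification signal
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 1.0, n) > 0.5).astype(int)

# The model is fitted on the skewed historical labels and simply
# learns to reproduce that skew in its own predictions.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: predicted positive rate = {rate:.2f}")
# The gap between the two rates is bias inherited from the training data.
```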

(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly or indirectly leads, might have led or might lead to any of the following: (ba) a breach of fundamental rights protected under Union law, (Article 3)

A ‘serious incident’ in the context of the AI Act encompasses breaches of fundamental rights protected under Union law. It follows that an AI system discriminating on grounds such as race, gender or sexual orientation, contravening fundamental rights, could be deemed to cause a ‘serious incident’.

  1. The following artificial intelligence practices shall be prohibited: (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons, including characteristics of such person’s or a such group’s known or predicted personality traits or social or economic situation, age, physical or mental ability with the objective or to the effect of materially distorting the behaviour of that person or a person pertaining to that group in a manner that causes or is likely to cause that person or another person significant harm; (Article 5)

Under the Act, placing on the market, putting into service, or using an AI system that exploits the vulnerabilities of a person or a specific group, including known or predicted personality traits or their social or economic situation, is explicitly prohibited where it materially distorts behaviour in a way that causes or is likely to cause significant harm. An AI system with discriminatory tendencies that exploits such vulnerabilities would likely violate this provision.

  1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system… (a) identification, estimation and evaluation of the known and the reasonably foreseeable risks that the high-risk AI system can pose to the health or safety of natural persons, their fundamental rights including equal access and opportunities, democracy and rule of law or the environment when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse… (Article 9)

A risk management system for a high-risk AI system, as outlined in this provision, includes the mandate to identify, estimate and evaluate potential risks to fundamental rights which include equal access and opportunities. Therefore, it’s reasonable to assert that discriminatory tendencies in an AI system would need to be flagged as risks.
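
The Act does not prescribe how such risks are to be identified, but as a rough illustration of what flagging a discriminatory tendency might look like inside a risk management process, here is a minimal sketch; the demographic-parity metric and the 0.1 tolerance are illustrative assumptions, not legal norms:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group_label, favourable: bool) pairs.
    Returns the largest gap in favourable-outcome rates between
    groups (demographic parity difference) plus the per-group rates."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    rates = {g: favourable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative risk-register check: flag the system if the gap exceeds
# a chosen tolerance (0.1 here is an arbitrary example, not a legal norm).
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45
gap, rates = demographic_parity_gap(decisions)
if gap > 0.1:
    print(f"RISK IDENTIFIED: demographic parity gap {gap:.2f} across {rates}")
```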

High-risk AI systems shall be designed and developed following the principle of security by design and by default. In the light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their lifecycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application. (Article 15)

That high-risk AI systems are required to be accurate, robust, and consistent in performance implies that discrimination should be absent from their operation: a system that is systematically less accurate for one demographic group does not ‘perform consistently’ as Article 15 demands. Strictly speaking, Article 15 imposes requirements on systems already classified as high-risk rather than determining that classification, but a discriminatory system in a high-risk category would breach it.
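
One concrete way to test the consistent-performance point is to measure accuracy separately per demographic group. The sketch below is purely illustrative, with hypothetical data, and shows how aggregate accuracy can mask a much worse error rate for one group:

```python
import numpy as np

# Hypothetical ground truth and predictions for two groups of six people.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0])
group  = np.array(["A"] * 6 + ["B"] * 6)

overall = (y_true == y_pred).mean()
print(f"overall accuracy: {overall:.2f}")        # 0.67 overall

for g in ("A", "B"):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"accuracy for group {g}: {acc:.2f}")  # A: 1.00, B: 0.33

# A large per-group gap means the system does not "perform consistently"
# in the sense of Article 15, even if aggregate accuracy looks adequate.
```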

AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III: 4(a))

Recruitment, selection and candidate-evaluation systems are listed as high-risk by virtue of their use case, without any explicit mention of discriminatory tendencies. A system of this kind that discriminates would thus fall within an area the Act already treats as high-risk, precisely because unfair outcomes there are so consequential.

AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships; (Annex III: 4(b))

Here, AI systems that make or materially influence work-related decisions based on personal traits or characteristics are listed as high-risk. A system whose bias leads to unfair determinations in hiring, promotion or termination would sit squarely within this high-risk category.

AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; (Annex III: 1(aa))

Biometric inference systems are classified as high-risk in part because biometric and biometrics-based data can readily be used to discriminate. A system that actually exhibited such discrimination would therefore embody exactly the concern underlying this classification.

A detailed description of the elements of the AI system and of the process for its development, including… metrics used to measure accuracy, robustness and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; (Annex IV, Point 2(g))

While this provision does not itself classify a discriminatory AI system as high-risk, it requires providers of high-risk systems to document the metrics used to measure potentially discriminatory impacts. Discriminatory tendencies would therefore have to be measured and surfaced in a system’s technical documentation.
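
As an example of the kind of metric such documentation could record, the sketch below computes a disparate impact ratio, i.e. the ratio between group selection rates. The ‘four-fifths’ threshold in the comment comes from US employment practice and is used here only as a familiar reference point; the Act does not prescribe any specific metric:

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher group's.
    Values below ~0.8 (the 'four-fifths rule') are commonly treated
    as a warning sign of disparate impact."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    low, high = sorted([rate_a, rate_b])
    return low / high

# Hypothetical entry for technical documentation under Annex IV, point 2(g):
ratio = disparate_impact_ratio(selected_a=120, total_a=400,   # group A: 30%
                               selected_b=45,  total_b=250)   # group B: 18%
print(f"disparate impact ratio: {ratio:.2f}")  # 0.60 -> worth documenting
```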

In summary, the AI Act does not make high-risk status turn directly on discriminatory tendencies, but such tendencies could well contribute to a system being considered high-risk through the associated provisions and considerations above. As the Act indicates, these tendencies can cause significant harm and breach fundamental rights, both of which factor into the risk classification of an AI system.

Gist 2

AI systems specifically referred to under points 1 to 8a stand for critical use cases and are each considered to be high-risk AI systems pursuant to Article 6(2), provided that they fulfill the criteria set out in that Article. (Annex III)

This passage does not refer to bias or discrimination specifically, but it sets out the general mechanism for high-risk classification: Annex III lists the critical use cases, and Article 6(2) supplies the criteria they must fulfil. An AI system exhibiting bias or discrimination that falls within one of those use cases could therefore be classified as high-risk.

AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III, Point 4a)

AI systems intended for recruitment or selection are listed as high-risk. If such a system discriminated based on race, gender, or sexual orientation, it would both fall within this high-risk use case and pose precisely the kind of harm the listing is meant to capture, indicating that it could be classified as a high-risk system.

AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships; (Annex III, Point 4b)

By extension, AI systems that materially influence decisions within work-related contractual relationships are likewise listed as high-risk. A tendency to discriminate in such a system would strengthen the case that it poses the significant risk this classification presupposes.

Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. (Recital 4a)

Recital 4a emphasizes that AI systems and their regulatory framework must adhere to the values, rights, and freedoms enshrined in the EU Treaties, the Charter, and international human rights law. An AI system that discriminates based on race, gender, or sexual orientation would run counter to these principles, which could weigh in favour of its classification as a high-risk system under the AI Act.

As a pre-requisite, artificial intelligence should be a human-centric technology. It should not substitute human autonomy or assume the loss of individual freedom and should primarily serve the needs of the society and the common good. (Recital 4a)

This underscores that AI should primarily serve societal needs without compromising human autonomy, individual freedom, or fundamental rights, including freedom from discrimination. An AI system exhibiting discriminatory tendencies could arguably contravene these principles, supporting its potential classification as a high-risk system.

Safeguards should be provided to ensure the development and use of ethically embedded artificial intelligence that respects Union values and the Charter. (Recital 4a)

Finally, the Recital emphasizes the necessity of safeguards to ensure ethical AI development and use. This suggests that an AI system that discriminates, in violation of Union values and the Charter, would likely be a candidate for the high-risk category under the AI Act.

In conclusion, the AI Act appears to imply that an AI system showing discriminatory tendencies based on race, gender, or sexual orientation could be considered high-risk. However, the provisions cited here do not address bias or discrimination explicitly, so a definitive answer would require a more comprehensive assessment.

Gist 3

High-risk AI systems shall be designed and developed following the principle of security by design and by default. In the light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their lifecycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application. (Article 15)

This suggests that an AI system displaying bias or discrimination could be failing the requirements of accuracy, robustness and consistent performance. Strictly, Article 15 imposes obligations on systems already classified as high-risk rather than classifying them, but a discriminatory system within a high-risk category would breach these design requirements, reinforcing the link the Act draws between discrimination and high risk.

All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded … ‘diversity, non-discrimination and fairness’ means that AI systems shall be developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law. (Article 4a)

Under this provision, it is clear the AI Act seeks to ensure that AI systems are developed and used in a non-discriminatory manner, promoting diversity and fairness. Therefore, if an AI system exhibited bias or discrimination, it would directly contradict these principles outlined in the Act.

The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex III by adding or modifying areas or use-cases of high-risk AI systems where these pose a significant risk of harm to health and safety, or an adverse impact on fundamental rights, to the environment, or to democracy and the rule of law, and that risk is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. (Article 7)

Given this provision, if a category of AI systems were shown to exhibit discriminatory tendencies with an adverse impact on fundamental rights, the Commission could add or modify the corresponding use case in Annex III, bringing such systems into the high-risk category. Discriminatory systems could thus attract high-risk classification even where they are not yet listed.

AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III.4a)

AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships; (Annex III.4b)

These provisions from Annex III designate HR systems and decision-making systems in work-related contractual relationships as high-risk categories. This suggests the Act recognises the particular harm that bias or discrimination can cause in these areas and classifies such systems as high-risk accordingly.

In conclusion, while the EU AI Act doesn’t explicitly cite systems showing bias or discrimination based on protected characteristics as high-risk, it contains provisions that aim to deter the development and use of such systems: requirements for robustness, accuracy, and non-discrimination, and the listing as high-risk of certain use cases that biased or discriminatory systems may fall under. The Commission also has the power to expand the high-risk list in response to risks posed to fundamental rights.

Gist 4

(1b) ‘significant risk’ means a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons; (Article 3)

This definition implies that an AI system that discriminates based on race, gender, or sexual orientation could pose a significant risk, given its ability to affect particular groups of persons. Whether the risk qualifies as significant, however, would depend on the severity and intensity of the discrimination, its probability of occurrence, and the duration of its effects.

(ba) a breach of fundamental rights protected under Union law, (Article 3)

This limb of the definition of ‘serious incident’ would apply where discrimination based on race, gender, or sexual orientation constitutes a breach of fundamental rights under Union law, as discrimination on these grounds is prohibited by Article 21 of the Charter. An AI system causing such discrimination could therefore be causing a ‘serious incident’, which weighs in favour of treating it as high-risk.

AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. (Article 6)

This suggests that if an AI system falls under one of the critical areas or use cases listed in Annex III, and it poses a significant risk to health, safety, or fundamental rights, it would be considered high-risk. Depending on the exact details of the AI system and the type and extent of discrimination, this could apply to an algorithm that discriminates based on race, gender, or sexual orientation.

AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (b) AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behaviour of persons in such relationships; (Annex III: 4. Employment, workers management and access to self-employment)

This provision describes how certain AI systems used in employment, whether for recruitment or for decision-making regarding task allocation, promotion, and termination, are considered high-risk systems. An algorithm showing a discriminatory tendency could be covered by this provision if it is intended to evaluate candidates or to make decisions based on personal traits or characteristics.

AI systems that categorise natural persons by assigning them to specific categories, according to known or inferred sensitive or protected characteristics are particularly intrusive, violate human dignity and hold great risk of discrimination. Such characteristics include gender, gender identity, race, ethnic origin, migration or citizenship status, political orientation, sexual orientation, religion, disability or any other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights of the European Union, as well as under Article 9 of Regulation (EU) 2016/679. Such systems should therefore be prohibited. (Recital 16a)

This passage indicates that an AI system that categorises individuals based on sensitive characteristics, including but not limited to gender, sexual orientation, and race, poses a grave risk of discrimination and violates human dignity. Notably, rather than merely classifying such systems as high-risk, the Recital states that they should be prohibited outright.

In conclusion, if an algorithm used in an AI system shows a tendency to discriminate based on race, gender or sexual orientation, it could well be regarded as high-risk under the AI Act, based on potential breaches of fundamental rights, its coverage by the high-risk use cases listed in Annex III, and the extent of its impact on protected groups of people. At the same time, where such a system categorises people according to sensitive characteristics, the Act goes further: such systems should be prohibited outright rather than merely classified as high-risk.