Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. (Article 6)
According to Article 6(1), the AI system used by the HR company is not classified as high-risk under that paragraph, since it is neither used as a safety component of a product nor itself a product covered by the Union harmonisation legislation listed in Annex II.
In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. (Article 6)
However, Article 6(2) also classifies AI systems as high-risk if they fall under one of the critical areas and use cases listed in Annex III and pose a significant risk to the health, safety, or fundamental rights of natural persons.
- Employment, workers management and access to self-employment:
(a) AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III, point 4a)
Referring to point 4(a) of Annex III, the AI system used by the HR company falls squarely within this description, as it is used for the recruitment and selection of natural persons. Whether it is in fact high-risk then turns on Article 6(2): errors with severe legal or financial consequences for candidates would support the conclusion that the system poses a significant risk to their fundamental rights.
In conclusion, the AI system in question can be deemed high-risk based on its application in a critical area (employment and recruitment) per Annex III, point 4(a) of the EU AI Act. However, this should be confirmed with a comprehensive risk evaluation considering other parts of the AI Act that stress the potential risk to health, safety, or fundamental rights, as instructed by Article 6. There may be potential ambiguities or gray areas that would require further examination or legal advice.
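The two-route test applied above can be made explicit as a boolean decision procedure. This is an illustrative sketch, not legal advice: the parameter names are my own shorthand for the Article 6(1) and 6(2) conditions, and real classification requires a case-by-case legal assessment.

```python
# Illustrative sketch of the two independent routes to "high-risk" status
# under Article 6 of the EU AI Act. Not legal advice; parameter names are
# informal shorthand for the conditions discussed in the text.
def is_high_risk(
    safety_component_or_annex_ii_product: bool,   # Art. 6(1)(a)
    needs_third_party_conformity_assessment: bool,  # Art. 6(1)(b)
    annex_iii_use_case: bool,                     # Art. 6(2), e.g. Annex III 4(a)
    significant_risk_to_rights: bool,             # Art. 6(2) significance test
) -> bool:
    # Route 1 (Article 6(1)): BOTH conditions must be fulfilled.
    route_1 = (safety_component_or_annex_ii_product
               and needs_third_party_conformity_assessment)
    # Route 2 (Article 6(2)): Annex III use case AND significant risk.
    route_2 = annex_iii_use_case and significant_risk_to_rights
    return route_1 or route_2

# The HR recruitment system: not a safety component (route 1 fails), but an
# Annex III point 4(a) use case that, on the facts given, poses a
# significant risk to fundamental rights (route 2 succeeds).
print(is_high_risk(False, False, True, True))  # True
```

Note that the two routes are independent: failing the Article 6(1) test, as the HR system does, says nothing about the Article 6(2) test.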
Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II;
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. (Article 6)
Article 6 of the EU AI Act provides general classification rules for high-risk AI systems. In this context, the AI system used by the HR company doesn’t seem to be a safety component of a product or a product required to undergo a third-party conformity assessment.
In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it shall be considered to be high-risk if it poses a significant risk of harm to the environment. (Article 6)
However, Article 6 also refers to the high-risk AI systems listed in Annex III. The AI system used for recruitment could potentially fall under this classification if it poses a significant risk to the fundamental rights of individuals, especially if errors lead to discrimination.
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex III by adding or modifying areas or use-cases of high-risk AI systems where these pose a significant risk of harm to health and safety, or an adverse impact on fundamental rights, to the environment, or to democracy and the rule of law (Article 7)
Article 7 gives the Commission the power to amend Annex III by adding or modifying areas or use cases that pose a significant risk. The list of high-risk use cases is therefore not static: even a use case not currently covered by Annex III could be brought within the high-risk category in the future.
- Employment, workers management and access to self-employment:
(a) AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III point 4(a))
Annex III point 4(a) of the AI Act explicitly mentions AI systems used for recruitment or selection of individuals as high-risk. Therefore, the AI system used by the HR company falls under this high-risk categorization.
A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. (Article 9)
If the HR company’s AI system is classified as high-risk, Article 9 requires a risk management system to be established, implemented, documented and maintained throughout the system’s entire lifecycle, one of the key obligations that follows from that classification.
In conclusion, under the EU AI Act the recruitment AI system used by the HR company can be considered high-risk: although it does not meet the criteria of Article 6(1), it is explicitly listed in Annex III point 4(a), and errors with serious legal or financial consequences point to a significant risk to fundamental rights. The company ought to ensure it meets all regulatory requirements applicable to high-risk systems.
The AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. (Article 6)
The above conditions suggest that the AI system used by the HR company for hiring would not be classified as high-risk under Article 6(1), as it is neither a safety component of a product nor a product covered by the Union harmonisation legislation listed in Annex II.
In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. (Article 6)
However, according to this section of Article 6, the AI system used by the HR company could potentially be considered high-risk if the system falls under the areas and use cases listed in Annex III and if it poses a significant risk to the fundamental rights of the individuals it is used on.
AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests. (Annex III, point 4(a))
The AI system used by the HR company for recruitment and selection is explicitly listed as high-risk in Annex III. Therefore, the AI system is high-risk if it meets the requirements outlined in Article 6(2) of posing a significant risk to the fundamental rights of natural persons.
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex III by adding or modifying areas or use-cases of high-risk AI systems where these pose a significant risk of harm to health and safety, or an adverse impact on fundamental rights, to the environment, or to democracy and the rule of law, and that risk is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III. (Article 7)
This passage from Article 7 implies that the Commission has the power to amend the areas or use-cases of high-risk AI systems in Annex III, based on their potential to cause significant harm. Consequently, even if the AI system used by the HR company is not currently included, it could potentially be classified as high-risk in the event of a serious legal or financial repercussion.
AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions or materially influence decisions on initiation, promotion and termination and for personalized task allocation based on individual behavior, personal traits or biometric data, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects, livelihoods of these persons and workers’ rights. (Recital 36)
This helps to confirm, in alignment with Annex III, that AI systems used for employment, particularly for recruitment and the selection of candidates, are classified as high-risk due to their potential impact on individuals’ career opportunities.
In conclusion, based on the above analysis, the AI system used by the HR company for recruitment is indeed high-risk if it fulfils the condition outlined in Article 6(2), namely posing a significant risk to the fundamental rights of natural persons. In this scenario, the potential for serious legal or financial consequences is a particularly relevant consideration.
“‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.” (Article 3 (1))
The AI system used by the HR company for analyzing CVs and selecting job candidates falls under this definition as it operates with a level of autonomy to make recommendations or decisions.
“‘significant risk’ means a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons.” (Article 3 (1b))
A potential error in the HR company’s AI system leading to serious legal or financial consequences could be considered as posing a significant risk. However, additional factors need to be assessed such as the severity, intensity, probability, and duration of the effect.
“AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety, or fundamental rights of natural persons.” (Article 6 (2))
This AI system could be deemed high-risk if it falls under a use case established in Annex III and presents a significant risk of harm to the fundamental rights of individuals.
“(a) AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests. (b) AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behavior or personal traits or characteristics, or for monitoring and evaluating performance and behavior of persons in such relationships.” (Annex III, points 4(a) and 4(b))
The AI system used by the HR company can indeed be classified as high-risk under the EU AI Act. It falls under Annex III, points 4(a) and 4(b), as it is intended to be used for the recruitment and selection of natural persons, and its decisions could materially affect work-related contractual relationships.
“As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products and that are listed in one of the areas and use cases in Annex III, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a significant risk of harm to the health and safety or the fundamental rights of persons.” (Recital 32)
According to this recital, the intended purpose of the AI system and the potential risks it presents to the health, safety, or fundamental rights of individuals are crucial elements in its classification as high-risk. The AI system of the HR company in question fulfills these criteria. Therefore, the use of AI by this HR company in the capacity described can indeed be classified as high-risk under the EU AI Act.