We are a startup focused on developing an AI-driven tool for resume analysis. Our approach leverages a combination of technologies such as Optical Character Recognition (OCR), Computer Vision, and Natural Language Processing (NLP). While we have crafted custom models tailored to our specific needs, we also integrate off-the-shelf solutions, notably from platforms like Microsoft Azure. Furthermore, we're in the process of designing a generative AI module. Once complete, it will not only analyze resumes but also suggest modifications and provide commentary. Given this context, could you provide insights into the potential legal risks and obligations we may encounter?

Gist 1

Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II;

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

(Article 6)

Given your system’s use of Optical Character Recognition (OCR), Computer Vision, and Natural Language Processing (NLP), it could fall under the high-risk AI system category, all the more so if it functions as a safety component of a product or if it requires a third-party conformity assessment before being placed on the market. If this is the case, you will face the additional requirements and scrutiny set out in the AI Act.

AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests; (Annex III: 4(a))

The tool you are developing falls under the category of “high-risk” AI systems because it is intended to be used for the recruitment or selection of natural persons. This specifically covers screening candidates based on analysis of their resumes, which is exactly what your tool does. As a provider (and potentially deployer) of a high-risk AI system, you will be subject to strict regulations under the AI Act. These may include transparency requirements, risk assessments, data governance and management obligations, and obligations to ensure the system can be properly overseen by humans.

The following artificial intelligence practices shall be prohibited:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm;

(ba) the placing on the market, putting into service or use of biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics.

(Article 5)

You should take great care not to process candidates’ personal or sensitive information improperly while analyzing resumes. Any technique or strategy that could be perceived as subliminal or manipulative would cross a dangerous legal line. Likewise, while biometric categorization may not apply directly to your use case, the provision underscores the importance of respecting protected attributes and characteristics.
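
As a concrete mitigation, one option is to strip likely protected-attribute mentions from extracted resume text before any downstream scoring. The sketch below is a minimal, keyword-based illustration; the pattern list, placeholder, and function name are all assumptions of this example, not a complete or legally vetted taxonomy:

```python
import re

# Illustrative, minimal keyword-based redaction pass. A production system
# would need audited, multilingual detection of protected attributes, not
# this hand-picked regex list.
PROTECTED_PATTERNS = [
    r"\b(married|single|divorced|widowed)\b",                 # marital status
    r"\b(male|female|non-binary)\b",                          # gender
    r"\bdate of birth\b[ :]*[\d/.-]*",                        # age-related
    r"\b(christian|muslim|jewish|hindu|buddhist|atheist)\b",  # religion
]

def redact_protected_attributes(text: str) -> str:
    """Replace likely protected-attribute mentions before any scoring."""
    for pattern in PROTECTED_PATTERNS:
        text = re.sub(pattern, "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(redact_protected_attributes(
    "Jane Doe, married, date of birth 01/02/1990. Senior engineer."
))
# -> "Jane Doe, [REDACTED], [REDACTED]. Senior engineer."
```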

AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behavior of persons in such relationships; (Annex III: 4(b))

If your AI system’s suggested modifications and commentary can be said to materially influence decisions relating to recruitment and selection, it will fall under this category as well. Where those capabilities affect the job application process or feed into the initiation, promotion, or termination of work-related contractual relationships, the requirements described above will likewise apply under the AI Act.

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use.

(Article 52)

This essentially establishes a right for users to know when they are interacting with an AI system, which applies directly to your use case. Your software should signal clearly and proactively to users that they are interacting with an AI system.
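
A lightweight way to make that notice systematic is to attach the disclosure at the API layer, so every user-facing result carries it. The following sketch uses illustrative names (`AnalysisResponse`, `AI_DISCLOSURE`); nothing here is a schema prescribed by the Act:

```python
from dataclasses import dataclass

AI_DISCLOSURE = (
    "This analysis was produced by an automated AI system, "
    "not by a human reviewer."
)

@dataclass
class AnalysisResponse:
    """Every response surfaced to an end user carries the AI disclosure."""
    findings: list[str]
    disclosure: str = AI_DISCLOSURE

def analyze_resume(text: str) -> AnalysisResponse:
    findings = [f"Resume length: {len(text.split())} words"]  # real analysis omitted
    return AnalysisResponse(findings=findings)

print(analyze_resume("Jane Doe, senior engineer").disclosure)
```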

Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI system which produces legal effects or similarly significantly affects him or her in a way that they consider to adversely impact their health, safety, fundamental rights, socio-economic well-being or any other of the rights deriving from the obligations laid down in this Regulation, shall have the right to request from the deployer clear and meaningful explanation pursuant to Article 13(1). (Article 68c)

This provision further emphasizes the need for transparency when dealing with AI systems. In the context of a resume analysis tool, you should be able to provide, on request, clear explanations or justifications for the insights the tool produces.
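
In practice this means capturing the rationale at decision time so it can be retrieved when an affected person asks. A minimal sketch, assuming an in-memory store and illustrative field names (a real deployment would need durable, access-controlled storage):

```python
import uuid

# Persist the rationale for each analysis so a later explanation request
# can be answered. The dict and field names are assumptions of this sketch.
_rationales: dict[str, dict] = {}

def record_analysis(resume_id: str, score: float, top_factors: list[str]) -> str:
    analysis_id = str(uuid.uuid4())
    _rationales[analysis_id] = {
        "resume_id": resume_id,
        "score": score,
        "top_factors": top_factors,  # e.g. feature attributions
    }
    return analysis_id

def explain(analysis_id: str) -> dict:
    """Return the stored rationale for a past analysis."""
    return _rationales[analysis_id]

aid = record_analysis("r-42", 0.81, ["10 years experience", "Python skills"])
print(explain(aid))
```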

A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licenses, as a service, as well as other distribution channels.

Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”) and providers who specialize a foundation model into a generative AI system, shall in addition:

(a) comply with the transparency obligations outlined in Article 52(1),

(b) train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law

(Article 28b)

This is particularly relevant given that you’re developing a generative AI module. The article outlines obligations for providers of such models, including compliance with the Article 52(1) transparency requirements and safeguards against generating content that breaches Union law.
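
The sketch below combines the two duties quoted above: labelling output as AI-generated and gating it behind a content safeguard. The stub model and substring blocklist are placeholders of this example; a real safeguard would be a proper moderation classifier, not substring matching:

```python
BLOCKLIST = {"placeholder unlawful phrase"}  # stand-in policy terms

def fake_generative_model(resume_text: str) -> str:
    # Stub standing in for your actual generative model call.
    return "Consider quantifying achievements in your experience section."

def passes_safeguard(text: str) -> bool:
    """Very rough stand-in for a real content-safety classifier."""
    return not any(term in text.lower() for term in BLOCKLIST)

def generate_commentary(resume_text: str) -> dict:
    draft = fake_generative_model(resume_text)
    if not passes_safeguard(draft):
        draft = "[Suggestion withheld by content safeguard]"
    return {"text": draft, "ai_generated": True}  # explicit AI label

print(generate_commentary("Jane Doe, senior engineer"))
```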

From a legal risk perspective, you should ensure your AI system is compliant with the AI Act. This may involve conducting an impact assessment to identify possible risks associated with its use and taking appropriate measures to mitigate these risks. Furthermore, you must bear in mind data protection and privacy-related legal obligations under the GDPR, especially with respect to the processing of personal data.

Gist 2

Upon analyzing the new EU AI Act, several areas pertain to your startup’s AI tool for resume analysis. Here are some insights into the potential legal risks and obligations:

“AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests;” (Annex III, Point 4(a))

Your AI system appears to fall under the category described in Annex III, Point 4(a) as a high-risk AI, given its function in screening or filtering applications and evaluating candidates.

“Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:” (Article 6)

According to Article 6, an AI system is high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by Union harmonisation legislation. Additionally, that product must be required to undergo a third-party conformity assessment before being placed on the market or put into service.

“High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 as far as this is technically feasible according to the specific market segment or scope of application.” (Article 10)

In relation to your high-risk AI system’s data handling, it’s required that high-quality datasets are used for training, validation, and testing. An emphasis on appropriate data governance is necessary, focusing on issues like potential bias and data security.
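
One simple, illustrative check on such datasets is to compare positive-label rates across candidate groups. The four-fifths ratio used below is a US-derived heuristic, not an AI Act threshold; it is included only to show the shape of a bias audit:

```python
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Per-group rate of positive labels in a labelled training set."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["label"]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates: dict[str, float]) -> float:
    """Min/max ratio of group selection rates; < 0.8 is a common red flag."""
    return min(rates.values()) / max(rates.values())

data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 1},
]
rates = selection_rates(data)
print(rates, disparate_impact(rates))  # {'A': 0.5, 'B': 1.0} 0.5
```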

“The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date.” (Article 11)

Before deploying your AI system, you need to prepare detailed technical documentation demonstrating compliance with all requirements set out in the Act’s Title III, Chapter 2.
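
If it helps to anchor this in engineering practice, you might track the documentation items as a structured record versioned alongside the system. The field names below loosely paraphrase the kinds of items Annex IV asks for and are assumptions of this sketch; consult the Annex itself for the authoritative list:

```python
from dataclasses import dataclass

@dataclass
class TechnicalDocumentation:
    """Illustrative record of documentation items to keep current.

    Field names loosely paraphrase Annex IV-style items; they are
    assumptions of this sketch, not the Annex's authoritative list.
    """
    system_description: str       # intended purpose, versions, hardware
    development_process: str      # design choices, training methodology
    data_sheets: str              # provenance of training/validation/test data
    risk_management_summary: str  # cross-reference to the Article 9 risk file
    performance_metrics: str      # accuracy, robustness, security test results
    last_reviewed: str            # supports the keep-up-to-date obligation
```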

“High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning.” (Article 13)

Transparency is a pivotal requirement for high-risk AI systems. They should be designed to ensure comprehensibility of their functioning to both users and providers.

“Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated…” (Article 52, Paragraph 3)

If your planned generative AI module creates content that alters applicants’ information or generates feedback, you must disclose in a clear and visible manner that the content has been artificially generated or manipulated.

“The following artificial intelligence practices shall be prohibited:” (Article 5)

The AI Act prohibits certain AI practices, especially those employing deceptive or manipulative techniques that could impair a person’s decision-making. If the off-the-shelf solutions you integrate employ such practices, using them would violate the AI Act.

“Training, validation and testing data sets shall be subject to data governance appropriate for the context of use as well as the intended purpose of the AI system.” (Article 10.2)

If your system uses computer vision to analyze photos, be cautious about the privacy implications of processing biometric data.
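
One defensive option is to remove photo regions before the page image ever reaches your OCR/NLP pipeline, so biometric data is never processed downstream. A rough sketch using OpenCV's bundled Haar cascade (a crude detector, chosen purely for illustration):

```python
import cv2  # pip install opencv-python

# Blank out detected face regions before the page image enters the pipeline.
# Haar cascades are crude; a production system would need a better detector
# and its own privacy review.
_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def strip_faces(image_path: str, out_path: str) -> int:
    """Write a copy of the image with face regions whited out."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        img[y:y + h, x:x + w] = 255  # white rectangle over the photo
    cv2.imwrite(out_path, img)
    return len(faces)
```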

“For high-risk AI systems listed in point 1 of Annex III…the provider shall opt for one of the following procedures: (a) the conformity assessment procedure based on internal control referred to in Annex VI; or (b) the conformity assessment procedure based on assessment of the quality management system…” (Article 43)

Remember that your system must undergo one of these conformity assessment procedures; select whichever is most suitable for your startup.

“Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system…” (Article 61)

Finally, it’s mandatory to establish a post-market monitoring system to evaluate your system’s performance throughout its lifetime and to ensure its continuous conformity with legal requirements.
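
A starting point is structured event logging that a monitoring process can aggregate periodically. The file name, event types, and fields below are illustrative assumptions of this sketch, not a format the Act prescribes:

```python
import json
import logging
import time

# Minimal JSON-lines event log a post-market monitoring process could review.
logging.basicConfig(filename="postmarket_monitoring.jsonl",
                    level=logging.INFO, format="%(message)s")
logger = logging.getLogger("postmarket")

def log_event(event_type: str, **details) -> None:
    """Append one structured monitoring event (model version, outcome, etc.)."""
    logger.info(json.dumps({"ts": time.time(), "event": event_type, **details}))

# Record each analysis plus any user-reported problem so drift and
# incidents can be reviewed over the system's lifetime.
log_event("analysis", model_version="1.4.2", score=0.72, latency_ms=135)
log_event("user_complaint", category="perceived_bias", analysis_id="abc-123")
```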

This is a preliminary analysis and certain aspects might require more exploration, contingent on more details about the AI system. Consultation with an attorney is always recommended for comprehensive legal advice. Changes to your system or processes might be necessary for complete legal compliance, and I would be happy to help further in evaluating those modifications.

Gist 3

“‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments;” (Article 3)

Given the definition under Article 3, your AI-driven resume analysis tool fits the concept of an AI system under the EU AI Act. The tool operates with a degree of autonomy and generates outputs such as resume analyses and suggested commentary, which undoubtedly influence a virtual environment.

“Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use.” (Article 52)

Under Article 52, any AI system intended to interact with natural persons must clearly inform the person that they are interacting with an AI system. Therefore, the users of your tool, whether employers or job seekers, would need to be informed that they’re interacting with an AI system.

“AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner” (Article 52)

Article 52 emphasizes the requirement of timely, clear and intelligible communication to natural individuals regarding their interaction with an AI system. This transparency aspect could affect your startup if users aren’t clearly informed that they’re interacting with an AI-driven tool.

“AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons.” (Article 6)

According to Article 6, if your AI system falls under the critical areas and use cases referred to in Annex III and poses a significant risk to the health, safety or fundamental rights of individuals, it is considered high-risk. However, since Annex III lists use cases rather than precise thresholds for “significant risk”, it may be difficult for your startup to discern on its own whether it qualifies as high-risk.

“The following artificial intelligence practices shall be prohibited (…) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques” (Article 5)

Under Article 5, the use of AI systems that deploy subliminal techniques or purposefully manipulative or deceptive techniques is prohibited. This is noteworthy for your company as your AI tool should not in any form manipulate or deceive users or potential job candidates during the resume analysis and recommendation process.

“AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests;” (Annex III: 4a)

The AI Act classifies the functionality of your AI tool that screens resumes and evaluates candidates as a high-risk AI system. This will likely have several implications for your startup. For instance, you may need a conformity assessment before the product can be placed on the market (Article 43). You should establish an adequate risk management system (Article 9), keep records of the AI system’s activity (Article 12), and use high-quality, relevant datasets for training your model (Article 10). Your models should be transparent and provide meaningful information about their capabilities and limitations (Article 52), avoid discrimination (Article 10(2)), and allow for effective human oversight of their operation (Article 14).
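
On the human-oversight point specifically, one common design is to keep the system advisory: never auto-reject, and route low-confidence or flagged cases to a human reviewer. A minimal sketch with assumed thresholds and names:

```python
from dataclasses import dataclass

@dataclass
class Analysis:
    candidate_id: str
    score: float
    flagged: bool  # e.g. low model confidence or unusual input

def route(analysis: Analysis, review_queue: list) -> str:
    """Keep the system advisory: no auto-rejection without a human."""
    if analysis.flagged or analysis.score < 0.5:  # assumed threshold
        review_queue.append(analysis)
        return "human_review"
    return "auto_shortlist"  # still subject to recruiter confirmation

queue: list = []
print(route(Analysis("c-1", 0.35, flagged=False), queue))  # -> human_review
```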

”AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behavior or personal traits or characteristics, or for monitoring and evaluating performance and behavior of persons in such relationships;” (Annex III: 4b)

The generative AI module you’re developing might fall under this category if, for example, it provides suggestions or modifications relevant to job promotion or to changing work-related contractual terms. The legal responsibilities outlined above for point 4(a) would apply here as well.

In conclusion, some potential legal risks and obligations your startup may encounter include ensuring transparency regarding user interaction with an AI system (Article 52) and the prohibition on deploying manipulative techniques (Article 5). According to Annex III, your tool could be classified as high-risk due to its recruitment and selection functionalities. If classified as a high-risk AI system, additional requirements regarding risk management, record keeping, data quality, and other aspects need to be considered (Articles 9, 10, 12, 14, 43, and 52). Also keep in mind the General Data Protection Regulation (GDPR), which governs the processing of personal data, including processing by AI.

Gist 4

> ‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments;
> *(Article 3)*

Your company's AI-based resume analysis tool falls under this definition. By implementing features like Optical Character Recognition (OCR), Computer Vision, and Natural Language Processing (NLP), your system pursues explicit objectives, generates outputs (resume analyses), and influences a virtual environment.

>  The following artificial intelligence practices shall be prohibited: the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques ...
> *(Article 5)*

Your proposed generative module, which provides modification suggestions and commentary on resumes, should be carefully designed to ensure it doesn’t use manipulative or deceptive methods. It could become problematic under Article 5 if its suggestions were to distort decisions through such techniques. Transparency about these recommendations being generated by an AI system should be emphasized.

> Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use.
> *(Article 52)*

Your tool needs to inform users clearly and promptly that they are interacting with an AI system. You can ensure this by adding a clear disclaimer or notice within the user interface of your tool. This will comply with Article 52’s transparency obligations.

> the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II...
> *(Article 6)*

Should your system be classified as high-risk under the Article 6 criteria, it would be subject to additional regulations and conformity assessments. Based on the available information, this product-safety route likely does not apply to a resume tool; the more relevant path to a high-risk classification is Annex III, discussed below.

> 4. Employment, workers management and access to self-employment:
>
> (a)  AI systems intended to be used for recruitment or selection of natural persons, notably for placing targeted job advertisements, screening or filtering applications, evaluating candidates in the course of interviews or tests;
>
> (b)  AI systems intended to be used to make or materially influence decisions affecting the initiation, promotion and termination of work-related contractual relationships, task allocation based on individual behaviour or personal traits or characteristics, or for monitoring and evaluating performance and behavior of persons in such relationships;
> *(Annex III - point 4)*

Given that your AI tool's main functionality is analyzing resumes, it's important to note that the European Union treats these types of AI applications as high-risk. In particular, your tool likely falls under point 4(a), since it is aimed at recruitment and applicant screening. The Act requires such AI systems to follow strict rules designed to avoid unfair bias and ensure transparency in the recruitment process.

Your generative AI module, which can suggest modifications and provide commentary, might also come under scrutiny under point 4(b). If your system makes or materially influences decisions that could affect the initiation, promotion or termination of work-related relationships, it may likewise be treated as high-risk.

>  3. Education and vocational training:
> 
> (a)  AI systems intended to be used for the purpose of determining access or materially influencing decisions on admission or assigning natural persons to educational and vocational training institutions;
> *(Annex III - point 3)*

Another point to consider is whether your system will be used in relation to educational or vocational training institutions. Under point 3(a), if your system plays a role in determining access or influencing decisions on admission to these institutions, it would be classified as a high-risk AI system.

Please be aware that the above entails substantial requirements, such as performing a conformity assessment, maintaining a quality management system, and carrying out post-market monitoring duties, among other responsibilities.

However, I would need more information about the specific Articles in Title III, Chapters 2 and 3 to provide a comprehensive legal analysis. In particular, those chapters set out detailed obligations, such as data and record keeping, providing information to users, human oversight, robustness, accuracy and security of AI systems, and conformity assessments, that may directly apply to your startup. I recommend seeking legal advice to navigate these specific provisions and stay compliant with the EU AI Act.