I have two AI products in mind that I’m considering building:
An AI-based Q&A tool where users ask legal questions and the AI analyzes the regulation’s text and surrounding documents (e.g., court rulings) to answer the question. This will be marketed as legal research, not as legal advice.
An AI-based tool that aids doctors in creating medical documentation. It would summarize voice conversations and fill out internal documents based on rough notes provided by a doctor.
Would either of the two fall into the high-risk category under the AI Act?
Understanding the Legal Question
The legal question concerns the classification of two AI products under the EU AI Act. The user wants to understand whether either product would be considered high-risk according to the Act’s definitions and criteria. The first product is a legal research tool that answers questions about regulations and related legal texts; the second assists doctors with medical documentation. Determining whether these AI systems are high-risk is crucial, because high-risk classification triggers additional regulatory obligations.
Ambiguities in the Legal Question
The ambiguities in the question include:
The specific functionalities and capabilities of the AI-based Q&A system for legal research, such as the extent to which it processes sensitive or personal data.
The manner and the extent to which the medical documentation tool processes health data or other sensitive information.
The jurisdictions and sectors in which the AI tools will be deployed and marketed, as different regions may impose additional requirements or interpret “high-risk” differently.
The potential consequences or negative impacts of the systems’ outputs, and whether these could significantly affect individuals’ rights and freedoms.
Assumptions for the Legal Analysis and the Plan for the Junior Lawyer
Note: the “junior lawyer” is what we call a Hotseat subsystem that completes helper tasks.
To proceed with the analysis, we will make the following assumptions:
Both AI products are designed for use within the European Union and thus fall under the regulatory scope of the EU AI Act.
The AI-based Q&A system for legal research processes legal texts and possibly personal information, but is not intended to provide legal advice.
The AI tool aiding doctors is designed to process health-related data and create or fill out medical documentation that could include sensitive patient information.
Both AI systems are intended to be placed on the market, not used exclusively internally by an individual or a single organization.
Plan for the Junior Lawyer:
1. Introduction to EU AI Act and High-Risk AI Systems:
Begin by reading Recitals 42 and 43 to understand the general scope and intent of the regulation regarding health, safety, and fundamental rights.
2. Examine Fundamental Concepts and Definitions:
Review Article 3 definitions (1) for ‘AI system’, (2) for ‘provider’, (7) for ‘distributor’, and (14) for ‘safety component of a product or system’.
3. Define High-Risk AI Systems:
Analyze Article 6 to determine the criteria that an AI system must meet to be considered high-risk.
4. Study Annex III for Specific High-Risk AI Systems:
Scrutinize Annex III, points 1(a) and (aa) regarding biometric identification and categorization systems, as well as point 5(a) concerning the eligibility of natural persons for public services, which may be relevant for AI systems processing legal or health data.
5. Determine Relevance of High-Risk Criteria to Legal Q&A AI System:
Apply insights from step 4 to determine if the legal Q&A AI system could fall under the high-risk category based on its functionalities and potential impacts on fundamental rights related to privacy and data protection (refer to Article 10 and Recital 76).
6. Assess Medical Documentation AI Tool Under High-Risk Category:
Considering that the medical documentation tool processes health data, compare its functionalities to Annex III, point 7(a) on the management of critical public services (healthcare), to decide if it should be classified as high-risk (Article 10 and Recital 76 may apply).
7. Explore Specific Risk Criteria and Recitals:
Consider Recital 85 to assess whether non-compliance risks, such as discriminatory impacts or harm to health and safety, may elevate an AI system to high-risk.
8. Documentation and Compliance Considerations:
Note the requirements for documentation and compliance monitoring for high-risk AI systems in Article 11.
9. Compile the Legal Analysis:
Use the findings to create a legal analysis report detailing whether the described AI systems would be high-risk under the EU AI Act, addressing the implications for the user’s obligations under the Act.
Definitions and Terms from the EU AI Act:
AI System: A machine-based system that operates with varying levels of autonomy and generates outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments.
Provider: The natural or legal person that develops an AI system, or has one developed, in order to place it on the market or put it into service.
Distributor: A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the market.
Safety Component: A component of a product or system that fulfils a safety function, the failure or malfunctioning of which endangers the health and safety of persons or property.
Question Clarity Rating
Somewhat clear
Clarity Rating Explanation
The original question states a clear intent and describes two specific products the user is considering building. However, it lacks detail about the use cases, the nature of user interactions with these systems, and the type of data processing involved, all of which are pertinent to establishing whether the products fall under the high-risk category of the EU AI Act.