High-Risk Categorization of AI Products: Legal Q&A and Medical Documentation Tool

Question

I have two AI products in mind that I'm considering building:

  1. An AI-based Q&A tool where users ask legal questions and the AI analyzes the text of regulations and surrounding documents (e.g. court rulings) to answer the question. This will be marketed as legal research, not as legal advice.
  2. An AI-based tool that helps doctors create medical documentation. It would summarize voice conversations and fill out internal documents based on rough notes provided by the doctor.

Would either of these fall into the high-risk category according to the AI Act?

Executive Summary

When considering the development of AI products, it is paramount to determine whether they fall into the high-risk category under the EU AI Act. The analysis below points to the critical components of that determination:

  • Risk Assessment and Purpose: Neither AI system appears to qualify as high-risk under the intended-purpose and reasonably-foreseeable-misuse criteria set by the EU AI Act.
  • Lack of Third-Party Assessment Requirement: Neither system requires third-party conformity assessments, a key indicator of high-risk classification under Article 6(1).
  • Critical Areas and Use Cases: As currently described, neither system falls under the critical areas and use cases stipulated in Annex III, assuming compliance with applicable data governance standards.
  • Ongoing Vigilance Advised: Continuous reassessment is suggested, as future updates to the AI Act could change the risk classification of these AI products.

Entrepreneurs should stay alert to regulatory developments, ensuring their AI offerings align with the evolving legal landscape of the EU AI Act.

Assumptions

To proceed with the analysis, we will make the following assumptions:

  1. Both AI products are designed for use within the European Union and thus fall under the regulatory scope of the EU AI Act.
  2. The AI-based Q&A system for legal research processes data that could potentially relate to legal and possibly personal information but is not intended to provide legal advice.
  3. The AI tool aiding doctors is designed to process health-related data and create or fill out medical documentation that could include sensitive patient information.
  4. Both AI systems are intended to be introduced to the market and not exclusively for internal or personal use by an individual or a single organization.

Legal Trace

Understanding the High-Risk AI System Criteria

To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for deployers and affected persons, certain mandatory requirements should apply, taking into account the intended purpose, the reasonably foreseeable misuse of the system and according to the risk management system to be established by the provider. These requirements should be objective-driven, fit for purpose, reasonable and effective, without adding undue regulatory burdens or costs on operators. Recital 42

This implies that classifying an AI system as high-risk hinges on a thorough risk assessment that considers both the system's intended purpose and its reasonably foreseeable misuse.

Requirements should apply to high-risk AI systems as regards the quality and relevance of data sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as well as the environment, democracy and rule of law, as applicable in the light of the intended purpose or reasonably foreseeable misuse of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade. Recital 43

The AI Act outlines essential criteria for systems classified as high-risk, concerning not only their technical aspects but also their broader societal implications.

Assessing the AI-based Q&A and Medical Documentation Tool

Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation law listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. Article 6(1)

Neither the AI-based Q&A system nor the medical documentation tool appears to be a safety component of a product covered by the Annex II harmonisation legislation, nor to require a third-party conformity assessment under that legislation. Because Article 6(1) demands that both conditions be fulfilled, neither system is high-risk on that basis.
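To make the conjunctive structure of Article 6(1) explicit, here is a minimal sketch of the test as a boolean check. The class and field names are our own illustrative inventions, not anything the Act defines; the point is only that failing either condition takes a system outside Article 6(1) entirely.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative structure; the Act itself defines no such object."""
    is_annex_ii_safety_component_or_product: bool  # condition (a) of Article 6(1)
    requires_third_party_assessment: bool          # condition (b) of Article 6(1)

def high_risk_under_article_6_1(system: AISystemProfile) -> bool:
    # Article 6(1) is conjunctive: BOTH conditions must hold.
    return (system.is_annex_ii_safety_component_or_product
            and system.requires_third_party_assessment)

# The two products as described in the question:
legal_qa = AISystemProfile(False, False)
medical_docs = AISystemProfile(False, False)
print(high_risk_under_article_6_1(legal_qa))      # False -> not high-risk via 6(1)
print(high_risk_under_article_6_1(medical_docs))  # False -> not high-risk via 6(1)
```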

In addition to the high-risk AI systems referred to in paragraph 1, AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Article 6(2)

Given the described functionalities, there is no direct evidence that either system falls into the critical areas and use cases outlined in Annex III or poses a significant risk, provided appropriate data governance and processing standards are maintained to mitigate such risks.
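The Article 6(2) route can be sketched the same way. The area strings below are paraphrases of Annex III's headline categories, not quotations, and the function illustrates the two-part test rather than serving as a compliance tool.

```python
# Paraphrased, non-exhaustive headline areas from Annex III; consult the
# Annex itself for the authoritative wording and its detailed sub-points.
ANNEX_III_AREAS = {
    "biometric identification and categorisation",
    "critical infrastructure",
    "education and vocational training",
    "employment and workers management",
    "access to essential private and public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
}

def high_risk_under_article_6_2(intended_areas: set[str],
                                poses_significant_risk: bool) -> bool:
    # Article 6(2) requires BOTH an Annex III area match AND a significant
    # risk of harm to health, safety or fundamental rights.
    return bool(intended_areas & ANNEX_III_AREAS) and poses_significant_risk

# As described, neither product's intended purpose maps onto an Annex III area:
print(high_risk_under_article_6_2(set(), poses_significant_risk=False))  # False
```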

Those AI systems that are intended to be used as safety components of products, or are themselves products, covered by Union harmonisation legislation listed in Annex II and that are required to undergo a third-party conformity assessment under that legislation shall be considered high-risk AI systems. Article 6(1)

This passage restates the safety-component criterion of Article 6(1) rather than any Annex III use case, and as discussed above it does not appear to apply to the legal Q&A or the medical documentation tool. Nonetheless, the broader impacts and risk potential of both products have to be considered within the full context of the Act.

Conclusion for the AI Products

Based on the analysis of the relevant articles, recitals, and annexes of the EU AI Act, neither the AI-based Q&A system nor the medical documentation tool clearly meets the criteria for classification as a high-risk AI system under the current framework. The legal Q&A system does not perform biometric identification or categorization, nor does it assess eligibility for public services, which are typical high-risk functions outlined in the Act. Likewise, the medical documentation tool is not intended for use by public authorities to manage critical public services such as healthcare, which would place it clearly within the high-risk category.

However, given the dynamic nature of the field and the potential for updates to regulatory requirements, continuous monitoring and reassessment against the Act's evolving criteria are advisable. Providers should particularly watch for updates to the list of high-risk AI systems in Annex III, the data governance requirements in Article 10, and the non-compliance risks noted in Recital 85. Additionally, the documentation and compliance-monitoring obligations set out in Article 11 would significantly expand the provider's responsibilities should future developments lead to a different classification of these AI systems.