High-Risk Categorization of AI Products: Legal Q&A and Medical Documentation Tool

Question

I have two AI products in mind that I'm considering building:

  1. An AI-based Q&A tool where users ask legal questions and the AI analyzes regulatory texts and surrounding documents (e.g., court rulings) to answer them. It will be marketed as legal research, not as legal advice.
  2. An AI-based tool that helps doctors create medical documentation. It would summarize voice conversations and fill out internal documents based on rough notes provided by the doctor.

Would either of the two fall into the high-risk category under the EU AI Act?

Executive Summary

When developing AI products for the EU market, it is essential to determine whether they fall into the high-risk category under the EU AI Act. The analysis below turns on the following points:

  • Risk Assessment and Purpose: Neither AI system, as described, qualifies as high-risk under the intended-purpose criteria set by the EU AI Act.
  • No Third-Party Assessment Requirement: Neither system is a safety component of a product subject to third-party conformity assessment, the trigger for high-risk classification under Article 6(1).
  • Critical Areas and Use Cases: The systems' current functionalities do not fall under the high-risk use cases listed in Annex III, assuming compliance with data protection rules (a simplified sketch of the resulting two-route test follows this list).
  • Ongoing Vigilance Advised: Continuous reassessment is advised, as amendments to the AI Act or its annexes could change the risk classification of these products.
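
As a rough illustration only, here is a minimal Python sketch of that two-route test (the Article 6(1) safety-component route and the Annex III route). The dataclass fields, area labels, and the AISystem/is_high_risk names are simplifying assumptions made for this sketch, not the Act's wording or an exhaustive encoding:

    from dataclasses import dataclass, field

    # Paraphrased Annex III areas; labels are illustrative, not legal text.
    ANNEX_III_AREAS = {
        "biometrics",
        "critical_infrastructure",
        "education",
        "employment",
        "essential_services",
        "law_enforcement",
        "migration_asylum_border",
        "administration_of_justice",
    }

    @dataclass
    class AISystem:
        name: str
        # Article 6(1)(a): safety component of a product covered by Annex I legislation.
        is_safety_component: bool = False
        # Article 6(1)(b): that product requires third-party conformity assessment.
        requires_third_party_assessment: bool = False
        # Article 6(2): intended-purpose areas that map onto Annex III.
        annex_iii_areas: set[str] = field(default_factory=set)

    def is_high_risk(system: AISystem) -> bool:
        """True if the system is high-risk under this simplified model."""
        # Route 1: Article 6(1) - safety component subject to third-party assessment.
        if system.is_safety_component and system.requires_third_party_assessment:
            return True
        # Route 2: Article 6(2) - intended purpose falls under an Annex III area.
        return bool(system.annex_iii_areas & ANNEX_III_AREAS)

    # Both products, as described in the question, trigger neither route.
    for s in (AISystem("Legal research Q&A"), AISystem("Medical documentation assistant")):
        print(f"{s.name}: high-risk = {is_high_risk(s)}")

Under these assumptions, both products come out as not high-risk; flipping either route's inputs (for example, marketing the legal Q&A for use by a judicial authority) would change the result.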

Entrepreneurs should monitor regulatory developments to keep their AI offerings aligned with the evolving requirements of the EU AI Act.

Assumptions

To proceed with the analysis, we will make the following assumptions:

  1. Both AI products are designed for use within the European Union and thus fall under the regulatory scope of the EU AI Act.
  2. The AI-based Q&A system for legal research processes legal texts and possibly personal data, but is not intended to provide legal advice.
  3. The AI tool aiding doctors is designed to process health-related data and create or fill out medical documentation that could include sensitive patient information.
  4. Both AI systems are intended to be placed on the market, not used exclusively internally by a single organization or personally by an individual.