The following artificial intelligence practices shall be prohibited: (a) the placing on the market, putting into service, or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques; (ba) the placing on the market, putting into service or use of biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics. (Article 5)
This prohibition means that your AI tool may not deploy subliminal or purposefully manipulative techniques. Nor may it categorize individuals according to sensitive or protected attributes or characteristics, or inferences of those attributes, drawn from biometric data. Ensuring your AI system complies with this provision is essential so that it does not infringe patients’ rights.
Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled: (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex II; (b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II. (Article 6)
If your AI tool is intended to serve as a safety component of a healthcare product, or is itself such a product covered by the Union harmonisation legislation in Annex II, it would be considered a high-risk AI system. It may then need to undergo a third-party conformity assessment before being placed on the market or put into service.
Training, validation and testing data sets shall be subject to data governance appropriate for the context of use as well as the intended purpose of the AI system. Those measures shall concern in particular: (a) the relevant design choices; (b) data collection processes; (c) data preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and aggregation; (f) examination in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law. (Article 10)
This underlines the importance of appropriate data governance measures for the training, validation, and testing datasets used in developing your AI system. The datasets must be carefully prepared, examined for possible biases, and appropriate to both the context of use and the system’s intended purpose.
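To make the bias-examination duty concrete, here is a minimal sketch of a subgroup audit in Python. The field names (`sex`, `label`) and the disparity threshold are illustrative assumptions for this sketch, not terms taken from the Act.

```python
from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key="label"):
    """Compute the share of positive labels within each subgroup."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for rec in records:
        stats = counts[rec[group_key]]
        stats[0] += rec[label_key]
        stats[1] += 1
    return {group: pos / total for group, (pos, total) in counts.items()}

def flag_disparities(rates, threshold=0.1):
    """Return subgroup pairs whose positive-label rates differ beyond a threshold."""
    groups = sorted(rates)
    return [(a, b, abs(rates[a] - rates[b]))
            for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > threshold]

# Illustrative training records; the field names are assumptions, not real data.
training_set = [
    {"sex": "F", "label": 1}, {"sex": "F", "label": 0},
    {"sex": "M", "label": 1}, {"sex": "M", "label": 1},
]
rates = positive_rate_by_group(training_set, "sex")
print(rates)                    # {'F': 0.5, 'M': 1.0}
print(flag_disparities(rates))  # [('F', 'M', 0.5)] -> investigate before training
```

A disparity flagged this way is only a starting point for investigation; whether it amounts to prohibited discrimination is a legal question, not a statistical one.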
High-risk AI systems shall comply with the requirements established in this Chapter. The intended purpose of the high-risk AI system, the reasonably foreseeable misuses and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements. (Article 8)
This emphasizes that the AI tool you are developing must comply with the requirements defined in this chapter if it is classified as a high-risk AI system. Compliance must take into account your tool’s intended purpose, reasonably foreseeable misuses, and an appropriate risk management system.
A risk management system shall be established, implemented, documented, and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. The risk management system can be integrated into, or a part of, already existing risk management procedures relating to the relevant Union sectoral law insofar as it fulfills the requirements of this article. (Article 9)
Given the high risks associated with healthcare data, it is crucial to establish, implement, document, and maintain a comprehensive risk management system for your AI tool, covering every stage of the AI system’s lifecycle.
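The Act does not prescribe a format for this documentation, but one lightweight way to keep it current is a machine-readable risk register maintained alongside the code. The structure below is purely an illustrative sketch; the field names and stages are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One documented risk, tracked across the AI system's lifecycle."""
    risk_id: str
    description: str
    lifecycle_stage: str        # e.g. "design", "training", "post-market"
    severity: str               # e.g. "low", "medium", "high"
    mitigation: str
    status: str = "open"
    history: list = field(default_factory=list)

    def update(self, note: str) -> None:
        """Append a dated note so every change stays documented."""
        self.history.append((date.today().isoformat(), note))

register = [
    RiskEntry(
        risk_id="R-001",
        description="Model under-predicts disease risk for under-represented groups",
        lifecycle_stage="training",
        severity="high",
        mitigation="Subgroup bias audit before each release",
    ),
]
register[0].update("Bias audit added to the release checklist")
```

Keeping the register in version control gives you the documented, lifecycle-long trail that Article 9 asks for.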
Please seek further legal consultation to stay abreast of the full scope of requirements under the AI Act.
AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; (Annex III, point 1(aa))
This indicates that an AI system that uses genetic and/or medical data to predict a person’s risk of developing certain diseases may fall under the high-risk AI systems defined in the AI Act. Specifically, your system’s ability to make inferences about a person’s health status from biometric data brings it within the scope of this provision. Although the text does not explicitly mention genetic data, such data could well be treated as biometric or biometrics-based data given its nature and implications.
AI systems intended to be used to evaluate the eligibility of natural persons for public assistance benefits and services, including healthcare services and essential services, including but not limited to housing, electricity, heating/cooling, and internet, as well as to grant, reduce, revoke, increase or reclaim such benefits and services; (Annex III, point 5(a))
The extent to which this point applies depends on the precise usage of your AI system. If the system’s predictive analysis is being used to decide on an individual’s eligibility for healthcare services or related benefits, then it would be classified as high-risk according to this provision.
AI systems intended to be used for making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance; (Annex III, 5(ba))
This excerpt may apply if your AI tool is used by insurance companies or similar entities to determine, or materially influence, decisions about health or life insurance eligibility. For example, if its disease-risk predictions are used to set health insurance premiums or coverage, your AI system would be high-risk under the AI Act.
The exact regulatory and legal implications will depend largely on how the AI Act is interpreted and implemented, but you should prepare for rigorous conformity assessments (including documentation and transparency requirements), adopt a risk management system, and ensure post-market monitoring, among the other obligations specified in the AI Act. This analysis is based on the draft text, and more specific or restrictive rules might apply. Always consider seeking legal advice for a definitive recommendation.
The following artificial intelligence practices shall be prohibited: (ba) the placing on the market, putting into service or use of biometric categorisation systems that categorise natural persons according to sensitive or protected attributes or characteristics or based on the inference of those attributes or characteristics. This prohibition shall not apply to AI systems intended to be used for approved therapeutic purposes on the basis of specific informed consent of the individuals that are exposed to them or, where applicable, of their legal guardian. (Article 5)
This article of the EU AI Act imposes limitations on the use of AI systems for categorizing individuals based on sensitive attributes, which could include genetic information. Therefore, if your intention is to use this AI system for diagnosing diseases, you need to ensure that it is used for approved therapeutic purposes and that you have obtained the specific, informed consent of the individuals whose data you are using (or, where applicable, of their legal guardians).
Training, validation and testing data sets shall be subject to data governance appropriate for the context of use as well as the intended purpose of the AI system. Those measures shall concern in particular: (f) examination in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations (‘feedback loops’) and appropriate measures to detect, prevent and mitigate possible biases; (Article 10)
It is critical that you implement robust data governance measures when constructing and refining your AI system. In particular, you must examine your training, validation, and testing data for potential biases, including biases that could affect health and safety outcomes, negatively impact fundamental rights, or lead to prohibited discrimination. Because the provision singles out feedback loops, where the system’s outputs influence the inputs to its future operations, ongoing monitoring matters as much as a one-off audit.
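Because feedback loops develop over time, one practical approach is to monitor how the model’s output distribution drifts across retraining rounds. The sketch below illustrates that idea only; the metric and the threshold are assumptions, not requirements taken from Article 10.

```python
def detect_feedback_drift(rates_per_round, max_drift=0.05):
    """Flag retraining rounds where the positive-prediction rate jumps,
    a possible sign that earlier outputs are skewing new training data."""
    alerts = []
    for i in range(1, len(rates_per_round)):
        drift = rates_per_round[i] - rates_per_round[i - 1]
        if abs(drift) > max_drift:
            alerts.append((i, drift))
    return alerts

# Positive-prediction rate recorded after each retraining round (illustrative).
print(detect_feedback_drift([0.12, 0.13, 0.21, 0.30]))  # rounds 2 and 3 flagged
```

Any flagged round should prompt a manual review of where the new training data came from before the model is redeployed.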
To the extent that it is strictly necessary for the purposes of ensuring negative bias detection and correction in relation to the high-risk AI systems, the providers of such systems may exceptionally process… (Article 10)
Further to this, if you must process special categories of data, such as genetic data, for bias detection and correction in your AI system, Article 10 sets out a narrow set of conditions permitting this. These conditions must be strictly adhered to and include requirements for pseudonymization, security measures, and deletion of the data once the bias detection and correction process is finished.
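As an illustration of how pseudonymization might look in practice, the sketch below replaces direct patient identifiers with keyed hashes before bias analysis and discards the key afterwards. It shows the general technique only; the field names are hypothetical, and the exact conditions in Article 10 should be checked against the legal text.

```python
import hashlib
import hmac
import secrets

def pseudonymize(records, id_field, key):
    """Replace the direct identifier in each record with a keyed hash (HMAC-SHA256)."""
    out = []
    for rec in records:
        rec = dict(rec)  # copy, so the originals are left untouched
        token = hmac.new(key, rec[id_field].encode(), hashlib.sha256).hexdigest()
        rec[id_field] = token
        out.append(rec)
    return out

# Illustrative patient records; the field names are assumptions for this sketch.
patients = [{"patient_id": "P-42", "variant": "BRCA1", "label": 1}]

key = secrets.token_bytes(32)  # kept separately, under strict access control
pseudonymized = pseudonymize(patients, "patient_id", key)
# ... run bias detection and correction on `pseudonymized` ...
del key  # discard the key once the analysis is finished, per the deletion duty
```

A keyed hash rather than a plain hash matters here: without the key, the tokens cannot be reversed by simply re-hashing a list of known patient IDs.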
(aa) AI systems intended to be used to make inferences about personal characteristics of natural persons on the basis of biometric or biometrics-based data, including emotion recognition systems, with the exception of those mentioned in Article 5; (Annex III, Point 1)
Annex III clarifies that AI systems making inferences about personal characteristics based on biometric data could be considered “high-risk” AI systems under the EU AI Act. Since your AI system is potentially making inferences about health conditions based on genetic data, it may fall within the high-risk classification.
Access to and enjoyment of essential private services and public services and benefits… (Annex III, Point 5)
Finally, the AI system may be classified as high-risk if it is used by public authorities or insurance providers to make decisions, or materially influence decisions, regarding health and life insurance eligibility or the provision of healthcare services and other essential services.
Overall, using AI to make health predictions based on genetic data requires careful adherence to the EU AI Act, particularly if your system is classified as high-risk. This includes the provisions prohibiting certain AI practices, the data governance requirements, and the rules on handling special categories of data. In addition, you may need to comply with other privacy legislation, such as the GDPR.
In accordance with the EU AI Act, here are the key regulations and corresponding interpretations related to your AI tool for predicting diseases based on patients’ medical history and genetic data:
“Artificial intelligence system (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, influencing physical or virtual environments.” (Article 3(1))
Your tool that predicts diseases using medical history and genetic data falls under this definition. As a result, the AI Act regulates it.
‘Provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark. (Article 3(2))
As the developer of this AI tool, you would be classified as a ‘provider’. Consequently, you are subject to the provider obligations under the Act, including those arising from the use of sensitive genetic and health data.
Now let’s consider its classification:
AI systems intended to be used by or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, including healthcare services. (Annex III, point 5(a))

AI systems intended to be used for biometric identification of natural persons. (Annex III, point 1(a))
Your AI tool could fall under ‘high-risk AI systems’ due to its implications for healthcare and its use of genetic data, which could be deemed biometric data.
For high-risk AI systems, various obligations are set forth:
High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria. (Article 10)

The user should be able to explain the decisions taken by the AI system. (Article 13)

The AI system, the provider itself, or the user informs the natural person exposed to an AI system about their rights to object against the application of such systems to them and to seek judicial redress against decisions taken by or harm caused by AI systems. (Article 52(1))
In light of these provisions, your AI tool must be developed using high-quality data sets, allow its users (doctors) to understand and explain its decisions, and inform patients of their rights, including the right to seek judicial redress.
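How explainability is achieved depends on the model class. As a purely illustrative sketch, the snippet below breaks a linear risk score into per-feature contributions that a clinician could read; the feature names and weights are hypothetical and not from any real model.

```python
def explain_linear_prediction(weights, features, bias=0.0):
    """Break a linear risk score into per-feature contributions."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical weights and patient features; not from any real model.
weights = {"age": 0.02, "brca1_variant": 1.5, "family_history": 0.8}
patient = {"age": 54, "brca1_variant": 1, "family_history": 1}

score, ranked = explain_linear_prediction(weights, patient)
for name, contribution in ranked:
    print(f"{name}: {contribution:+.2f}")  # largest drivers of the score first
print(f"risk score: {score:.2f}")
```

For more complex model classes, the same idea is usually approximated with dedicated attribution methods, but the goal is unchanged: a doctor should be able to say why the system flagged a particular patient.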
It’s worth noting that if your tool undergoes modifications after its initial deployment, it may need to undergo a new conformity assessment:
High-risk AI systems that continue to learn after being placed on the market or put into service shall undergo a new conformity assessment procedure whenever they are substantially modified. (Article 43)
Therefore, a substantial change to your AI system may require a new conformity assessment as per the AI Act.
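What counts as a ‘substantial’ modification is ultimately a legal judgement, but engineering teams often add an automated trigger that flags retrained models for review. The heuristic below, including the fingerprint comparison and the metric tolerance, is an illustrative assumption rather than anything the Act specifies.

```python
import hashlib
import json

def model_fingerprint(weights: dict) -> str:
    """Stable hash of the model parameters, used to detect any change."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def needs_reassessment(old, new, metric_tolerance=0.02):
    """Heuristic trigger: the parameters changed AND the validation metric moved
    beyond the tolerance. Both thresholds are illustrative assumptions."""
    changed = model_fingerprint(old["weights"]) != model_fingerprint(new["weights"])
    drifted = abs(old["val_accuracy"] - new["val_accuracy"]) > metric_tolerance
    return changed and drifted

assessed = {"weights": {"age": 0.02}, "val_accuracy": 0.91}
retrained = {"weights": {"age": 0.05}, "val_accuracy": 0.87}
print(needs_reassessment(assessed, retrained))  # True -> escalate for review
```

A `True` result here should route the new model version to whoever owns the conformity assessment decision; the code itself cannot decide what is legally substantial.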
Please note that these requirements may be adapted in line with technical progress. Also remember that your use of sensitive health and genetic data will likely be regulated by the GDPR alongside the AI Act. We recommend consulting GDPR and healthcare regulatory experts to ensure full compliance.