Which of the amendments are related to high-risk systems and startups? Are there non-obvious ways an AI model can be classified as high-risk?
The amendments strengthen transparency and explainability requirements for AI systems, especially those that interact with people or generate synthetic content. This may create extra compliance work for startups building consumer-facing AI products, which will need technical measures for explaining outputs and for obtaining and managing consent. Some exceptions are made for creative works, journalism, and research conducted with consent.
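To make that burden concrete, here is a minimal sketch of the two measures a consumer-facing chat product might need: an AI-interaction notice attached to every reply and an auditable consent record. Everything here is hypothetical (`ChatService`, `record_consent`, the notice text); the amendments prescribe the obligations, not any particular implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical notice text; the Act requires disclosure, not specific wording.
AI_DISCLOSURE = "You are interacting with an AI system."

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str        # e.g. "emotion recognition" per amended Article 52(2)
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class ChatService:
    """Illustrative consumer-facing chat wrapper, not a real library."""

    def __init__(self) -> None:
        self.consents: list[ConsentRecord] = []

    def record_consent(self, user_id: str, purpose: str, granted: bool) -> None:
        # Keep an auditable trail of each consent decision.
        self.consents.append(ConsentRecord(user_id, purpose, granted))

    def reply(self, user_id: str, message: str) -> str:
        # Prefix every model answer with the disclosure notice.
        answer = self._generate_reply(message)  # stand-in for the real model call
        return f"[{AI_DISCLOSURE}]\n{answer}"

    def _generate_reply(self, message: str) -> str:
        return f"(model output for: {message!r})"

service = ChatService()
service.record_consent("user-42", "emotion recognition", granted=True)
print(service.reply("user-42", "How do I reset my password?"))
```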
The updates also broaden the scope of high-risk AI systems to include all biometric identification and categorization systems and some credit-scoring systems. Emotion recognition is prohibited for government use. Notified bodies assessing high-risk AI can now require access to the datasets and models behind those systems. Public authorities and large online platforms must register their use of high-risk AI in an EU database.
Some non-obvious ways an AI model could now be classified as high-risk:
- If used to generate or spread misinformation or “deepfakes”
- If used by dating apps or insurance firms for profiling people
- If an open-source or “off-the-shelf” model is used in a context that makes its outputs high-risk
Overall, the amendments strengthen individuals’ rights regarding AI and increase the obligations on companies and governments that deploy it. The more burdensome requirements and penalties around high-risk systems could stifle innovation if applied too broadly, but greater trust and accountability may make the European public and legislators more receptive to AI over time.
Here is the changelog for the amendments to the AI Act from June 14, 2023, grouped by theme:
Transparency and explainability:
- Article 13(3)(d) amended to require providers to implement technical measures that facilitate interpretation of AI system outputs by deployers as well as users.
- Article 52(1) amended to require providers of AI systems that interact with or generate content for natural persons to inform them that they are interacting with, or exposed to, an AI system. This applies unless it is obvious from the context or consent has been obtained.
- Article 52(2) amended to require users of emotion recognition and biometric categorization systems to inform the persons exposed to them and obtain their consent. This does not apply where use is permitted by law in connection with criminal offenses.
- Article 52(3) amended to require users who generate or manipulate image, audio, or video content with AI to disclose that it was artificially created or manipulated. This does not apply to creative works, journalism, or research where consent was obtained; a sketch of such a disclosure follows this list.
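Since these transparency items are the most implementation-heavy, here is the promised sketch of an Article 52(3)-style disclosure for generated images, assuming PNG output and the Pillow library; the metadata key name is an illustrative choice, not one mandated by the amendments.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(image: Image.Image, path: str) -> None:
    # Embed a machine-readable notice in the file's text metadata.
    meta = PngInfo()
    meta.add_text("ai_disclosure", "Artificially generated or manipulated by an AI system.")
    image.save(path, pnginfo=meta)

img = Image.new("RGB", (64, 64), color="gray")  # stand-in for model output
save_with_disclosure(img, "output.png")
print(Image.open("output.png").text["ai_disclosure"])
```

Embedded metadata alone only helps downstream tools detect the label; a visible caption or overlay may also be needed for the persons actually viewing the content.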
High-risk AI systems:
- Article 6(2) amended to specify that the Commission will issue guidance on when a high-risk AI system’s outputs would pose a significant risk of harm.
- Annex III amended to clarify that biometric identification and categorization systems are high-risk, except those used for verification and some healthcare purposes. Government use of emotion recognition is prohibited.
- New point 8 in Annex III classifies AI used to influence democratic processes or voters without their consent as high-risk.
- New point 5(b) in Annex III adds AI systems used for credit scoring as high-risk; a sketch of an interpretation aid for such a system follows this changelog.
- Article 43 amended to allow notified bodies to require access to the training, validation, and testing datasets of high-risk AI systems to assess conformity, including through remote access, and to the models and their parameters where necessary.
- Article 51 amended to require public authorities and large online platforms to register the use of high-risk AI systems in an EU database. Others may register voluntarily.
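Finally, tying the Annex III credit-scoring entry to the Article 13(3)(d) interpretation requirement, here is the promised sketch of an interpretation aid: alongside the score, the deployer gets each feature’s contribution. The features and weights are invented for illustration; a real high-risk system would need far more than this.

```python
# Invented weights for a toy linear credit-scoring model.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
BIAS = 0.1

def score_with_explanation(applicant: dict[str, float]) -> tuple[float, dict[str, float]]:
    # Per-feature contributions make the linear score directly interpretable.
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return BIAS + sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 1.2, "debt_ratio": 0.5, "years_employed": 0.8}
)
print(f"score={score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```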