Which of the amendments are related to high-risk systems and startups? Are there non-obvious ways an AI model can be classified as high-risk?

The amendments strengthen transparency and explainability requirements for AI systems, especially those that interact with people or generate synthetic content. This may create extra work for startups building consumer-facing AI products, which would need to implement technical measures for providing explanations and managing user consent. Some exceptions are made for creative works, journalism, and research conducted with consent.
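
To make the transparency burden concrete, here is a minimal sketch, in Python, of what one such technical measure might look like: tagging generated output with a user-facing notice and a machine-readable disclosure record. All names here (Disclosure, with_disclosure, "demo-model-v1") are hypothetical illustrations, not anything prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Disclosure:
    """Machine-readable record marking a piece of output as AI-generated."""
    model_id: str       # identifier of the generating model
    generated_at: str   # ISO 8601 timestamp
    notice: str         # human-readable notice shown to the user

def with_disclosure(model_id: str, text: str) -> tuple[str, Disclosure]:
    """Attach a user-facing AI notice and a metadata record to model output."""
    disclosure = Disclosure(
        model_id=model_id,
        generated_at=datetime.now(timezone.utc).isoformat(),
        notice="This content was generated by an AI system.",
    )
    return f"[AI-generated] {text}", disclosure

# Example: any response shown to an end user carries the disclosure.
labeled, meta = with_disclosure("demo-model-v1", "Here is your requested summary.")
print(labeled)
print(meta)
```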

The updates also broaden the scope of high-risk AI systems to include all biometric identification/categorization systems and some credit-scoring systems. Emotion recognition is prohibited for government use. The new rules require datasets and models to be shared with the notified bodies that assess high-risk AI, and public authorities and large online platforms using high-risk AI must register those systems.

Some non-obvious ways an AI model could now be high-risk:

- A recommender system operated by a very large online platform (as designated under the Digital Services Act) is explicitly listed as high-risk, even though recommendation sits far from classic safety-critical domains.
- Systems intended to influence voters in elections or referenda are classified as high-risk.
- A biometric categorization feature (for example, inferring age or gender from a face) can trigger high-risk obligations even if it never identifies anyone.
- A credit-scoring component embedded in a larger consumer product can make the whole system high-risk, even if scoring is a minor feature.
- A general-purpose model can fall under the high-risk regime once it is deployed in a listed high-risk context, regardless of the developer's original intent.
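
As a rough illustration of why classification can be non-obvious, the toy sketch below (with hypothetical category names, not the Act's actual taxonomy) shows how the risk label follows the deployment context rather than the model itself: the same model can be high-risk in one use and outside the category in another.

```python
# Toy rule check: risk classification keyed on deployment context, not the model.
# Category names are hypothetical illustrations, not the Act's actual taxonomy.
HIGH_RISK_CONTEXTS = {
    "biometric_categorization",
    "credit_scoring",
    "vlop_recommender",     # recommender system of a very large online platform
    "election_influence",
}

def classify(deployment_context: str) -> str:
    """Return a coarse risk label for a given deployment context."""
    return "high-risk" if deployment_context in HIGH_RISK_CONTEXTS else "not high-risk"

# The same underlying model draws different classifications depending on use:
print(classify("credit_scoring"))          # high-risk
print(classify("small_site_recommender"))  # not high-risk
```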

The amendments overall strengthen individuals’ rights regarding AI and increase the obligations on companies and governments deploying it. The more burdensome requirements and penalties around high-risk systems could stifle innovation if applied too broadly, but greater trust and accountability may make the European public and legislators more receptive to AI over time.

Here is the changelog for the amendments to the AI Act from June 14, 2023, grouped by theme:

Transparency and explainability:

- Strengthened transparency and explainability requirements for AI systems that interact with people or generate synthetic content.
- Added exceptions for creative works, journalism, and research conducted with consent.

High-risk AI systems:

- Broadened the high-risk category to cover all biometric identification/categorization systems and some credit-scoring systems.
- Prohibited emotion recognition for government use.
- Required datasets and models to be shared with the notified bodies assessing high-risk AI.
- Required public authorities and large online platforms using high-risk AI to register those systems.