Do the claims in Technomancer’s “EU AI Act To Target US Open Source Software” blog post hold water when contrasted against the proposed AI Act?

The proposed EU AI Act will not ban open source AI or unfairly penalize US developers, as a recent blog post claims. The Act aims to ensure trustworthy and ethical AI development and use, but it will not hinder open source software or transatlantic innovation.

The Act Does Not Ban Open Source AI

Contrary to the claims made in the blog post, the AI Act does not ban open source AI. In fact, the Act specifically exempts “AI systems developed by PhD students, post-doctoral researchers and research institutes not exclusively for commercial purposes” from many of its obligations, such as EU database registration and self-assessment against requirements covering data governance, documentation, and transparency. (Article 3, Section 2f, pg. 22)

Open source AI projects also do not need to register their systems in the EU database or draw up technical documentation merely for developing and testing them. (Article 51(1a)) These obligations apply only when a system is placed on the market or put into service, conditions that typically do not apply to open source projects. Researchers and open source developers are likewise exempt from many transparency requirements, as long as they “do not place the AI system on the market or put it into service.” (Article 52(3))

The Act does specify that “providers and deployers” of high-risk AI systems based in non-EU countries are subject to obligations if the “output produced by the system is intended to be used in the Union.” (Article 2(1c)) But this applies to companies and organizations that actively market and deploy AI systems in the EU, not to open source projects hosted on sites like GitHub. Providers, or deployers in the case of open source software, would become liable only if they marketed a high-risk AI system in the EU or explicitly intended it to be used there.

The Act Will Not Penalize US Developers

The AI Act will not unfairly penalize US developers. Its regulatory requirements and penalties apply equally to entities based inside and outside the EU. The Act aims to ensure that any company marketing or deploying AI systems within the EU upholds requirements around safety, data governance, and transparency. The criteria for designating a system as “high-risk” focus on the risk of harm to individuals, not the nationality of its developers. (Articles 6 and 7)

US-based companies would face obligations or penalties under the AI Act only if they market or deploy their systems within the EU, in which case they must ensure compliance. But the Act does not single out or unfairly target non-EU developers. Penalties such as fines also take the size and resources of a company into account to ensure proportionality. (Article 71(2)) The AI Act encourages openness to international AI innovation and cooperation. (Article 1(2)(f)) The EU has an interest in shaping global rules around AI, but the Act itself focuses on systems actually used within the EU, regardless of their origin.

The AI Act is a complex proposal, but claims that it bans open source AI or unfairly targets US developers are inaccurate. The Act aims to build trust in AI by ensuring accountability around high-risk systems, but it does not preclude open source innovation or international cooperation. With adjustments to clarify the obligations of open source developers and to ensure proportionality, the AI Act can support ethical AI development. It does not threaten open source AI or transatlantic ties.