Is there a risk that AI Act will inadvertently contribute to regulatory capture by imposing bureaucratic requirements that will favour only the largest companies?

Gist 1

Here is an analysis of the articles in the AI Act relevant to small and medium-sized enterprises (SMEs):

The specific interests and needs of SMEs shall be taken into account when setting the fees for third-party conformity assessment under this Article, reducing those fees proportionately to their size and market share; (Article 43)

This quote shows that the AI Act takes steps to lessen the potential fee burden on SMEs for third-party conformity assessments, reducing those fees in proportion to an SME's size and market share. This sensitivity to scale in regulation could help mitigate the risk of regulatory capture.
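The Act prescribes no formula for this reduction. As a purely hypothetical illustration of what "proportionate" fee scaling could mean (the function name, the linear rule, and the figures below are assumptions, not drawn from the Act):

```python
def scaled_assessment_fee(base_fee_eur: float, market_share: float) -> float:
    """Hypothetical sketch of a conformity-assessment fee scaled linearly
    with a provider's market share (0.0 to 1.0), as one possible reading of
    'reducing those fees proportionately to their size and market share'
    (Article 43). The Act itself specifies no such formula."""
    # Clamp the share into [0, 1] so the fee never exceeds the base fee.
    return base_fee_eur * min(max(market_share, 0.0), 1.0)
```

Under this sketch, a provider holding 10% of its market would pay 10% of the base fee; a dominant provider would pay the full fee.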

Member States shall establish at least one AI regulatory sandbox at national level, which shall be operational at the latest on the day of the entry into application of this Regulation. This sandbox can also be established jointly with one or several other Member States; (Article 53)

This segment indicates that the AI Act mandates the establishment of at least one AI regulatory sandbox at national level in every Member State. These sandboxes provide a controlled environment in which AI systems can be developed and tested. They are a beneficial provision for SMEs, offering a supportive environment that could reduce the costs of early-stage development and testing.

provide SMEs and start-ups, established in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfil the eligibility conditions; (Article 55)

This passage obliges Member States to give SMEs and start-ups priority access to AI regulatory sandboxes. Such a clause could be of significant help, allowing them to innovate and test their AI systems in a controlled environment without the full weight of regulatory pressure. It could also curb the risk of larger companies exerting unfair influence over the regulatory environment.

While it is true that the AI Act imposes certain bureaucratic requirements that larger companies can more easily absorb, it also includes provisions that aim to support SMEs and keep the environment competitive and innovative. This balance can help mitigate the risk of regulatory capture.

Gist 2

The purpose of this Regulation is to promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy and the rule of law, and the environment from harmful effects of artificial intelligence systems in the Union while supporting innovation (Article 1)

The AI Act, as stated in Article 1, aims to strike a balance between promoting innovation and maintaining high standards for health, safety, and fundamental rights. Maintaining this equilibrium, however, may produce complex regulations whose compliance costs fall disproportionately on smaller companies.

For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider shall opt for one of the following procedures (Article 43)

Article 43 requires providers of high-risk AI systems to choose a conformity assessment procedure. This may disadvantage smaller organizations, since rigorous compliance processes are easier for larger companies to handle with their typically greater resources.

AI regulatory sandboxes shall, in accordance with criteria set out in Article 53a, provide for a controlled environment that fosters innovation and facilitates the development, testing, and validation of innovative AI systems for a limited time before their placement on the market (Article 53)

In contrast, the AI Act also introduces AI regulatory sandboxes in Article 53. These are designed to foster innovation in a supportive environment, reducing bureaucratic constraints and offering smaller companies a more equitable opportunity to test and roll out their AI systems.

Member States shall undertake the following actions:…(a) provide SMEs and start-ups, established in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfill the eligibility conditions (Article 55)

Moreover, Article 55 clearly indicates an intent to aid smaller organizations by giving them priority access to AI regulatory sandboxes, reflecting the AI Act's attempt to actively reduce potential barriers for these companies.

(ca) foster the participation of SMEs and other relevant stakeholders in the standardization development process… The specific interests and needs of the SMEs, start-ups and users shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to development stage, their size, market size and market demand. (Article 55)

Article 55 also contains a provision through which the AI Act seeks to limit the potentially high conformity assessment fees faced by SMEs and start-ups.

In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. (Recital 85)

Recital 85 highlights the regulation's adaptability, allowing its components to be amended as necessary. However, it does not discuss mechanisms or controls to prevent this delegated power from being swayed unduly in favor of larger corporations.

It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. (Recital 85)

Moreover, Recital 85 emphasizes the importance of broad consultations during the preparatory stage. In theory, this reduces the risk of regulatory capture by ensuring that multiple stakeholders have a say in the process.

These consultations should involve the participation of a balanced selection of stakeholders, including consumer organisations, civil society, associations representing affected persons, businesses representatives from different sectors and sizes, as well as researchers and scientists. (Recital 85)

The emphasis on inclusive consultations across various stakeholders, including business representatives from different sectors and sizes as well as researchers and scientists, suggests an aim to consider all perspectives when shaping the regulation.

In conclusion, there are complexities inherent in the AI Act that could favor larger corporations, given their typically greater resources for dealing with bureaucratic requirements and compliance procedures. However, the Act has provisions, such as the AI regulatory sandboxes and priority access for SMEs, that could mitigate regulatory capture. Diverse consultations are also part of the law-making process, further diminishing the chances of regulatory capture.

Gist 3

“For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following procedures.” (Article 43)

The requirements for providers of high-risk AI systems under Article 43 demand procedural adherence that could raise costs for smaller companies: aligning internal processes, potentially redesigning products, and expanding compliance activities.

“Providers of high-risk AI systems shall have a quality management system in place that ensures compliance with this Regulation. It shall be documented in a systematic and orderly manner in the form of written policies, procedures or instructions, and can be incorporated into an existing quality management system under Union sectoral legislative acts. It shall include at least the following aspects” (Article 17)

The requirement for a quality management system, expressed in Article 17, could place a significant burden on smaller businesses due to the associated bureaucratic overhead.

“Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.” (Article 61)

Continuous post-market monitoring, as required by Article 61, might pose another obstacle for smaller businesses because of the infrastructure investment and resources needed for data collection and analysis.

“The specific interests and needs of SMEs shall be taken into account when setting the fees for third-party conformity assessment under this Article, reducing those fees proportionately to their size and market share;” (Article 43)

While the Act does provide for reduced fees for SMEs, as seen in this part of Article 43, such provisions may be insufficient if the other obligations remain disproportionately burdensome.

“Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation.” (Recital 55)

This part of the Act places the obligation of AI system compliance on the responsible manufacturer or provider, an obligation that larger corporations with greater resources can more easily manage.

“In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on AI literacy, awareness raising and information communication. Member States shall utilise existing channels and where appropriate, establish new dedicated channels for communication with SMEs, start-ups, user and other innovators… Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. The Commission shall regularly assess the certification and compliance costs for SMEs and start-ups, including through transparent consultations with SMEs, start-ups and users and shall work with Member States to lower such costs.” (Recital 73)

Recital 73 outlines active provisions aimed at promoting and protecting AI innovation, especially among small-scale providers, indicating the Act's intent to mitigate the risk of regulatory capture by larger entities.

“Member States shall undertake the following actions: provide SMEs and start-ups, established in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfill the eligibility conditions;” (Article 55)

Article 55 requires Member States to give SMEs and start-ups priority access to AI regulatory sandboxes.

“The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into account the interests of SMEs and start-ups and their economic viability;” (Article 71)

Concerning penalties, Article 71 explicitly requires that the economic viability of SMEs and start-ups be taken into account.

“Non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5 shall be subject to administrative fines of up to 40,000,000 EUR or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher” (Article 71)

Despite the provisions protecting the economic viability of SMEs and start-ups, the penalties for non-compliance defined in Article 71 can still be severe.
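The "whichever is higher" rule in the quoted fine provision can be made concrete with a short sketch (the function name is ours; the EUR 40,000,000 cap and 7% rate are the figures quoted above):

```python
def max_fine_article5_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for an Article 5 infringement
    by a company, per the quoted Article 71 text: the higher of a fixed
    EUR 40,000,000 and 7% of total worldwide annual turnover."""
    return max(40_000_000.0, 0.07 * worldwide_annual_turnover_eur)
```

For a start-up with EUR 10 million in turnover, the 7% prong would yield only EUR 700,000, so the fixed EUR 40 million ceiling governs; for a corporation with EUR 2 billion in turnover, the ceiling rises to EUR 140 million. The fixed floor thus looms far larger relative to a small company's resources.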

“The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system…” (Annex IV)

The comprehensive technical documentation mandated by Annex IV may also pose a challenge for smaller entities due to its rigorous requirements.

Ultimately, while the AI Act was developed with the intention of ensuring safety and transparency in AI development and deployment, there is a plausible risk of regulatory capture. The bureaucratic requirements posed by the Act may inadvertently favor larger corporations with more resources. While the Act does have provisions aimed at making compliance feasible for smaller companies, the actual implementation costs might still pose substantial barriers. With proper enforcement of the Act's stipulations regarding SME and start-up interests, however, this risk might be mitigated to an extent.

Gist 4

This Regulation applies to providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country. (Article 2)

This shows that all AI system providers, regardless of size or location, are subject to the AI Act if they operate in the EU. This ensures equal regulatory treatment across providers, but the risk of regulatory capture depends on how complex the resulting requirements are in practice.

Member States shall establish at least one AI regulatory sandbox at national level, which shall be operational at the latest on the day of the entry into application of this Regulation. (Article 53)

The establishment of AI regulatory sandboxes allows close monitoring of AI systems and can act as a safeguard against regulatory capture. However, access to these sandboxes must be distributed fairly to avoid disadvantaging smaller enterprises.

Member States shall promote research and development of AI solutions which support socially and environmentally beneficial outcomes… Member States shall support civil society and social stakeholders to lead or participate in such projects; (Article 54a)

By promoting a wide range of developers and uses for AI, the AI Act can potentially reduce the risk of power concentration, and consequently, regulatory capture.

Member States shall undertake the following actions: provide SMEs and start-ups, established in the Union, with priority access to the AI regulatory sandboxes, to the extent that they fulfil the eligibility conditions; (Article 55)

Priority sandbox access for SMEs and start-ups can guard against regulatory capture by making the benefits of sandbox testing accessible to smaller companies.

In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, applicable to infringements of this Regulation by any operator, and shall take all measures necessary to ensure that they are properly and effectively implemented and aligned with the guidelines issued by the Commission and the AI Office pursuant to Article 82b. (Article 71)

Penalties ensure that the regulation is enforced, but to avoid regulatory capture they must be applied proportionately across organizations of all sizes. Smaller entities could be disproportionately affected by high fines, which would favor larger corporations and thereby promote regulatory capture.

The AI systems specifically referred to under points 1 to 8a stand for critical use cases and are each considered to be high-risk AI systems pursuant to Article 6(2), provided that they fulfil the criteria set out in that Article: (Annex III)

While regulations for high-risk AI systems are necessary due to their impact on critical sectors, their inherent complexity may inadvertently favor larger tech companies due to the significant resources required for compliance.

In conclusion, while the EU AI Act has provisions aimed at preventing regulatory capture, it also includes potential risk factors. The stringent regulations could favor larger corporations capable of absorbing higher costs of compliance, potentially leading to regulatory capture. To ensure the Act supports fair competition, regulators must consider these aspects and offer more support for smaller entities.