What level of autonomy is required for software to be viewed as an AI system?

Gist 1

“‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments;” (Article 3(1))

According to Article 3(1) of the AI Act, an artificial intelligence system (AI system) is defined as a machine-based system designed to operate with varying degrees of autonomy. Notably, the Act doesn't specify a concrete level of autonomy for software to be classified as an AI system. Rather, the AI system should be designed to operate with varying degrees of autonomy, enabling the generation of outputs like predictions, recommendations, or decisions that influence physical or virtual environments.

“AI systems are designed to operate with varying levels of autonomy, meaning that they have at least some degree of independence of actions from human controls and of capabilities to operate without human intervention.” (Recital 6)

Recital 6 reinforces Article 3(1), explaining that AI systems are designed with a degree of independence that allows them to operate without human intervention. Again, no specific level of autonomy is defined for software to be considered an AI system. The level of autonomous operation varies; the requirement is some degree of independence from human controls.

“AI systems often have machine learning capacities that allow them to adapt and perform new tasks autonomously.” (Recital 6a)

Recital 6a underlines that AI systems often exhibit machine learning capabilities, allowing them to adapt and perform tasks autonomously. This indicates the system's ability to adapt and operate independently, echoing the earlier point that autonomy is not tied to a specific degree but is instead related to the system's learning capacities.

“If this larger system would not function without the AI component in question, then the entire larger system should be considered as one single AI system under this Regulation.” (Recital 6b)

Recital 6b conveys that a larger system should be considered a single AI system if it would not function without the AI component in question. This suggests that a software's classification as part of an AI system depends not only on its own level of autonomy but also on the significance of its role within the larger system.

“the governance system the deployer will put in place, including human oversight, complaint-handling and redress.” (Article 29a)

Article 29a stipulates that deployers of high-risk AI systems need to establish a governance system that includes human oversight. It doesn't directly address the level of autonomy required for a system to be classified as AI; rather, it emphasizes the importance of human governance where high-risk AI systems are deployed.

To sum up, the AI Act does not state a specific level of autonomy required for software to be classified as an AI system. It gives prominence to the system's capability to operate independently, produce consequential outputs, and play a significant role, with the level of autonomy being variable. For high-risk systems, human oversight forms a requisite part of their governance.

Gist 2

“‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.” (Article 3(1))

The definition of an AI system, as stated in Article 3(1), doesn’t specify a precise level of autonomy but emphasizes that AI systems are designed to operate with varying degrees of it. The level of autonomy might differ based on the objectives the system is designed to achieve.

The system's ability to independently generate outputs like predictions, decisions, or recommendations that can influence physical or virtual environments could be a stronger factor for classification as an AI system than its level of autonomy. The important characteristic here is the system's capability to operate with some independence to produce outputs that influence its surroundings.

“AI systems are designed to operate with varying levels of autonomy, meaning that they have at least some degree of independence of actions from human controls and of capabilities to operate without human intervention.” (Recital 6)

Recital 6 expands on the concept of autonomy within AI systems, suggesting that at least some level of independence from human control is required, including the ability to operate without human intervention. This could be a key factor distinguishing AI systems from other, less complex software systems.

“AI systems often have machine learning capacities that allow them to adapt and perform new tasks autonomously.” (Recital 6a)

Recital 6a further elaborates on autonomy requirements, indicating that AI systems typically have machine learning capabilities, allowing them to adapt and autonomously perform new tasks. It underscores the system’s ability to self-optimize based on input data and independently generate outputs as crucial to its autonomy.

In summary, the level of autonomy required for software to be classified as an AI system under the EU AI Act can vary, but it should at least involve some degree of independence from human control, along with the capability of performing tasks autonomously. These requirements are not quantitatively specified and hence may be subject to interpretation and to the specifics of the software system in question.

Gist 3

‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments;
*(Article 3(1))*

This definition makes it clear that any software classified as an AI system must be capable of operating with varying levels of autonomy. The key takeaway here is the term “varying levels,” which implies that the system does not necessarily need to be fully autonomous to be classified as AI. It could have low to moderate levels of autonomy and still be considered an AI system. This ability to operate autonomously to an extent should enable the system to generate outputs like predictions, recommendations, or decisions, which can influence physical or virtual environments. The influence of an AI system's outputs is directly linked to its level of autonomy, which corresponds to the degree of independence the system has in making decisions.

The regulation avoids specifying an exact level of autonomy, allowing for broad coverage of AI systems across the autonomy spectrum. This implies that even systems requiring substantial human intervention could potentially be deemed AI systems, provided they meet the other characteristics specified for an AI system. Therefore, the level of autonomy is not a fully defining attribute, but one aspect of an AI system. The regulation acknowledges the diversity and broad spectrum of AI by referring to “varying” levels of autonomy.

AI systems are designed to operate with varying levels of autonomy, meaning that they have at least some degree of independence of actions from human controls and of capabilities to operate without human intervention. (Recital 6)

From the above quotation, AI systems, as defined in the AI Act, have differing levels of autonomy, signifying that they may exhibit some level of independence from human control. These systems can, to a certain extent, function without human intervention. The required level of autonomy for a system to be considered an AI system is not explicitly established. This suggests that even a relatively low level of autonomy could qualify a system as AI, provided it also meets the other defining characteristics.

The objectives of the AI system may be different from the intended purpose of the AI system in a specific context. The reference to predictions includes content, which is considered in this Regulation a form of prediction as one of the possible outputs produced by an AI system. (Recital 6)

This further underscores the autonomous character of AI systems. According to the regulation, an AI system can operate according to its own objectives, which may differ from its intended purpose in a specific context. Operating towards objectives rather than adhering strictly to initial programming is another testament to an AI system's autonomy, as is its capability to generate predictions (or other types of output) based on those objectives.

The review of Annex I didn't yield additional substantial information on the level of autonomy required for software to be recognized as an AI system under the EU AI Act. This further underscores that the Act accepts a spectrum of AI systems with varying degrees of autonomy rather than establishing a specific threshold. Thus, the level of autonomy required for software to be viewed as an AI system is not specifically stipulated but can range from low to high, provided the system meets the additional criteria outlined in the definition of an AI system.

Gist 4

“‘artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments;” (Article 3(1))

The term ‘artificial intelligence system’ (AI system) is broad and encompasses any machine-based system designed to function with varying degrees of autonomy. This means that a piece of software does not require a specific level of autonomy to be categorized as an AI system. As long as it can operate with some degree of autonomy, whether low or high, and can generate outputs like predictions, recommendations, or decisions that can impact physical or virtual environments, it can be deemed an AI system.

The regulation does not specify particular levels of autonomy, suggesting that any level of autonomy is considered sufficient as long as the system can function and produce the intended outputs. This indicates that the definition of an AI system under the proposed AI Act is broad and might cover a wide range of software applications, from simple rule-based systems to complex learning algorithms.
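
To illustrate the low end of that spectrum, here is a hypothetical sketch of a purely rule-based system that nonetheless generates one of the output types named in Article 3(1), a recommendation. Whether such a system would actually fall under the Act is precisely the interpretive question this gist raises, not something the code settles; the function name, rules, and inputs are invented for illustration only.

```python
# Hypothetical sketch: a purely rule-based recommender, the "simple" end of
# the spectrum this gist describes. All rules and inputs are invented.

def recommend_route(weather: str, traffic_level: int) -> str:
    """Generates a 'recommendation' (one of the output types named in
    Article 3(1)) from inputs, with no human intervention at runtime."""
    if weather == "snow":
        return "highway"        # fixed expert rule
    if traffic_level > 7:
        return "back roads"     # fixed expert rule
    return "default route"

print(recommend_route("snow", 3))    # -> highway
print(recommend_route("clear", 9))   # -> back roads
```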

“AI systems often have machine learning capacities that allow them to adapt and perform new tasks autonomously.” (Recital 6a)

The above statement provides context about the nature of AI systems, suggesting that a system may be considered an AI system if it has machine learning capabilities that allow it to perform tasks autonomously and adapt to new tasks. This indicates a level of autonomy where the system can modify its behavior or output without human intervention.

“Machine learning refers to the computational process of optimizing the parameters of a model from data, which is a mathematical construct generating an output based on input data. Machine learning approaches include, for instance, supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning with neural networks.” (Recital 6a)

The recital provides a definition of machine learning, explaining that it is a computational process that optimizes the parameters of a mathematical model from data in order to generate outputs based on inputs. It further specifies that machine learning approaches include supervised, unsupervised, and reinforcement learning, as well as deep learning with neural networks. This could suggest that, to possess the necessary level of autonomy, an AI system needs the capability to apply such methods independently.
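
To make the recital's definition concrete, the following is a minimal sketch of "optimizing the parameters of a model from data" in the simplest supervised-learning case. The data, model, and hyperparameters are invented for illustration and have no basis in the Act itself.

```python
# Minimal supervised learning: fit y = w*x + b to data by gradient descent.
# The recital's "parameters of a model" are w and b; the "output based on
# input data" is the prediction w*x + b. Data and hyperparameters are
# illustrative only.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # (input, target)

w, b = 0.0, 0.0        # model parameters, to be optimized from data
lr = 0.01              # learning rate

for _ in range(5000):  # gradient-descent steps on mean squared error
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# Once trained, the model generates outputs (predictions) for new inputs.
print(f"w={w:.2f}, b={b:.2f}, prediction for x=5: {w * 5 + b:.2f}")
```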

“This Regulation is aimed at addressing new potential risks that may arise by delegating control to AI systems, in particular to those AI systems that can evolve after deployment.” (Recital 6a)

This statement implies that AI systems capable of evolving after deployment, i.e., of learning and adapting from their interactions, are included in the Regulation's view of AI systems. The ability of an AI system to evolve can thereby be associated with a certain level of autonomy, where the system adjusts its behavior or operation without being explicitly programmed to perform the specific task.
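
As an illustration of "evolving after deployment", the sketch below shows a toy model whose parameters keep updating from new observations at runtime, so its behavior changes without any code change. The class name and data are hypothetical, chosen only to demonstrate the pattern.

```python
# Illustrative only: a model that keeps updating its parameters from new
# observations after deployment ("online" / incremental learning).

class OnlineMeanEstimator:
    """Predicts the running mean of observed values and updates itself
    with every new observation: behavior changes without reprogramming."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0

    def predict(self) -> float:
        return self.mean

    def update(self, observation: float) -> None:
        # Incremental mean update: the "evolution" happens at runtime,
        # driven by data rather than by a code change.
        self.n += 1
        self.mean += (observation - self.mean) / self.n

model = OnlineMeanEstimator()
for obs in [10.0, 12.0, 8.0, 11.0]:  # data arriving after deployment
    model.update(obs)
    print(f"prediction after {model.n} observations: {model.predict():.2f}")
```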

“The function and outputs of many of these AI systems are based on abstract mathematical relationships that are difficult for humans to understand, monitor and trace back to specific inputs.” (Recital 6a)

This implies that AI systems can operate in ways that are difficult for humans to comprehend, indicating a significant level of autonomy wherein the AI system can derive intricate conclusions or actions from input data that a human may not anticipate or understand.

“Comparably simpler techniques such as knowledge-based approaches, Bayesian estimation or decision-trees may also lead to legal gaps that need to be addressed by this Regulation, in particular when they are used in combination with machine learning approaches in hybrid systems.” (Recital 6a)

This statement suggests that even AI systems employing simpler techniques would need to be considered under the Regulation when used together with machine learning in hybrid systems. This extends the view of AI systems beyond highly autonomous models to systems that retain a certain degree of interpretability and predictability.
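
A minimal sketch of such a hybrid system, assuming an invented fraud-flagging scenario, might combine a hand-written knowledge-based rule with a data-derived threshold standing in for a learned component. Everything here (names, rules, numbers) is hypothetical; the point is only the rule-plus-learned-component structure that Recital 6a describes.

```python
# Illustrative hybrid system: a fixed expert rule combined with a threshold
# derived from data, standing in for a real machine learning model.

def knowledge_based_rule(transaction_amount: float) -> bool:
    """Hard-coded expert rule: flag very large transactions."""
    return transaction_amount > 10_000

def learn_threshold(historical_amounts: list[float]) -> float:
    """'Learned' component: derive a flagging threshold from data
    (here simply mean + 2 standard deviations, as a toy stand-in)."""
    n = len(historical_amounts)
    mean = sum(historical_amounts) / n
    var = sum((x - mean) ** 2 for x in historical_amounts) / n
    return mean + 2 * var ** 0.5

historical = [120.0, 80.0, 95.0, 300.0, 110.0]
threshold = learn_threshold(historical)

def hybrid_flag(amount: float) -> bool:
    # Either component can trigger the system's decision ("output").
    return knowledge_based_rule(amount) or amount > threshold

print(hybrid_flag(500.0), hybrid_flag(20_000.0))
```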

There seems to be no direct reference to the level of autonomy in Annex I. Hence, the analysis is based on Article 3(1) and Recital 6a only.

Given the provisions of the AI Act and the broader contexts provided by its definitions and explanations, the level of autonomy required for software to be viewed as an AI system is not strictly quantified. Instead, it encompasses a range, from systems with simpler methodologies to highly complex machine learning models that operate with significant independence. Furthermore, the AI system's ability to modify its behavior or actions without explicit programming, whether by applying machine learning techniques or by evolving after deployment, seems to underline a crucial aspect of autonomy as defined by the Act. Therefore, any software that exhibits a level of self-directed operation, especially in generating outputs such as predictions, recommendations, or decisions impacting physical or virtual environments, can likely be considered to have the necessary autonomy to be recognized as an AI system under the EU AI Act.