What does the AI Act say about the human-centric principle of AI? What does this principle mean in practice?

Gist 1

All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles […] ‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans (Article 4a)

This highlights one of the key human-centric principles in the EU AI Act. The ‘human agency and oversight’ principle means that AI systems should be designed to serve people, protect human autonomy, and respect human dignity. In addition, an AI system should be overseeable and controllable by human operators, so that humans retain overall control and decision-making power.

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they be effectively overseen by natural persons as proportionate to the risks associated with those systems. (Article 14)

This is an extension of the ‘human agency and oversight’ principle set out in Article 4a. It emphasizes that AI systems, especially high-risk ones, should be designed with a proper human-machine interface that allows effective oversight by humans. It reiterates the importance of human involvement and control for high-risk AI systems, primarily so that the associated risks can be assessed and managed.

Natural persons in charge of ensuring human oversight shall have sufficient level of AI literacy… (Article 14)

This aspect of ‘human oversight’ requires that those who are overseeing the AI system should have sufficient AI literacy. This means they should have an understanding of how the system works, which is vital to effectively monitor and control the system. It also supports the goal of transparency and accountability in decision-making processes involving AI, forming a link to the ‘transparency’ principle in Article 4a.

The AI Office shall […] promote public awareness and understanding of the benefits, risks, safeguards and rights and obligations in relation to the use of AI systems; (Article 56b)

This point highlights that the AI Office, an oversight agency, has the task of promoting public awareness and understanding of AI. This is aligned with the idea of promoting a human-centric approach to AI, ensuring that people who use or are affected by AI systems are well-informed about their rights, obligations, and the potential risks and benefits. It also hints towards fostering a society-wide understanding of AI, which is crucial in maintaining human agency in an increasingly AI-driven world.

Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. (Recital 4a)

The AI Act highlights the importance of aligning AI and its regulations with the core values of the EU, such as respect for human dignity, freedom, democracy, equality, and respect for human rights among others as set out in Article 2 of TEU. This alignment with Union values is seen as crucial for addressing the significant societal impact of AI and fostering public trust.

As a pre-requisite, artificial intelligence should be a human-centric technology. It should not substitute human autonomy or assume the loss of individual freedom and should primarily serve the needs of the society and the common good. (Recital 4a)

It emphasizes that AI should be developed as a human-centric technology, meaning it needs to maintain respect for human autonomy and freedom while serving societal needs and the common good. The intention is not to replace humans or curtail their freedoms but to augment their capabilities and contribute positively to society.

(b) a description of the architecture, design specifications, algorithms and the data structures including a decomposition of its components and interfaces, how they relate to one another and how they provide for the overall processing or logic of the AI system; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; (Annex IV Section 2(b))

This provision reflects the human-centric principle because it mandates transparency about the design choices of AI systems, including the assumptions made about specific groups of people. In practice, providers must document the system's logic and design rationale for every target demographic, which helps ensure the system is fair and inclusive.

(e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the deployers, in accordance with Articles 13(3)(d); (Annex IV Section 2(e))

This provision fits into the human-centric principle by mandating that AI system providers assess the need for human oversight. This means the AI doesn't operate independently and unchecked; there is always a human in the loop, ensuring that the AI system doesn't make harmful or inappropriate decisions.
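The "human in the loop" idea above can be sketched in code. This is a hypothetical illustration, not anything prescribed by the Act: a decision whose confidence falls below a threshold is routed to a human reviewer instead of being applied automatically. All names (`Decision`, `CONFIDENCE_THRESHOLD`, `route_decision`) are invented for the example.

```python
# Hypothetical sketch of a human-in-the-loop oversight gate.
# Names and the threshold value are illustrative, not taken from
# the AI Act or any real library.
from dataclasses import dataclass
from typing import Optional

CONFIDENCE_THRESHOLD = 0.90  # below this, a human must decide


@dataclass
class Decision:
    subject_id: str
    proposed_outcome: str
    confidence: float
    final_outcome: Optional[str] = None
    decided_by: str = "pending"


def route_decision(decision: Decision, review_queue: list) -> Decision:
    """Auto-apply high-confidence outcomes; defer the rest to a human."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        decision.final_outcome = decision.proposed_outcome
        decision.decided_by = "ai_system"
    else:
        review_queue.append(decision)  # a human reviewer takes over
        decision.decided_by = "human_review"
    return decision


queue: list = []
d1 = route_decision(Decision("app-001", "approve", 0.97), queue)
d2 = route_decision(Decision("app-002", "reject", 0.62), queue)
print(d1.decided_by, d2.decided_by, len(queue))
```

The design point is that the system never finalizes a low-confidence outcome on its own; the human reviewer, not the model, makes the call for anything below the threshold.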

In summary, the human-centric principle of AI according to the AI Act means designing and using AI systems in a way that respects human autonomy, dignity, and rights. It insists on human oversight of AI systems, promotes transparency, and aims to serve society at large. The Act also provides for safeguards and promotes public awareness of AI, all in line with a human-centric approach. In practice, this translates to AI systems that are fair, inclusive, understandable, and accountable, with human control and decision-making at the forefront.

Gist 2

All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence. (Article 4a)

This passage suggests that under the EU AI Act, AI systems should be designed and implemented with a human-centric approach. The primary focus of AI should be to serve human needs while ensuring that it remains ethical and trustworthy.

‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans. (Article 4a)

Here, the Act clearly defines the principle of human agency and oversight, establishing that AI systems should be used as tools for humans, respect human dignity and personal autonomy, and function in a way that makes human control and supervision possible. In highlighting respect for human dignity and personal autonomy, it implies that an AI system should not interfere with or override human decisions without explicit authorization.

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they be effectively overseen by natural persons as proportionate to the risks associated with those systems. (Article 14)

This quote indicates that, for high-risk AI systems, the Act directs developers to ensure effective human oversight. It stresses appropriate human-machine interface tools that enable meaningful human control over complex AI systems, with the extent of oversight proportionate to the risk the system poses. Hence, higher-risk systems should provide for higher levels of human oversight.

Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. (Recital 4a)

Elaborating on the human-centric approach, this quote states that AI systems must be designed and managed within the framework of Union values, fundamental rights, and international human rights law. In essence, AI decision-making should respect human rights, needs, and dignity, placing humans at the center of all considerations.

As a pre-requisite, artificial intelligence should be a human-centric technology. It should not substitute human autonomy or assume the loss of individual freedom and should primarily serve the needs of the society and the common good. (Recital 4a)

Further reiterating the notion of human-centric AI, the text suggests that AI should be used as a tool to assist humans, not to replace them or undermine their freedom. AI must serve societal needs and uphold the common good, and human freedom and autonomy should never be compromised.

Safeguards should be provided to ensure the development and use of ethically embedded artificial intelligence that respects Union values and the Charter. (Recital 4a)

According to this text, appropriate safeguards should be implemented in both the development and use stages of AI systems, so that they remain ethical and respect Union values and the Charter. These safeguards, which may include protocols or boundaries, are crucial to prevent misuse and to assure respect for human rights, democracy, and the EU Charter of Fundamental Rights.

In conclusion, the human-centric principle of AI ensures that AI systems are developed and used in a manner that primarily serves humans, respects their autonomy, and maintains human oversight. This principle embodies Union values, fundamental rights, and international human rights law, providing for effective human-machine interaction while respecting human dignity, choices, and control. High-risk AI systems warrant an increased degree of human oversight, and the design of AI should respect the user's perspective, reflecting a core aspect of the human-centric approach. This aligns with the EU AI Act's stance on developing ethical and trustworthy AI systems through safeguards that uphold human rights and values.

Gist 3

All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded. (Article 4a)

This passage confirms the AI Act’s emphasis on a human-centric approach, which requires that AI’s development and use be ethical, trustworthy, and respectful of human rights, aligning with the values of the Union.

‘Human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans. (Article 4a)

“Human agency and oversight” reflects key aspects of the human-centric principle, highlighting that AI should serve humans and their interests, respect personal autonomy, and be under human control and oversight.

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they be effectively overseen by natural persons as proportionate to the risks associated with those systems. (Article 14)

This article underlines that even high-risk AI systems must maintain human control and supervision, especially through the use of suitable human-machine interfaces. This ensures practical human-centric control in settings where it’s most needed.

The Commission and the AI Office shall incorporate these guiding principles in standardisation requests as well as recommendations consisting in technical guidance to assist providers and deployers on how to develop and use AI systems. (Article 56b)

Bringing the human-centric principles into practice, the Commission and the AI Office are charged with integrating them into standardisation requests and technical guidance, helping providers and deployers develop and use AI systems responsibly and ethically.

Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. (Recital 4a)

This sentence reiterates the necessity of developing AI and its regulatory framework according to Union values, fundamental rights, and international human rights law. It is a firm affirmation of the importance of a human-centric approach to AI’s development.

As a pre-requisite, artificial intelligence should be a human-centric technology. It should not substitute human autonomy or assume the loss of individual freedom and should primarily serve the needs of the society and the common good. (Recital 4a)

This, in practice, means that AI ought to serve human needs and society’s benefit while respecting an individual’s freedom and autonomy.

A detailed description of the elements of the AI system and of the process for its development, including … the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used. (Annex IV, Section 2(b))

This requirement implies that AI should be designed keeping its users and their needs at its center, showcasing an implicit application of the human-centric principle in the AI Act.

Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to… the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose. (Annex IV, Section 3)

This section highlights the need for comprehensive information about the performance of AI systems, focusing on their accuracy for the individuals or groups on whom they are intended to be used. This requirement helps implement the human-centric principle, underlining the importance of transparency and accountability in how AI systems perform.
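The "degrees of accuracy for specific persons or groups" requirement can be made concrete with a small sketch. This is an illustrative example only: it assumes a flat list of labelled predictions tagged with a group attribute, and all field and group names are invented for the demonstration.

```python
# Hypothetical sketch: disaggregated accuracy reporting per group,
# in the spirit of Annex IV's "degrees of accuracy for specific
# persons or groups". The data and names are illustrative.
from collections import defaultdict


def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Report a rate per group, not just one overall score.
    return {g: correct[g] / total[g] for g in total}


sample = [
    ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 1), ("group_b", 0, 0),
]
print(accuracy_by_group(sample))
```

The point of disaggregation is that an acceptable overall accuracy can hide a much lower rate for one group, which is exactly the kind of disparity this documentation requirement is meant to surface.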

In conclusion, the AI Act’s human-centric principle presses for keeping human welfare, rights, and user experience at the forefront of an AI system’s design and development. The Act implicitly encourages AI to be a tool that serves humans rather than replacing or threatening them, enhancing human capabilities and decision-making rather than detracting from them.

Gist 4

All operators falling under this Regulation shall make their best efforts to develop and use AI systems or foundation models in accordance with the following general principles establishing a high-level framework that promotes a coherent human-centric European approach to ethical and trustworthy Artificial Intelligence, which is fully in line with the Charter as well as the values on which the Union is founded: (a) ‘human agency and oversight’ means that AI systems shall be developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans; (b) ‘technical robustness and safety’…; (c) ‘privacy and data governance’…; (d) ‘transparency’…; (e) ‘diversity, non-discrimination and fairness’…; (f) ‘social and environmental well-being’… (Article 4a)

The human-centric principle articulated in the EU AI Act rests on a set of six general principles set out in Article 4a, each forming an integral part of a holistic human-centric approach. The principle of ‘human agency and oversight’ emphasizes the role of AI as a tool serving humans, respecting human dignity and personal autonomy. It requires that AI systems operate in a manner that can be meaningfully monitored, controlled, and supervised by humans.

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they be effectively overseen by natural persons as proportionate to the risks associated with those systems. (Article 14)

Article 14 expands on the principle, prescribing that high-risk AI systems be designed and developed in a way that allows effective human oversight. The level of oversight depends on the level of risk presented by the AI system. Among other things, this means giving operators tools for understanding and controlling the AI system’s operations.

Given the major impact that artificial intelligence can have on society and the need to build trust, it is vital for artificial intelligence and its regulatory framework to be developed according to Union values enshrined in Article 2 TEU, the fundamental rights and freedoms enshrined in the Treaties, the Charter, and international human rights law. (Recital 4a)

The AI Act asserts the importance of strictly aligning AI and its regulation with Union values, acknowledging the significant implications AI could have for society. This implies not just technical compliance, but the need for fundamental rights, Union values, and legal safeguards to be built into the very structure of AI technologies.

As a pre-requisite, artificial intelligence should be a human-centric technology. It should not substitute human autonomy or assume the loss of individual freedom and should primarily serve the needs of the society and the common good. (Recital 4a)

The AI Act sets forth the ‘human-centric’ principle, emphasizing that AI systems should prioritize human welfare, autonomy, and societal well-being over ancillary considerations. In practice, this principle means AI developers should design systems aiming not to suppress or erode human freedom and decision-making capabilities for individuals or society. Instead, AI should meet societal demands and contribute to communal well-being.

Safeguards should be provided to ensure the development and use of ethically embedded artificial intelligence that respects Union values and the Charter. (Recital 4a)

Here, the AI Act calls for safeguards to ensure that AI development is ethically embedded, meaning it adheres to social, moral, and legal norms. In practice, this implies that AI systems should be designed, developed, and used in line with established ethical practices and with respect for Charter values; any contravention of these principles during an AI system’s implementation, deployment, or use would run contrary to the law’s intention.

Annex IV contains no explicit mention of human-centric principles for AI systems. However, several parts of it implicitly align with the human-centric approach, touching on user-centered design, transparency, and the protection of freedoms and rights.

(gb) a detailed and easily intelligible description of the system’s expected output and expected output quality;

(gc) detailed and easily intelligible instructions for interpreting the system’s output;

(gd) examples of scenarios for which the system should not be used; (Annex IV)

This part of Annex IV indicates that AI systems should come with transparent and understandable information for users, including instructions for interpreting outputs and known limitations or scenarios in which the system should not be used. This aligns with the human-centric principle by enabling users to interpret an AI system’s output and understand its limitations.

(e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the deployers, in accordance with Articles 13(3)(d); (Annex IV)

This clause emphasizes the need for human oversight and interpretation measures in AI systems. It builds on the principle of ensuring that AI systems are used in a manner that respects human autonomy and decision-making.

  1. Detailed information about the monitoring, functioning, and control of the AI system, in particular, its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights, and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the deployers; specifications on input data, as appropriate; (Annex IV)

This instruction in the Annex suggests that the AI system should have processes to monitor, control, and report on the performance and functioning of the system. The requirement to report on foreseeable unintended outcomes and sources of risks to fundamental rights reaffirms the principle of human welfare and safety.

The human-centric principle of AI essentially means that AI systems should empower and serve humans, preserving their agency and ensuring their well-being. In practice, this means AI systems should operate in a manner that ensures transparency, accountability, and fairness, protecting human rights.

However, the specifics of how this principle is applied could vary widely depending on the context and the individual AI system. Practical implementation may involve things like user consent protocols before processing data, transparency measures explaining how AI decisions are made, non-discriminatory algorithms, opportunities for human oversight and intervention, and so forth.
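One of the mechanisms listed above, a user consent protocol before data processing, can be sketched as follows. This is a hypothetical illustration of the general idea, not a mechanism specified by the Act; the `ConsentError` name, the in-memory consent store, and the user identifiers are all invented for the example.

```python
# Hypothetical sketch of a consent gate before data processing.
# The consent store and all names are illustrative.
class ConsentError(Exception):
    """Raised when processing is attempted without recorded consent."""


consent_store = {"user-1": True, "user-2": False}  # illustrative records


def process_with_consent(user_id: str, payload: dict) -> dict:
    """Refuse to process a user's data unless consent is on record."""
    if not consent_store.get(user_id, False):
        raise ConsentError(f"no consent recorded for {user_id}")
    # ... the actual AI processing would happen here ...
    return {"user": user_id, "status": "processed", **payload}


print(process_with_consent("user-1", {"score": 0.8})["status"])
```

The design choice here is "deny by default": a user absent from the consent store, or one who has refused, is treated identically, so processing can only ever proceed on an affirmative record.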