Who is responsible for the potential harm caused by an AI tool?

Gist 1

A detailed review of the EU AI Act, undertaken to determine who is responsible for potential harm caused by an AI tool, yields the following key clarifications:

The definition of a ‘provider’ states that this term refers to a ‘natural or legal person, public authority, agency or other body that develops an AI system or has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, irrespective of whether this is done on a professional basis or not.’ (Article 3)

This suggests that the provider, as the party that develops the AI system and places it on the market, could bear responsibility if the AI system causes harm.

The ‘user’ is defined as any ‘natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.’ (Article 3)

Users, on the other hand, could also be responsible for harm caused by the AI system, especially where the tool is used under their authority. Users should therefore ensure that they are using the tool correctly and responsibly.

Article 28 lays out key responsibilities for each party along the AI value chain, stating: ‘Where an AI system is placed on the market or put into service by a person other than the provider, and without any modification, such person shall assume the obligations for providers as laid down in this Regulation.’ (Article 28)

This clarifies that if a third party places the AI system on the market without making any modifications, they assume the same responsibilities as an AI provider. This includes obligations related to transparency and accurate representation of the AI system’s capabilities and potential risks.

Article 61 on post-market monitoring mentions that ‘The provider of a high-risk AI system shall establish, document, implement and maintain a post-market monitoring system commensurate with the nature of the AI system and its intended purpose, in a proportionate and risk-based manner.’ (Article 61)

The responsibility for monitoring high-risk AI systems after they reach the market falls on the provider. Therefore, if a problem occurs after the tool has been placed on the market, the provider is expected to act quickly to address it.

In conclusion, multiple parties - including the AI provider, users, and third parties that place the AI system on the market without modifications - could potentially be held responsible for harm caused by an AI tool, as evidenced by their obligations laid out in the EU AI Act. This liability is subject to the specifics of the harm caused and the role each party played in the use and deployment of the AI system.

Gist 2

Under the EU AI Act, responsibility for potential harm caused by an AI tool or system rests largely with the provider or operator of the AI system, with oversight from national supervisory authorities.

“Any distributor, importer, deployer or other third-party shall be considered a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16 … ” (Article 28)

This provision from Article 28 indicates that distributors, importers, deployers, and other third parties can themselves be treated as providers. They would then be subject to the obligations under Article 16 where the AI system they interact with is considered high-risk.

“Providers of high-risk AI systems shall…ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service… ” (Article 16)

As further outlined in Article 16, providers of high-risk AI systems must ensure the systems meet these requirements before placing them on the market or putting them into service.

“Where providers falling under one or more of the critical areas and use cases referred to in Annex III consider that their AI system does not pose a significant risk…they shall submit a reasoned notification to the national supervisory authority… ” (Article 6, Paragraph 2a)

Article 6 suggests that providers are responsible for assessing potential risks and notifying the authorities accordingly.

“The degree of responsibility of the operator taking into account the technical and organisational measures implemented by them” (Article 71)

Article 71 suggests that an operator’s degree of responsibility may depend on the technical and organisational measures they have implemented to mitigate harm.

“Each Member State shall designate one national supervisory authority, which shall be organised so as to safeguard the objectivity and impartiality of its activities and tasks… ” (Article 59)

While the above points largely place the responsibility on providers and operators of AI systems, Article 59 highlights the role of national supervisory authorities in overseeing and enforcing the regulation.

It is important to note that the question of legal liability for harm caused, i.e., who compensates the party that suffered the harm, might be covered more directly by other parts of the AI Act not analyzed in this answer.

Gist 3

Obligations of providers and deployers of high-risk AI systems and other parties. Providers of high-risk AI systems shall:

  • ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service;
  • have a quality management system in place which complies with Article 17;
  • ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service, in accordance with Article 43;
  • take the necessary corrective actions as referred to in Article 21 and provide information in that regard;
  • upon a reasoned request of a national supervisory authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title. (Article 16)

This provision places considerable responsibility on providers of high-risk AI systems. It obligates them to ensure systems meet the relevant requirements before entering the market, maintain a quality management system, carry out conformity assessment procedures, and take corrective actions to resolve problems.

Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions of use and when relevant, inform providers in accordance with Article 61. (Article 29)

Here, responsibility is also assigned to deployers of high-risk systems. They must use the systems in accordance with the accompanying instructions for use, monitor their operation, and, where relevant, inform the providers in accordance with Article 61, presumably halting operation if a risk emerges.

Where the national supervisory authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. Where, in the course of that evaluation, the national supervisory authority or, where relevant, the national public authority referred to in Article 64(3) finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe and in any event no later than fifteen working days or as provided for in the relevant Union harmonisation law as applicable. (Article 65)

In cases of harm or risk, national supervisory authorities play a central role. These regulatory bodies maintain the right to enforce corrective actions when AI systems fall short of compliance. They also bear the responsibility for assessing systems that might be viewed as presenting a risk.

A number of legally binding rules at European, national and international level already apply or are relevant to AI systems today, including but not limited to EU primary law, EU secondary law, the UN Human Rights treaties and the Council of Europe conventions, and national law. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications. (Recital 41a)

This recital points to multiple possible sources of responsibility: existing laws and regulations at European, national, and international level, as well as domain-specific rules, could also determine who is responsible for harm caused by an AI tool. The outcome would depend on the circumstances and on other applicable laws, such as EU secondary law (like the GDPR or the Product Liability Directive) or domain-specific rules (like the Medical Device Regulation for healthcare applications). Ultimately, responsibility could lie with anyone from the tool’s creator or distributor to the entity using it, depending on the specific harm and AI tool involved.

Gist 4

‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge; (Article 3)

Article 3 of the AI Act defines a ‘provider’ as an entity that develops an AI system with a view to placing it on the market or putting it into service under its own name or trademark.

‘deployer’ means any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity; (Article 3)

In the same Article, a ‘deployer’ is defined as any person or entity that uses an AI system under its authority, except for personal, non-professional uses.

‘operator’ means the provider, the deployer, the authorised representative, the importer and the distributor; (Article 3)

This Article also introduces the term ‘operator’, which is a broad term encompassing provider, deployer, the authorized representative, the importer, and the distributor.

Providers of high-risk AI systems shall: (a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title before placing them on the market or putting them into service; (Article 16)

Article 16 specifically mandates providers of high-risk AI systems to ensure compliance with necessary requirements before introducing the system to the market or service.

AI systems presenting a risk shall be understood as an AI system having the potential to affect adversely health and safety, fundamental rights of persons in general, including in the workplace, protection of consumers, the environment, public security, or democracy or the rule of law and other public interests, that are protected by the applicable Union harmonisation law, to a degree which goes beyond that considered reasonable and acceptable in relation to its intended purpose or under the normal or reasonably foreseeable conditions of use of the system concerned, including the duration of use and, where applicable, its putting into service, installation and maintenance requirements. (Article 65)

Building on this, Article 65 implies that a degree of responsibility attaches to AI systems that present a risk, including the risk of causing harm under normal or reasonably foreseeable conditions of use. This in turn creates a need for appropriate use, installation, and maintenance of such systems.

From these Articles, it is clear that the ‘provider’, ‘deployer’, and ‘operator’ have a degree of responsibility if the AI system poses a risk or causes harm. However, the specific attribution of liability will depend on factors such as the nature of the harm, compliance with regulations, and the role of the involved entity.