The Markets in Crypto-Assets Regulation (MiCA) provides a comprehensive framework for crypto-assets within the EU. Here’s a summary of the key points relevant to entrepreneurs:
Entrepreneurs venturing into crypto-assets under MiCA should be aware of the specific requirements and exclusions to ensure regulatory compliance.
‘crypto-asset’ means a digital representation of a value or of a right that is able to be transferred and stored electronically using distributed ledger technology or similar technology;
Article 3, point 5
MiCA defines crypto-assets broadly, covering any digital representation of value or rights that can be transferred and stored electronically using distributed ledger or similar technology. This deliberately wide definition signals the regulation’s aim to bring a wide variety of digital initiatives within its scope.
‘asset-referenced token’ means a type of crypto-asset that is not an electronic money token and that purports to maintain a stable value by referencing another value or right or a combination thereof, including one or more official currencies; ‘electronic money token’ or ‘e-money token’ means a type of crypto-asset that purports to maintain a stable value by referencing the value of one official currency; ‘utility token’ means a type of crypto-asset that is only intended to provide access to a good or a service supplied by its issuer;
Article 3, points 6, 7, 9
Distinct categories are carved out: asset-referenced tokens, e-money tokens, and utility tokens. Asset-referenced tokens seek stability by referencing one or more other values or rights, e-money tokens are pegged to a single official currency, and utility tokens only provide access to a good or service supplied by the token’s issuer.
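To make these distinctions concrete, the short Python sketch below maps the Article 3 definitions onto a simple decision function. It is a minimal illustration under stated assumptions, not a legal test: the attribute names are invented for the example, and real classification turns on the full facts of the offering.

```python
from dataclasses import dataclass

@dataclass
class CryptoAsset:
    """Hypothetical, simplified attributes of a crypto-asset (assumed for illustration only)."""
    references_one_official_currency: bool         # purports to maintain a stable value against one fiat currency
    references_other_values_or_rights: bool        # purports stability by referencing other values, rights or baskets
    access_only_to_issuer_goods_or_services: bool  # only grants access to a good or service supplied by its issuer

def classify_token(asset: CryptoAsset) -> str:
    """Rough mapping onto the MiCA Article 3 categories (illustrative only, not legal advice)."""
    if asset.references_one_official_currency:
        return "e-money token (EMT)"
    if asset.references_other_values_or_rights:
        return "asset-referenced token (ART)"  # by definition, an ART is not an EMT
    if asset.access_only_to_issuer_goods_or_services:
        return "utility token"
    return "other crypto-asset"

# Example: a token pegged to a basket of currencies would be treated as an ART in this sketch.
print(classify_token(CryptoAsset(False, True, False)))
```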
This Regulation does not apply to:
- (a) persons who provide crypto-asset services exclusively for their parent companies, for their own subsidiaries or for other subsidiaries of their parent companies;
- (b) …
- (c) the ECB, central banks of the Member States when acting in their capacity as monetary authorities, or other public authorities of the Member States;
- (d) the European Investment Bank and its subsidiaries;
- …
- (f) public international organisations.
- Paragraph 3: crypto-assets that are unique and not fungible with other crypto-assets.
Article 2, paragraphs 2 and 3
MiCA’s reach excludes intra-group services, the actions of central banks and public authorities acting as monetary authorities, public international organizations, and unique, non-fungible crypto-assets. Thus, internal corporate arrangements, sovereign monetary operations, and one-of-a-kind items such as individual NFTs fall outside the regulation’s concern.
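As a similarly hedged sketch of the scope carve-outs just described, the function below screens for the Article 2 exclusions before any token classification would even be relevant. The boolean flags are assumptions made for the example and do not cover every exclusion; the real assessment is a legal one.

```python
def outside_mica_scope(
    asset_is_unique_and_non_fungible: bool,         # Article 2(3): unique, non-fungible crypto-assets
    services_provided_only_within_own_group: bool,  # Article 2(2)(a): intra-group services
    actor_is_central_bank_or_public_authority: bool,
    actor_is_public_international_organisation: bool,
) -> bool:
    """Illustrative Article 2(2)-(3) screen (simplified; not an exhaustive list of exclusions)."""
    return any([
        asset_is_unique_and_non_fungible,
        services_provided_only_within_own_group,
        actor_is_central_bank_or_public_authority,
        actor_is_public_international_organisation,
    ])

# Example: a one-of-a-kind NFT would typically fall outside MiCA under this simplified screen.
print(outside_mica_scope(True, False, False, False))  # True
```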
A person shall not make an offer to the public, or seek the admission to trading, of an asset-referenced token, within the Union, unless that person is the issuer of that asset-referenced token and is authorised in accordance with Article 21 by the competent authority of its home Member State; or a credit institution that complies with Article 17. The authorisation granted by the competent authority to a person shall be valid for the entire Union and shall allow an issuer of an asset-referenced token to offer to the public, throughout the Union, the asset-referenced token for which it has been authorised, or to seek an admission to trading of such asset-referenced token.
Article 16, paragraphs 1 and 3
Asset-referenced tokens under MiCA are subject to stringent requirements, including obtaining authorization and adhering to a range of operational, governance, and transparency standards, suggesting a targeted regulatory approach that recognizes their macroeconomic and systemic importance.
A person shall not make an offer to the public or seek the admission to trading of an e-money token, within the Union, unless that person is the issuer of such e-money token and… Issuers of e-money tokens shall issue e-money tokens at par value and on the receipt of funds. Issuers of e-money tokens shall not grant interest in relation to e-money tokens.
Article 48, paragraphs 1 and 3; Article 49, paragraph 3; Article 50, paragraph 1
E-money tokens are similarly safeguarded through authorization requirements and mechanisms that maintain value parity with the referenced fiat currency; distinctively, their issuers may not grant interest. These rules align their use more closely with traditional electronic money than with speculative assets, reflecting a considered integration into the financial system.
This Regulation does not apply to crypto-assets that qualify as one or more of the following: (a) financial instruments; (b) deposits, including structured deposits; (c) funds, except if they qualify as e-money tokens;
- (d) securitisation positions in the context of a securitisation as defined in Article 2, point (1), of Regulation (EU) 2017/2402;
- …
- (h) individual pension products for which a financial contribution from the employer is required by national law and where the employer or the employee has no choice as to the pension product or provider;
Article 2, paragraph 4
Crypto-assets that already qualify under existing financial regulation, for example as financial instruments, deposits, funds, securitisation positions, or pension products, are deliberately kept outside MiCA’s domain. This clear delineation ensures that products with established regulatory regimes are not subjected to redundant or conflicting dual regulation.
‘financial instrument’ means financial instruments as defined in Article 4(1), point (15), of Directive 2014/65/EU;
Article 3, point 49
Reinforcing this separation, MiCA defers to existing frameworks like MiFID II to govern crypto-assets that qualify as financial instruments, emphasizing the focus on novel digital asset types not already encapsulated by legacy financial regulations.
This Title shall not apply to offers to the public of crypto-assets other than asset-referenced tokens or e-money tokens where any of the following apply: …the crypto-asset is offered for free; …the holder of the crypto-asset has the right to use it only in exchange for goods and services in a limited network of merchants with contractual arrangements with the offeror.
Article 4, paragraph 3
Alongside the outright exclusion of unique, non-fungible assets, crypto-assets offered for free and utility-type assets usable only within a limited network of merchants under contract with the offeror are given tailored exemptions from these Title II requirements, suggesting a lighter regulatory touch that reflects their constrained roles and features.
In summary, MiCA endeavors to foster a regulated crypto-asset market, with particular emphasis on asset-referenced tokens and e-money tokens because of their resemblance to financial instruments and their potential for mass adoption. It purposefully excludes assets already covered by established financial frameworks, as well as entities conducting activities that do not implicate broader market health or consumer protection concerns. Moreover, unique and specific-use assets such as NFTs receive tailored treatment to ensure the regulation does not stifle innovation and remains proportionate to the risks posed.
We have searched through the PDF repository of draft EBA and ESMA guidelines, draft technical standards, and other documents to provide this supplemental answer.
In response to your query regarding the types of crypto-assets covered under the Markets in Crypto-Assets Regulation (MiCA), we have prepared a supplemental answer intended to augment the initial analysis with additional insights and context. This elaboration focuses on how crypto-assets, including NFTs and utility tokens, are classified under MiCA and under the broader framework of EU financial regulation.
The goal of these guidelines is to provide NCAs and market participants with structured yet flexible conditions and criteria to determine whether a crypto-asset can be classified as a financial instrument.
(Draft) Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments, page 8
This statement underlines the European Securities and Markets Authority’s (ESMA) intention to offer a clear yet adaptable set of guidelines for classifying certain types of crypto-assets as financial instruments under MiFID II. It underscores the ongoing effort to ensure that emerging digital assets fit within the existing regulatory framework, grounding our understanding of how these assets are viewed in relation to more traditional financial instruments.
Crypto-assets might be recognised as transferable securities if they grant rights similar to shares, bonds or other securities (e.g. securities embedding a derivative).
(Draft) Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments, page 9
This clarification is crucial, as it highlights the criteria under which crypto-assets may fall within the regulatory scope of MiFID II as transferable securities. In particular, it shows how the rights conferred by certain crypto-assets, mirroring those of conventional securities, can determine their regulatory classification. This understanding is paramount for delineating the boundaries of MiCA’s applicability.
The definition of crypto-assets in MiCA is broadly defined capturing not only “cryptocurrencies”, such as Bitcoin or Ethereum, but also “stablecoins” and so-called utility tokens.
(Draft) Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments, page 18
This insight broadens the spectrum of digital assets under MiCA, indicating its regulatory embrace extends beyond cryptocurrencies to include stablecoins and utility tokens. The delineation enables a deeper comprehension of MiCA’s scope, signifying its intent to create a secure and comprehensive regulatory environment for a variety of crypto-asset classes.
MiCA does not apply to crypto-assets that are unique and not fungible with other crypto-assets.
(Draft) Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments, page 21
This statement directly addresses the regulatory stance towards Non-Fungible Tokens (NFTs) within the MiCA framework, illustrating a deliberate exemption based on their unique and non-fungible characteristics. Understanding this exception is critical for stakeholders exploring NFT initiatives, as it reflects the current regulatory perspective on these distinct assets.
Utility tokens serve a specific utility/usage or provide some consumption rights. The rights granted by a utility token may thus vary according to the different business models implemented by DLT projects.
(Draft) Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments, page 20
This elucidation on utility tokens reinforces their definition under MiCA, noting their purpose in offering access to services or consumption rights, which can differ widely across the distributed ledger technology (DLT) landscape. Highlighting the flexibility and diversity of utility tokens’ applications aids in understanding their regulatory treatment and innovation potential under the current framework.
Through these supplemental insights, our goal was to enrich your understanding of how crypto-assets are distinguished and regulated under EU financial directives, notably MiFID II and MiCA. By examining the classification of crypto-assets, including NFTs and utility tokens, within this defined regulatory context, we aim to provide you with a deeper grasp of the legal landscape impacting crypto-asset activities. This extended analysis should serve as a valuable resource for navigating the complexities of crypto-asset regulation and aligning your strategies with legal requirements effectively.
2021/0106 (COD)
Proposal for a
REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Economic and Social Committee 31 ,
Having regard to the opinion of the Committee of the Regions 32 ,
Acting in accordance with the ordinary legislative procedure,
Whereas:
The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial.
A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council 33 , and it ensures the protection of ethical principles, as specifically requested by the European Parliament 34 .
The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up-to-date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
The notion of biometric data used in this Regulation is in line with and should be interpreted consistently with the notion of biometric data as defined in Article 4(14) of Regulation (EU) 2016/679 of the European Parliament and of the Council 35 , Article 3(18) of Regulation (EU) 2018/1725 of the European Parliament and of the Council 36 and Article 3(13) of Directive (EU) 2016/680 of the European Parliament and of the Council 37 .
The notion of remote biometric identification system as used in this Regulation should be defined functionally, as an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge whether the targeted person will be present and can be identified, irrespectively of the particular technology, processes or types of biometric data used. Considering their different characteristics and manners in which they are used, as well as the different risks involved, a distinction should be made between ‘real-time’ and ‘post’ remote biometric identification systems. In the case of ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems in question by providing for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data have already been captured and the comparison and identification occur only after a significant delay. This involves material, such as pictures or video footage generated by closed circuit television cameras or private devices, which has been generated before the use of the system in respect of the natural persons concerned.
For the purposes of this Regulation the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.
In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union.
In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.
This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].
In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights, common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.
In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.
The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.
AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should be therefore prohibited.
The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.
The use of those systems for the purpose of law enforcement should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA 38 if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences.
In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.
Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.
Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.
The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.
Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.
In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU.
In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.
High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.
AI systems could produce adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.
As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council 39 , Regulation (EU) No 167/2013 of the European Parliament and of the Council 40 , Regulation (EU) No 168/2013 of the European Parliament and of the Council 41 , Directive 2014/90/EU of the European Parliament and of the Council 42 , Directive (EU) 2016/797 of the European Parliament and of the Council 43 , Regulation (EU) 2018/858 of the European Parliament and of the Council 44 , Regulation (EU) 2018/1139 of the European Parliament and of the Council 45 , and Regulation (EU) 2019/2144 of the European Parliament and of the Council 46 , it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts.
As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.
The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council 47 and Regulation (EU) 2017/746 of the European Parliament and of the Council 48 , where a third-party conformity assessment is provided for medium-risk and high-risk products.
As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.
Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.
As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.
AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.
AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy.
Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.
Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.
AI systems used in migration, asylum and border control management affect people who are often in particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; for assisting competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by the Directive 2013/32/EU of the European Parliament and of the Council 49 , the Regulation (EC) No 810/2009 of the European Parliament and of the Council 50 and other relevant legislation.
Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.
The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.
To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system and according to the risk management system to be established by the provider.
Requirements should apply to high-risk AI systems as regards the quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.
High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
For the development of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.
Having information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of a technical documentation, containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.
To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.
High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.
High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and accuracy metrics should be communicated to the users.
The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.
Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.
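To make the "adversarial attack" risk mentioned here concrete, the sketch below applies the well-known fast-gradient-sign idea to a toy logistic model: a small, deliberately crafted perturbation of the input shifts the model's score. The weights, input and perturbation budget are hypothetical; real attacks and defences are considerably more involved.

```python
# Illustrative sketch of an adversarial perturbation (fast gradient sign method)
# against a toy logistic model. Weights, input and epsilon are hypothetical.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # toy model weights
b = 0.1
x = np.array([0.2, 0.4, -0.3])   # a benign input whose true label is y = 1
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the logistic loss with respect to the input x
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1                          # attacker's perturbation budget (hypothetical)
x_adv = x + eps * np.sign(grad_x)  # move each feature in the loss-increasing direction

print("score on clean input:    ", round(sigmoid(w @ x + b), 3))
print("score on perturbed input:", round(sigmoid(w @ x_adv + b), 3))
```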
As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council 51 setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council 52 on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council 53 on market surveillance and compliance of products (‘New Legislative Framework for the marketing of products’).
It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.
The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation.
To enable enforcement of this Regulation and create a level-playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union.
In line with New Legislative Framework principles, specific obligations for relevant economic operators, such as importers and distributors, should be set to ensure legal certainty and facilitate regulatory compliance by those relevant operators.
Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems in accordance with the instructions of use and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate.
It is appropriate to envisage that the user of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated except where the use is made in the course of a personal non-professional activity.
In the light of the complexity of the artificial intelligence value chain, relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services, should cooperate, as appropriate, with providers and users to enable their compliance with the obligations under this Regulation and with competent authorities established under this Regulation.
Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council 54 should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient.
In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service.
It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation.
Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.
In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests.
In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.
High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.
Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.
In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation should be required to register their high-risk AI system in an EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council 55. In order to ensure the full functionality of the database, when deployed, the procedure for setting up the database should include the elaboration of functional specifications by the Commission and an independent audit report.
Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
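Purely as an illustration of how the disclosure obligation for artificially generated content might be met in practice, the sketch below attaches a machine-readable label to generated content. The field names and label format are hypothetical; the Regulation only requires that the artificial origin be disclosed.

```python
# Illustrative sketch: attach a machine-readable disclosure to AI-generated
# content, recording that it was artificially created. The field names and
# label format are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def disclosure_label(content: bytes, generator: str) -> str:
    record = {
        "artificially_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

synthetic_image = b"...raw bytes of a generated image..."
print(disclosure_label(synthetic_image, generator="example-image-model"))
```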
Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.
The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.
In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.
In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.
It is appropriate that the Commission facilitates, to the extent possible, access to Testing and Experimentation Facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is notably the case for expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745 and Regulation (EU) 2017/746.
In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence.
Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of this Regulation. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels, in each Member State one national authority should be designated as national supervisory authority.
In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law protecting fundamental rights resulting from the use of their AI systems.
In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation.
Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council 56, it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post-market monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU.
The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.
It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out herein are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council 57 would apply as a safety net.
In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks.
Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.
In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annex VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making 58 . In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.
In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council 59 .
Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.
This Regulation should apply from … [OP — please insert the date established in Art. 85]. However, the infrastructure related to the governance and the conformity assessment system should be operational before that date, therefore the provisions on notified bodies and governance structure should apply from … [OP — please insert the date — three months following the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from [OP — please insert the date — twelve months following the entry into force of this Regulation].
The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on […].
HAVE ADOPTED THIS REGULATION:
This Regulation lays down:
(a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;
(b) prohibitions of certain artificial intelligence practices;
(c) specific requirements for high-risk AI systems and obligations for operators of such systems;
(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content;
(e) rules on market monitoring and surveillance.
This Regulation applies to:
(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;
(b) users of AI systems located within the Union;
(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union;
For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, falling within the scope of the following acts, only Article 84 of this Regulation shall apply:
(a) Regulation (EC) 300/2008;
(b) Regulation (EU) No 167/2013;
(c) Regulation (EU) No 168/2013;
(d) Directive 2014/90/EU;
(e) Directive (EU) 2016/797;
(f) Regulation (EU) 2018/858;
(g) Regulation (EU) 2018/1139;
(h) Regulation (EU) 2019/2144.
This Regulation shall not apply to AI systems developed or used exclusively for military purposes.
This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.
This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section IV of Directive 2000/31/EC of the European Parliament and of the Council 60 [as to be replaced by the corresponding provisions of the Digital Services Act].
For the purpose of this Regulation, the following definitions apply:
(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;
(3) ‘small-scale provider’ means a provider that is a micro or small enterprise within the meaning of Commission Recommendation 2003/361/EC 61 ;
(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
(5) ‘authorised representative’ means any natural or legal person established in the Union who has received a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;
(6) ‘importer’ means any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union;
(7) ‘distributor’ means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties;
(8) ‘operator’ means the provider, the user, the authorised representative, the importer and the distributor;
(9) ‘placing on the market’ means the first making available of an AI system on the Union market;
(10) ‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;
(11) ‘putting into service’ means the supply of an AI system for first use directly to the user or for own use on the Union market for its intended purpose;
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems;
(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property;
(15) ‘instructions for use’ means the information provided by the provider to inform the user of in particular an AI system’s intended purpose and proper use, inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used;
(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system made available to users;
(17) ‘withdrawal of an AI system’ means any measure aimed at preventing the distribution, display and offer of an AI system;
(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;
(19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;
(20) ‘conformity assessment’ means the process of verifying whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled;
(21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;
(22) ‘notified body’ means a conformity assessment body designated in accordance with this Regulation and other relevant Union harmonisation legislation;
(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed;
(24) ‘CE marking of conformity’ (CE marking) means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing;
(25) ‘post-market monitoring’ means all activities carried out by providers of AI systems to proactively collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;
(26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020;
(27) ‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012;
(28) ‘common specifications’ means a document, other than a standard, containing technical solutions providing a means to comply with certain requirements and obligations established under this Regulation;
(29) ‘training data’ means data used for training an AI system through fitting its learnable parameters, including the weights of a neural network;
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split;
(31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service;
(32) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output;
(33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data;
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data;
(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;
(37) ‘‘real-time’ remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention;
(38) ‘‘post’ remote biometric identification system’ means a remote biometric identification system other than a ‘real-time’ remote biometric identification system;
(39) ‘publicly accessible space’ means any physical place accessible to the public, regardless of whether certain conditions for access may apply;
(40) ‘law enforcement authority’ means:
(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or
(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
(41) ‘law enforcement’ means activities carried out by law enforcement authorities for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
(42) ‘national supervisory authority’ means the authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Artificial Intelligence Board;
(43) ‘national competent authority’ means the national supervisory authority, the notifying authority and the market surveillance authority;
(44) ‘serious incident’ means any incident that directly or indirectly leads, might have led or might lead to any of the following:
(a) the death of a person or serious damage to a person’s health, to property or the environment,
(b) a serious and irreversible disruption of the management and operation of critical infrastructure.
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.
The following artificial intelligence practices shall be prohibited:
(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:
(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;
(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:
(i) the targeted search for specific potential victims of crime, including missing children;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;
(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA 62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.
The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1, point (d) shall take into account the following elements:
(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;
(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.
In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1, point (d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.
The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.
(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;
(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.
(a) the intended purpose of the AI system;
(b) the extent to which an AI system has been used or is likely to be used;
(c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;
(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;
(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;
(f) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age;
(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible;
(h) the extent to which existing Union legislation provides for:
(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;
(ii) effective measures to prevent or substantially minimise those risks.
High-risk AI systems shall comply with the requirements established in this Chapter.
The intended purpose of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.
A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps:
(a) identification and analysis of the known and foreseeable risks associated with each high-risk AI system;
(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;
(c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;
(d) adoption of suitable risk management measures in accordance with the provisions of the following paragraphs.
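An illustrative sketch of how the iterative steps above could be supported in practice is given below: a simple risk register in which each identified risk is scored and, where the score exceeds a provider-defined threshold, linked to mitigation measures. The scales and threshold are hypothetical, not prescribed by the Regulation.

```python
# Illustrative sketch of a risk register supporting the steps listed above:
# identify, estimate/evaluate, and adopt mitigation measures.
# Risk descriptions, scales and the threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    severity: int        # 1 (low) .. 5 (high), hypothetical scale
    probability: int     # 1 (rare) .. 5 (frequent), hypothetical scale
    mitigations: list = field(default_factory=list)

    def score(self) -> int:
        return self.severity * self.probability

register = [
    Risk("Misidentification of persons in poor lighting", severity=4, probability=3),
    Risk("Operator over-reliance on system output", severity=3, probability=4),
]

ACCEPTABLE = 6  # hypothetical residual-risk threshold set by the provider
for risk in register:
    if risk.score() > ACCEPTABLE:
        risk.mitigations.append("design change / control measure / user information")
    print(f"{risk.description}: score={risk.score()}, mitigations={risk.mitigations}")
```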
The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.
The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.
In identifying the most appropriate risk management measures, the following shall be ensured:
(a) elimination or reduction of risks as far as possible through adequate design and development;
(b) where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated;
(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users.
In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended to be used.
High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and they are in compliance with the requirements set out in this Chapter.
Testing procedures shall be suitable to achieve the intended purpose of the AI system and do not need to go beyond what is necessary to achieve that purpose.
The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
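As a minimal illustration of testing against preliminarily defined metrics and thresholds, the sketch below compares measured test results with thresholds fixed in advance and blocks release when any of them is missed. The metric names and values are hypothetical.

```python
# Illustrative sketch: compare measured test metrics against thresholds that
# were defined before testing began. Metric names and values are hypothetical.
predefined_thresholds = {"accuracy": 0.95, "false_positive_rate": 0.02}

measured = {"accuracy": 0.962, "false_positive_rate": 0.031}

failures = []
for metric, threshold in predefined_thresholds.items():
    value = measured[metric]
    # Higher is better for accuracy; lower is better for error-type metrics.
    ok = value >= threshold if metric == "accuracy" else value <= threshold
    if not ok:
        failures.append(f"{metric}: measured {value} vs threshold {threshold}")

if failures:
    print("Release blocked:", "; ".join(failures))
else:
    print("All predefined thresholds met.")
```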
When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.
For credit institutions regulated by Directive 2013/36/EU, the aspects described in paragraphs 1 to 8 shall be part of the risk management procedures established by those institutions pursuant to Article 74 of that Directive.
High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.
Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,
(a) the relevant design choices;
(b) data collection;
(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;
(d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent;
(e) a prior assessment of the availability, quantity and suitability of the data sets that are needed;
(f) examination in view of possible biases;
(g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed.
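The examination steps in points (f) and (g) above could, in the simplest case, look like the sketch below: counting missing values (possible data gaps) and label counts per group (possible bias). Column names and example rows are hypothetical.

```python
# Illustrative sketch: examine a data set for missing values (possible gaps)
# and label imbalance across groups (possible bias). Rows are hypothetical.
from collections import Counter

rows = [
    {"age": 34, "region": "north", "label": 1},
    {"age": None, "region": "north", "label": 1},
    {"age": 51, "region": "south", "label": 0},
    {"age": 29, "region": "south", "label": 1},
]

missing = Counter()
labels_by_region = Counter()
for row in rows:
    for column, value in row.items():
        if value is None:
            missing[column] += 1
    labels_by_region[(row["region"], row["label"])] += 1

print("Missing values per column:", dict(missing))
print("Label counts per region:  ", dict(labels_by_region))
```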
Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.
Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.
To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.
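As an illustration of the pseudonymisation safeguard mentioned in this paragraph, the sketch below replaces a direct identifier with a keyed hash before a record containing special categories of data is used for bias analysis. The key handling and field names are hypothetical and deliberately simplified.

```python
# Illustrative sketch of pseudonymisation: replace a direct identifier with a
# keyed hash before the record is used for bias analysis.
# The secret key and record fields are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"kept-separately-under-access-control"  # hypothetical key

def pseudonymise(identifier: str) -> str:
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"national_id": "AB123456", "ethnic_origin": "group_a", "outcome": 1}
safe_record = {**record, "national_id": pseudonymise(record["national_id"])}
print(safe_record)
```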
Appropriate data governance and management practices shall apply for the development of high-risk AI systems other than those which make use of techniques involving the training of models in order to ensure that those high-risk AI systems comply with paragraph 2.
The technical documentation shall be drawn up in such a way to demonstrate that the high-risk AI system complies with the requirements set out in this Chapter and provide national competent authorities and notified bodies with all the necessary information to assess the compliance of the AI system with those requirements. It shall contain, at a minimum, the elements set out in Annex IV.
Where a high-risk AI system related to a product, to which the legal acts listed in Annex II, section A apply, is placed on the market or put into service one single technical documentation shall be drawn up containing all the information set out in Annex IV as well as the information required under those legal acts.
The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend Annex IV where necessary to ensure that, in the light of technical progress, the technical documentation provides all the necessary information to assess the compliance of the system with the requirements set out in this Chapter.
High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI system is operating. Those logging capabilities shall conform to recognised standards or common specifications.
The logging capabilities shall ensure a level of traceability of the AI system’s functioning throughout its lifecycle that is appropriate to the intended purpose of the system.
In particular, logging capabilities shall enable the monitoring of the operation of the high-risk AI system with respect to the occurrence of situations that may result in the AI system presenting a risk within the meaning of Article 65(1) or lead to a substantial modification, and facilitate the post-market monitoring referred to in Article 61.
For high-risk AI systems referred to in paragraph 1, point (a) of Annex III, the logging capabilities shall provide, at a minimum:
(a) recording of the period of each use of the system (start date and time and end date and time of each use);
(b) the reference database against which input data has been checked by the system;
(c) the input data for which the search has led to a match;
(d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).
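A minimal sketch of a log record covering points (a) to (d) above is shown below, using an append-only file of JSON lines. The field names, values and storage format are hypothetical; the Regulation specifies what must be recorded, not how.

```python
# Illustrative sketch of a log record covering: period of use, reference
# database, matching input data, and the persons who verified the results.
# Field names and values are hypothetical.
import json
from datetime import datetime, timezone

def log_use(start, end, reference_db, matched_inputs, verifiers, path="ai_use.log"):
    entry = {
        "use_start": start,
        "use_end": end,
        "reference_database": reference_db,
        "matched_input_data": matched_inputs,
        "verified_by": verifiers,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as log_file:  # append-only log
        log_file.write(json.dumps(entry) + "\n")

log_use(
    start="2024-05-01T09:00:00Z",
    end="2024-05-01T09:45:00Z",
    reference_db="watchlist-v7",
    matched_inputs=["frame_0042.jpg"],
    verifiers=["officer_ref_128"],
)
```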
High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.
High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users.
The information referred to in paragraph 2 shall specify:
(a) the identity and the contact details of the provider and, where applicable, of its authorised representative;
(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:
(i) its intended purpose;
(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;
(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights;
(iv) its performance as regards the persons or groups of persons on which the system is intended to be used;
(v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system.
(c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;
(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users;
(e) the expected lifetime of the high-risk AI system and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates.
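Because the instructions for use must remain complete, correct and up to date across system versions, one practical approach is to maintain their content as structured data from which the user-facing document is generated. The sketch below is purely illustrative: the keys mirror the list above, while all names and values are invented.

```python
# Illustrative structure only; the headings track the information items above,
# the wording and values are the provider's own invention.
instructions_for_use = {
    "provider": {
        "identity": "Example AI GmbH",
        "contact": "compliance@example-ai.eu",
        "authorised_representative": None,  # where applicable
    },
    "system_characteristics": {
        "intended_purpose": "Triage of incoming customer-support tickets",
        "accuracy_robustness_cybersecurity": {
            "tested_accuracy": "macro-F1 0.91 on the held-out validation set",
            "known_factors_affecting_performance": ["non-EU language input", "tickets over 10,000 characters"],
        },
        "foreseeable_misuse_risks": ["use for employee performance evaluation"],
        "performance_on_target_groups": "Evaluated per language group; see technical documentation",
        "input_data_specifications": "UTF-8 plain text, one ticket per request",
    },
    "predetermined_changes": ["quarterly retraining on new tickets within the declared data specification"],
    "human_oversight_measures": ["confidence scores shown to reviewers", "one-click override of the suggested routing"],
    "expected_lifetime_and_maintenance": "Supported for 5 years; security updates at least quarterly",
}
```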
High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.
Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.
Human oversight shall be ensured through either one or all of the following measures:
(a) identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;
(b) identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user.
The measures referred to above shall enable the individuals to whom human oversight is assigned to do the following, as appropriate to the circumstances:
(a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;
(b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;
(c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;
(d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;
(e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.
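In engineering terms, measures (d) and (e) amount to keeping a natural person in the decision path, able to disregard or override the output and to interrupt the system. A minimal human-in-the-loop sketch, with invented names and a trivial stand-in model, could look as follows.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OversightDecision:
    accepted: bool            # reviewer accepted the AI output
    final_output: str         # output actually used (may override the AI proposal)
    stopped: bool = False     # reviewer activated the "stop" control

class OverseenSystem:
    """Minimal human-in-the-loop wrapper: the AI proposes, a natural person decides."""

    def __init__(self, model: Callable[[str], str]):
        self.model = model
        self.halted = False   # "stop button" state that blocks any further use

    def run(self, input_data: str, review: Callable[[str, str], OversightDecision]) -> str | None:
        if self.halted:
            return None                          # system interrupted, measure (e)
        proposal = self.model(input_data)        # output to be interpreted by the reviewer
        decision = review(input_data, proposal)  # reviewer may disregard, override or reverse it, measure (d)
        if decision.stopped:
            self.halted = True
            return None
        return decision.final_output if decision.accepted else None

# Example usage with a trivial model and a reviewer who accepts the proposal.
system = OverseenSystem(model=lambda text: "approve")
result = system.run(
    "loan application #1",
    review=lambda inp, out: OversightDecision(accepted=True, final_output=out),
)
```

The design point worth noting is that the "stop" state persists and blocks all further use, rather than merely suppressing a single output.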
High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.
High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.
The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.
High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (‘feedback loops’) are duly addressed with appropriate mitigation measures.
The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.
The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws.
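The Regulation names the threat classes (data poisoning, adversarial examples, model flaws) but leaves test methods and thresholds to the provider. Purely as an illustration, a provider's robustness test suite might include a simple perturbation-stability check of the kind sketched below, where a toy classifier stands in for the real model; this approximates resilience to noisy or adversarially perturbed inputs and is not a formal guarantee.

```python
import random

def toy_classifier(features):
    """Stand-in model: classifies positive if the feature sum exceeds a threshold."""
    return int(sum(features) > 1.0)

def robustness_smoke_test(model, samples, epsilon=0.05, trials=20, seed=0):
    """Fraction of samples whose prediction stays unchanged under small random input perturbations."""
    rng = random.Random(seed)
    stable = 0
    for x in samples:
        baseline = model(x)
        if all(model([v + rng.uniform(-epsilon, epsilon) for v in x]) == baseline for _ in range(trials)):
            stable += 1
    return stable / len(samples)

samples = [[0.2, 0.3, 0.1], [0.9, 0.4, 0.2], [0.05, 0.02, 0.01]]
print(f"stable predictions under perturbation: {robustness_smoke_test(toy_classifier, samples):.0%}")
```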
Providers of high-risk AI systems shall:
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title;
(b) have a quality management system in place which complies with Article 17;
(c) draw up the technical documentation of the high-risk AI system;
(d) when under their control, keep the logs automatically generated by their high-risk AI systems;
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service;
(f) comply with the registration obligations referred to in Article 51;
(g) take the necessary corrective actions, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title;
(h) inform the national competent authorities of the Member States in which they made the AI system available or put it into service and, where applicable, the notified body of the non-compliance and of any corrective actions taken;
(i) affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49;
(j) upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:
(a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;
(b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;
(c) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;
(d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;
(e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, the means to be used to ensure that the high-risk AI system complies with the requirements set out in Chapter 2 of this Title;
(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems;
(g) the risk management system referred to in Article 9;
(h) the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 61;
(i) procedures related to the reporting of serious incidents and of malfunctioning in accordance with Article 62;
(j) the handling of communication with national competent authorities, competent authorities, including sectoral ones, providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;
(k) systems and procedures for record keeping of all relevant documentation and information;
(l) resource management, including security of supply related measures;
(m) an accountability framework setting out the responsibilities of the management and other staff with regard to all aspects listed in this paragraph.
The implementation of aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation.
For providers that are credit institutions regulated by Directive 2013/36/EU, the obligation to put a quality management system in place shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive. In that context, any harmonised standards referred to in Article 40 of this Regulation shall be taken into account.
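The quality management system is an organisational artefact rather than a software one, but keeping a machine-readable index linking each required aspect to the controlled document that covers it makes completeness easy to demonstrate during audits. The sketch below is illustrative only; the aspect keys paraphrase the list above and the document identifiers are invented.

```python
# Illustrative mapping of quality-management aspects to internal controlled documents.
qms_index = {
    "regulatory_compliance_strategy": "QMS-POL-001",          # (a)
    "design_and_design_verification": "QMS-PRC-010",          # (b)
    "development_and_quality_assurance": "QMS-PRC-011",       # (c)
    "examination_test_validation": "QMS-PRC-012",             # (d)
    "technical_specifications_and_standards": "QMS-SPC-020",  # (e)
    "data_management": "QMS-PRC-030",                         # (f)
    "risk_management_system": "QMS-PRC-040",                  # (g) Article 9
    "post_market_monitoring": "QMS-PRC-050",                  # (h) Article 61
    "serious_incident_reporting": "QMS-PRC-051",              # (i) Article 62
    "authority_communication": "QMS-PRC-060",                 # (j)
    "record_keeping": "QMS-PRC-070",                          # (k)
    "resource_management": "QMS-PRC-080",                     # (l)
    "accountability_framework": "QMS-POL-002",                # (m)
}

# A trivial completeness check a provider might run before an audit.
missing = [aspect for aspect, doc in qms_index.items() if not doc]
assert not missing, f"undocumented QMS aspects: {missing}"
```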
Providers of high-risk AI systems shall draw up the technical documentation referred to in Article 11 in accordance with Annex IV.
Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the technical documentation as part of the documentation concerning internal governance, arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43, prior to their placing on the market or putting into service. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49.
For high-risk AI systems referred to in point 5(b) of Annex III that are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose of high-risk AI system and applicable legal obligations under Union or national law.
Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation under Article 74 of that Directive.
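The "appropriate" retention period is left for the provider to determine and justify. The sketch below assumes, purely for illustration, a two-year period and an on-disk JSON-lines log store; it simply removes files that have aged out.

```python
from datetime import datetime, timedelta, timezone
from pathlib import Path

# Illustrative retention period; the appropriate period depends on the system's purpose and applicable law.
RETENTION_PERIOD = timedelta(days=2 * 365)

def purge_expired_logs(log_dir: Path, now: datetime | None = None) -> list[Path]:
    """Delete log files older than the retention period and return the files removed."""
    now = now or datetime.now(timezone.utc)
    removed: list[Path] = []
    for path in log_dir.glob("*.jsonl"):
        modified = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
        if now - modified > RETENTION_PERIOD:
            path.unlink()
            removed.append(path)
    return removed

# Example: purge_expired_logs(Path("logs")) run on a schedule, with deletions themselves recorded.
```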
Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known to the provider of the system, that provider shall immediately inform the national competent authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
Where a high-risk AI system related to products to which the legal acts listed in Annex II, section A, apply, is placed on the market or put into service together with the product manufactured in accordance with those legal acts and under the name of the product manufacturer, the manufacturer of the product shall take the responsibility of the compliance of the AI system with this Regulation and, as far as the AI system is concerned, have the same obligations imposed by the present Regulation on the provider.
Prior to making their systems available on the Union market, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union.
The authorised representative shall perform the tasks specified in the mandate received from the provider. The mandate shall empower the authorised representative to carry out the following tasks:
(a) keep a copy of the EU declaration of conformity and the technical documentation at the disposal of the national competent authorities and national authorities referred to in Article 63(7);
(b) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law;
(c) cooperate with competent national authorities, upon a reasoned request, on any action the latter takes in relation to the high-risk AI system.
Before placing a high-risk AI system on the market, importers of such a system shall ensure that:
(a) the appropriate conformity assessment procedure has been carried out by the provider of that AI system;
(b) the provider has drawn up the technical documentation in accordance with Annex IV;
(c) the system bears the required conformity marking and is accompanied by the required documentation and instructions of use.
Where an importer considers or has reason to consider that a high-risk AI system is not in conformity with this Regulation, it shall not place that system on the market until that AI system has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the importer shall inform the provider of the AI system and the market surveillance authorities to that effect.
Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable.
Importers shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise its compliance with the requirements set out in Chapter 2 of this Title.
Importers shall provide national competent authorities, upon a reasoned request, with all necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by that national competent authority, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law. They shall also cooperate with those authorities on any action national competent authority takes in relation to that system.
Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instructions of use, and that the provider and the importer of the system, as applicable, have complied with the obligations set out in this Regulation.
Where a distributor considers or has reason to consider that a high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect.
Distributors shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise the compliance of the system with the requirements set out in Chapter 2 of this Title.
A distributor that considers or has reason to consider that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
Upon a reasoned request from a national competent authority, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title. Distributors shall also cooperate with that national competent authority on any action taken by that authority.
Any distributor, importer, user or other third party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:
(a) they place on the market or put into service a high-risk AI system under their name or trademark;
(b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service;
(c) they make a substantial modification to the high-risk AI system.
Users of high-risk AI systems shall use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5.
The obligations in paragraph 1 are without prejudice to other user obligations under Union or national law and to the user’s discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
Without prejudice to paragraph 1, to the extent the user exercises control over the input data, that user shall ensure that input data is relevant in view of the intended purpose of the high-risk AI system.
Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall inform the provider or distributor and suspend the use of the system. They shall also inform the provider or distributor when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.
For users that are credit institutions regulated by Directive 2013/36/EU, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
Users that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs as part of the documentation concerning internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
Each Member State shall designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.
Member States may designate a national accreditation body referred to in Regulation (EC) No 765/2008 as a notifying authority.
Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies and the objectivity and impartiality of their activities are safeguarded.
Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.
Notifying authorities shall not offer or provide any activities that conformity assessment bodies perform or any consultancy services on a commercial or competitive basis.
Notifying authorities shall safeguard the confidentiality of the information they obtain.
Notifying authorities shall have a sufficient number of competent personnel at their disposal for the proper performance of their tasks.
Notifying authorities shall make sure that conformity assessments are carried out in a proportionate manner, avoiding unnecessary burdens for providers and that notified bodies perform their activities taking due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of the AI system in question.
Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member State in which they are established.
The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 33. Any valid document related to existing designations of the applicant notified body under any other Union harmonisation legislation shall be added.
Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 33. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate.
Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements laid down in Article 33.
Notifying authorities shall notify the Commission and the other Member States using the electronic notification tool developed and managed by the Commission.
The notification shall include full details of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies concerned.
The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within one month of a notification.
Notifying authorities shall notify the Commission and the other Member States of any subsequent relevant changes to the notification.
Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures referred to in Article 43.
Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks.
The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall be such as to ensure that there is confidence in the performance by and in the results of the conformity assessment activities that the notified bodies conduct.
Notified bodies shall be independent of the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in the high-risk AI system that is assessed, as well as of any competitors of the provider.
Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities.
Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out.
Notified bodies shall have procedures for the performance of activities which take due account of the size of an undertaking, the sector in which it operates, its structure, the degree of complexity of the AI system in question.
Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability is assumed by the Member State concerned in accordance with national law or that Member State is directly responsible for the conformity assessment.
Notified bodies shall be capable of carrying out all the tasks falling to them under this Regulation with the highest degree of professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified bodies themselves or on their behalf and under their responsibility.
Notified bodies shall have sufficient internal competences to be able to effectively evaluate the tasks conducted by external parties on their behalf. To that end, at all times and for each conformity assessment procedure and each type of high-risk AI system in relation to which they have been designated, the notified body shall have permanent availability of sufficient administrative, technical and scientific personnel who possess experience and knowledge relating to the relevant artificial intelligence technologies, data and data computing and to the requirements set out in Chapter 2 of this Title.
Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part directly or be represented in European standardisation organisations, or ensure that they are aware and up to date in respect of relevant standards.
Notified bodies shall make available and submit upon request all relevant documentation, including the providers’ documentation, to the notifying authority referred to in Article 30 to allow it to conduct its assessment, designation, notification, monitoring and surveillance activities and to facilitate the assessment outlined in this Chapter.
Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 33 and shall inform the notifying authority accordingly.
Notified bodies shall take full responsibility for the tasks performed by subcontractors or subsidiaries wherever these are established.
Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider.
Notified bodies shall keep at the disposal of the notifying authority the relevant documents concerning the assessment of the qualifications of the subcontractor or the subsidiary and the work carried out by them under this Regulation.
The Commission shall assign an identification number to notified bodies. It shall assign a single number, even where a body is notified under several Union acts.
The Commission shall make publicly available the list of the bodies notified under this Regulation, including the identification numbers that have been assigned to them and the activities for which they have been notified. The Commission shall ensure that the list is kept up to date.
Where a notifying authority has suspicions or has been informed that a notified body no longer meets the requirements laid down in Article 33, or that it is failing to fulfil its obligations, that authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body concerned no longer meets the requirements laid down in Article 33 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the notification as appropriate, depending on the seriousness of the failure. It shall also immediately inform the Commission and the other Member States accordingly.
In the event of restriction, suspension or withdrawal of notification, or where the notified body has ceased its activity, the notifying authority shall take appropriate steps to ensure that the files of that notified body are either taken over by another notified body or kept available for the responsible notifying authorities at their request.
The Commission shall, where necessary, investigate all cases where there are reasons to doubt whether a notified body complies with the requirements laid down in Article 33.
The notifying authority shall provide the Commission, on request, with all relevant information relating to the notification of the notified body concerned.
The Commission shall ensure that all confidential information obtained in the course of its investigations pursuant to this Article is treated confidentially.
Where the Commission ascertains that a notified body does not meet or no longer meets the requirements laid down in Article 33, it shall adopt a reasoned decision requesting the notifying Member State to take the necessary corrective measures, including withdrawal of notification if necessary. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 74(2).
The Commission shall ensure that, with regard to the areas covered by this Regulation, appropriate coordination and cooperation between notified bodies active in the conformity assessment procedures of AI systems pursuant to this Regulation are put in place and properly operated in the form of a sectoral group of notified bodies.
Member States shall ensure that the bodies notified by them participate in the work of that group, directly or by means of designated representatives.
Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation.
High-risk AI systems which are in conformity with harmonised standards or parts thereof the references of which have been published in the Official Journal of the European Union shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.
Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of relevant bodies or expert groups established under relevant sectorial Union law.
High-risk AI systems which are in conformity with the common specifications referred to in paragraph 1 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those common specifications cover those requirements.
Where providers do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that are at least equivalent thereto.
Taking into account their intended purpose, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).
High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council 63 and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements.
For high-risk AI systems referred to in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40 or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:
(a) the conformity assessment procedure based on internal control referred to in Annex VI;
(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.
Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII.
For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.
For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
For high-risk AI systems, to which legal acts listed in Annex II, section A, apply, the provider shall follow the relevant conformity assessment as required under those legal acts. The requirements set out in Chapter 2 of this Title shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.
For the purpose of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Chapter 2 of this Title, provided that the compliance of those notified bodies with requirements laid down in Article 33(4), (9) and (10) has been assessed in the context of the notification procedure under those legal acts.
Where the legal acts listed in Annex II, section A, enable the manufacturer of the product to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may make use of that option only if he has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering the requirements set out in Chapter 2 of this Title.
For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.
The Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and Annex VII in order to introduce elements of the conformity assessment procedures that become necessary in light of technical progress.
The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.
Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in an official Union language determined by the Member State in which the notified body is established or in an official Union language otherwise acceptable to the notified body.
Certificates shall be valid for the period they indicate, which shall not exceed five years. On application by the provider, the validity of a certificate may be extended for further periods, each not exceeding five years, based on a re-assessment in accordance with the applicable conformity assessment procedures.
Where a notified body finds that an AI system no longer meets the requirements set out in Chapter 2 of this Title, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose any restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision.
Member States shall ensure that an appeal procedure against decisions of the notified bodies is available to parties having a legitimate interest in that decision.
Notified bodies shall inform the notifying authority of the following:
(a) any Union technical documentation assessment certificates, any supplements to those certificates, and quality management system approvals issued in accordance with the requirements of Annex VII;
(b) any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII;
(c) any circumstances affecting the scope of or conditions for notification;
(d) any request for information which they have received from market surveillance authorities regarding conformity assessment activities;
(e) on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting.
Each notified body shall inform the other notified bodies of:
(a) quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued;
(b) EU technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued.
By way of derogation from Article 43, any market surveillance authority may authorise the placing on the market or putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection and the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period of time, while the necessary conformity assessment procedures are being carried out, and shall terminate once those procedures have been completed. The completion of those procedures shall be undertaken without undue delay.
The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Chapter 2 of this Title. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraph 1.
Where, within 15 calendar days of receipt of the information referred to in paragraph 2, no objection has been raised by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed justified.
Where, within 15 calendar days of receipt of the notification referred to in paragraph 2, objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 2 to be unfounded, the Commission shall without delay enter into consultation with the relevant Member State; the operator(s) concerned shall be consulted and have the possibility to present their views. In view thereof, the Commission shall decide whether the authorisation is justified or not. The Commission shall address its decision to the Member State concerned and the relevant operator or operators.
If the authorisation is considered unjustified, this shall be withdrawn by the market surveillance authority of the Member State concerned.
By way of derogation from paragraphs 1 to 5, for high-risk AI systems intended to be used as safety components of devices, or which are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, Article 59 of Regulation (EU) 2017/745 and Article 54 of Regulation (EU) 2017/746 shall apply also with regard to the derogation from the conformity assessment of the compliance with the requirements set out in Chapter 2 of this Title.
The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authorities upon request.
The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in Chapter 2 of this Title. The EU declaration of conformity shall contain the information set out in Annex V and shall be translated into an official Union language or languages required by the Member State(s) in which the high-risk AI system is made available.
Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union legislations applicable to the high-risk AI system. The declaration shall contain all the information required for identification of the Union harmonisation legislation to which the declaration relates.
By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Chapter 2 of this Title. The provider shall keep the EU declaration of conformity up-to-date as appropriate.
The Commission shall be empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating the content of the EU declaration of conformity set out in Annex V in order to introduce elements that become necessary in light of technical progress.
The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.
The CE marking referred to in paragraph 1 of this Article shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.
Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking.
The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:
(a) the technical documentation referred to in Article 11;
(b) the documentation concerning the quality management system referred to Article 17;
(c) the documentation concerning the changes approved by notified bodies where applicable;
(d) the decisions and other documents issued by the notified bodies where applicable;
(e) the EU declaration of conformity referred to in Article 48.
Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.
Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
Users of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
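The disclosure duty for deep fakes is technology-neutral: the Regulation does not prescribe watermarks, metadata or any particular mechanism. One of several possible approaches is to ship generated media together with a machine-readable notice; the sidecar format below is invented purely for illustration.

```python
import json
from pathlib import Path

def write_disclosure_sidecar(media_path: str, generator: str) -> Path:
    """Write a sidecar file declaring that the media was artificially generated or manipulated."""
    disclosure = {
        "artificially_generated_or_manipulated": True,
        "generator": generator,
        "notice": "This content has been artificially generated or manipulated.",
    }
    sidecar = Path(media_path + ".disclosure.json")
    sidecar.write_text(json.dumps(disclosure, indent=2), encoding="utf-8")
    return sidecar

# Example: produces campaign_video.mp4.disclosure.json next to the media file.
write_disclosure_sidecar("campaign_video.mp4", generator="example-video-model")
```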
AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.
Member States shall ensure that to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox.
The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.
Participants in the AI regulatory sandbox shall remain liable under applicable Union and Member States liability legislation for any harm inflicted on third parties as a result from the experimentation taking place in the sandbox.
Member States’ competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.
The modalities and the conditions of the operation of the AI regulatory sandboxes, including the eligibility criteria and the procedure for the application, selection, participation and exiting from the sandbox, and the rights and obligations of the participants shall be set out in implementing acts. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).
Personal data lawfully collected for other purposes shall be processed in the AI regulatory sandbox for the purposes of developing and testing certain innovative AI systems in the sandbox under the following conditions:
(a) the innovative AI systems shall be developed for safeguarding substantial public interest in one or more of the following areas:
(i) the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security, under the control and responsibility of the competent authorities. The processing shall be based on Member State or Union law;
(ii) public safety and public health, including disease prevention, control and treatment;
(iii) a high level of protection and improvement of the quality of the environment;
(b) the data processed are necessary for complying with one or more of the requirements referred to in Title III, Chapter 2 where those requirements cannot be effectively fulfilled by processing anonymised, synthetic or other non-personal data;
(c) there are effective monitoring mechanisms to identify if any high risks to the fundamental rights of the data subjects may arise during the sandbox experimentation as well as response mechanism to promptly mitigate those risks and, where necessary, stop the processing;
(d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the participants and only authorised persons have access to that data;
(e) any personal data processed are not transmitted, transferred or otherwise accessed by other parties;
(f) any processing of personal data in the context of the sandbox does not lead to measures or decisions affecting the data subjects;
(g) any personal data processed in the context of the sandbox are deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;
(h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox and 1 year after its termination, solely for the purpose of and only as long as necessary for fulfilling accountability and documentation obligations under this Article or other applicable Union or Member State legislation;
(i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation referred to in Annex IV;
(j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities.
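Conditions (d), (g) and (h) translate into fairly concrete engineering duties: personal data stays in an isolated environment, is deleted when participation ends, and only the processing logs survive for one further year. A minimal sketch of the exit step, assuming an invented directory layout:

```python
import shutil
from datetime import datetime, timedelta, timezone
from pathlib import Path

SANDBOX_ROOT = Path("sandbox_env")               # isolated processing environment, condition (d)
PERSONAL_DATA_DIR = SANDBOX_ROOT / "personal_data"
PROCESSING_LOG_DIR = SANDBOX_ROOT / "processing_logs"
LOG_RETENTION_AFTER_EXIT = timedelta(days=365)   # condition (h): one year after termination

def terminate_sandbox_participation(now: datetime | None = None) -> datetime:
    """Delete personal data on exit and return the date until which processing logs are retained."""
    now = now or datetime.now(timezone.utc)
    if PERSONAL_DATA_DIR.exists():
        shutil.rmtree(PERSONAL_DATA_DIR)          # condition (g): delete personal data at termination
    # Processing logs are kept, condition (h); record when they too must be purged.
    retain_until = now + LOG_RETENTION_AFTER_EXIT
    PROCESSING_LOG_DIR.mkdir(parents=True, exist_ok=True)
    (PROCESSING_LOG_DIR / "RETAIN_UNTIL.txt").write_text(retain_until.isoformat(), encoding="utf-8")
    return retain_until
```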
Member States shall undertake the following actions:
(a) provide small-scale providers and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;
(b) organise specific awareness raising activities about the application of this Regulation tailored to the needs of the small-scale providers and users;
(c) where appropriate, establish a dedicated channel for communication with small-scale providers and users and other innovators to provide guidance and respond to queries about the implementation of this Regulation.
A ‘European Artificial Intelligence Board’ (the ‘Board’) is established.
The Board shall provide advice and assistance to the Commission in order to:
(a) contribute to the effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;
(b) coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;
(c) assist the national supervisory authorities and the Commission in ensuring the consistent application of this Regulation.
The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.
The Board shall adopt its rules of procedure by a simple majority of its members, following the consent of the Commission. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish sub-groups as appropriate for the purpose of examining specific questions.
The Board shall be chaired by the Commission. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.
The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.
When providing advice and assistance to the Commission in the context of Article 56(2), the Board shall in particular:
(a) collect and share expertise and best practices among Member States;
(b) contribute to uniform administrative practices in the Member States, including for the functioning of regulatory sandboxes referred to in Article 53;
(c) issue opinions, recommendations or written contributions on matters related to the implementation of this Regulation, in particular:
(i) on technical specifications or existing standards regarding the requirements set out in Title III, Chapter 2,
(ii) on the use of harmonised standards or common specifications referred to in Articles 40 and 41,
(iii) on the preparation of guidance documents, including the guidelines concerning the setting of administrative fines referred to in Article 71.
National competent authorities shall be established or designated by each Member State for the purpose of ensuring the application and implementation of this Regulation. National competent authorities shall be organised so as to safeguard the objectivity and impartiality of their activities and tasks.
Each Member State shall designate a national supervisory authority among the national competent authorities. The national supervisory authority shall act as notifying authority and market surveillance authority unless a Member State has organisational and administrative reasons to designate more than one authority.
Member States shall inform the Commission of their designation or designations and, where applicable, the reasons for designating more than one authority.
Member States shall ensure that national competent authorities are provided with adequate financial and human resources to fulfil their tasks under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements.
Member States shall report to the Commission on an annual basis on the status of the financial and human resources of the national competent authorities with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations.
The Commission shall facilitate the exchange of experience between national competent authorities.
National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators.
When Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as the competent authority for their supervision.
The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51.
The data listed in Annex VIII shall be entered into the EU database by the providers. The Commission shall provide them with technical and administrative support.
Information contained in the EU database shall be accessible to the public.
The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider.
The Commission shall be the controller of the EU database. It shall also ensure that providers receive adequate technical and administrative support.
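For illustration only, a provider preparing a registration entry might assemble a record along the following lines before submitting it to the EU database. The authoritative field list is set out in Annex VIII (not reproduced here), so every field name in this sketch is an assumption.

```python
# Hypothetical sketch of a registration record; field names are assumptions,
# not the Annex VIII list itself.
from dataclasses import dataclass, field, asdict

@dataclass
class HighRiskAISystemRegistration:
    provider_name: str
    responsible_contact: str                  # natural person responsible for registering the system
    system_name: str
    intended_purpose: str
    member_states_where_available: list = field(default_factory=list)
    eu_declaration_of_conformity_ref: str = ""

record = HighRiskAISystemRegistration(
    provider_name="Example AI GmbH",
    responsible_contact="compliance@example.eu",
    system_name="Recruitment Screening Assistant",
    intended_purpose="Screening of job applications (Annex III, point 4)",
    member_states_where_available=["DE", "FR"],
)
print(asdict(record))  # payload a provider could keep up to date after registration
```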
Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.
The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.
The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan.
For high-risk AI systems covered by the legal acts referred to in Annex II, where a post-market monitoring system and plan is already established under that legislation, the elements described in paragraphs 1, 2 and 3 shall be integrated into that system and plan as appropriate.
The first subparagraph shall also apply to high-risk AI systems referred to in point 5(b) of Annex III placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU.
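One plausible, purely illustrative way to operationalise the active and systematic data collection described above is an append-only performance log that feeds the post-market monitoring plan. The schema, file format and metric names below are assumptions, not requirements drawn from the Regulation.

```python
# Minimal sketch of a post-market monitoring log (JSON Lines); all names are illustrative.
import json
from datetime import datetime, timezone

def log_monitoring_event(path: str, metric: str, value: float, source: str) -> None:
    """Append one performance observation to the monitoring log."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metric": metric,      # e.g. observed accuracy on deployed traffic
        "value": value,
        "source": source,      # e.g. "user_feedback" or "automated_telemetry"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_monitoring_event("postmarket_log.jsonl", "accuracy", 0.91, "automated_telemetry")
```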
Such notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunctioning or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider becomes aware of the serious incident or of the malfunctioning.
Upon receiving a notification related to a breach of obligations under Union law intended to protect fundamental rights, the market surveillance authority shall inform the national public authorities or bodies referred to in Article 64(3). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1. That guidance shall be issued 12 months after the entry into force of this Regulation, at the latest.
For high-risk AI systems referred to in point 5(b) of Annex III which are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU and for high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents or malfunctioning shall be limited to those that constitute a breach of obligations under Union law intended to protect fundamental rights.
(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Title III, Chapter 3 of this Regulation;
(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.
The national supervisory authority shall report to the Commission on a regular basis the outcomes of relevant market surveillance activities. The national supervisory authority shall report, without delay, to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules.
For high-risk AI systems, related to products to which legal acts listed in Annex II, section A apply, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market surveillance activities designated under those legal acts.
For AI systems placed on the market, put into service or used by financial institutions regulated by Union legislation on financial services, the market surveillance authority for the purposes of this Regulation shall be the relevant authority responsible for the financial supervision of those institutions under that legislation.
For AI systems listed in point 1(a) of Annex III, in so far as the systems are used for law enforcement purposes, and in points 6 and 7 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Directive (EU) 2016/680 or Regulation (EU) 2016/679, or the national competent authorities supervising the activities of the law enforcement, immigration or asylum authorities putting into service or using those systems.
Where Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority.
Member States shall facilitate the coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed in Annex II or other Union legislation that might be relevant for the high-risk AI systems referred to in Annex III.
As regards access to data and documentation in the context of their activities, the market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘APIs’) or other appropriate technical means and tools enabling remote access.
Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system.
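As a purely illustrative sketch of the “appropriate technical means” for remote access mentioned above, a provider could expose read-only dataset metadata to a market surveillance authority through a small web endpoint. The framework choice (Flask), the routes and the payloads are assumptions, and authentication and access control are omitted for brevity although they would be essential in practice.

```python
# Hypothetical read-only endpoint exposing dataset metadata to a surveillance authority.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# In practice these entries would describe the real training, validation and testing sets.
DATASETS = {
    "training":   {"rows": 120_000, "features": ["age", "income"], "label": "outcome"},
    "validation": {"rows": 15_000,  "features": ["age", "income"], "label": "outcome"},
    "testing":    {"rows": 15_000,  "features": ["age", "income"], "label": "outcome"},
}

@app.route("/datasets/<split>", methods=["GET"])
def dataset_metadata(split: str):
    if split not in DATASETS:
        abort(404)
    return jsonify(DATASETS[split])

if __name__ == "__main__":
    app.run(port=8080)  # access would be restricted to the requesting authority
```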
National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation when access to that documentation is necessary for the fulfilment of the competences under their mandate within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request.
By 3 months after the entering into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available on the website of the national supervisory authority. Member States shall notify the list to the Commission and all other Member States and keep the list up to date.
Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law intended to protect fundamental rights has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the market surveillance authority to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within reasonable time following the request.
Any information and documentation obtained by the national public authorities or bodies referred to in paragraph 3 pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.
AI systems presenting a risk shall be understood as products presenting a risk as defined in Article 3, point 19 of Regulation (EU) 2019/1020, insofar as risks to the health or safety or to the protection of fundamental rights of persons are concerned.
Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, it shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).
Where, in the course of that evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph.
Where the market surveillance authority considers that non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States of the results of the evaluation and of the actions which it has required the operator to take.
The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the market throughout the Union.
Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the making available of the AI system on its national market, to withdraw the product from that market or to recall it. That authority shall inform the Commission and the other Member States, without delay, of those measures.
The information referred to in paragraph 5 shall include all available details, in particular the data necessary for the identification of the non-compliant AI system, the origin of the AI system, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following:
(a) a failure of the AI system to meet requirements set out in Title III, Chapter 2;
(b) shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring a presumption of conformity.
The market surveillance authorities of the Member States other than the market surveillance authority of the Member State initiating the procedure shall without delay inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.
Where, within three months of receipt of the information referred to in paragraph 5, no objection has been raised by either a Member State or the Commission in respect of a provisional measure taken by a Member State, that measure shall be deemed justified. This is without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020.
The market surveillance authorities of all Member States shall ensure that appropriate restrictive measures are taken in respect of the product concerned, such as withdrawal of the product from their market, without delay.
Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, the Commission shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned.
If the national measure is considered justified, all Member States shall take the measures necessary to ensure that the non-compliant AI system is withdrawn from their market, and shall inform the Commission accordingly. If the national measure is considered unjustified, the Member State concerned shall withdraw the measure.
Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.
Where, having performed an evaluation under Article 65, the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.
The provider or other relevant operators shall ensure that corrective action is taken in respect of all the AI systems concerned that they have made available on the market throughout the Union within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1.
The Member State shall immediately inform the Commission and the other Member States. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.
The Commission shall without delay enter into consultation with the Member States and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.
The Commission shall address its decision to the Member States.
Where a Member State makes one of the following findings, it shall require the relevant operator to put an end to the non-compliance concerned:
(a) the conformity marking has been affixed in violation of Article 49;
(b) the conformity marking has not been affixed;
(c) the EU declaration of conformity has not been drawn up;
(d) the EU declaration of conformity has not been drawn up correctly;
(e) the identification number of the notified body, which is involved in the conformity assessment procedure, where applicable, has not been affixed;
The Commission and the Member States shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2 on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.
The Commission and the Board shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems of requirements related, for example, to environmental sustainability, accessibility for persons with disabilities, stakeholder participation in the design and development of the AI systems, and diversity of development teams, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives.
Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.
The Commission and the Board shall take into account the specific interests and needs of the small-scale providers and start-ups when encouraging and facilitating the drawing up of codes of conduct.
National competent authorities and notified bodies involved in the application of this Regulation shall respect the confidentiality of information and data obtained in carrying out their tasks, in particular to protect:
(a) intellectual property rights, and confidential business information or trade secrets of a natural or legal person, including source code, except in the cases referred to in Article 5 of Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure;
(b) the effective implementation of this Regulation, in particular for the purpose of inspections, investigations or audits;
(c) public and national security interests;
(d) the integrity of criminal or administrative proceedings.
When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 63(5) and (6), as applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or any copy thereof.
Paragraphs 1 and 2 shall not affect the rights and obligations of the Commission, Member States and notified bodies with regard to the exchange of information and the dissemination of warnings, nor the obligations of the parties concerned to provide information under criminal law of the Member States.
The Commission and Member States may exchange, where necessary, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.
In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate and dissuasive. They shall take into particular account the interests of small-scale providers and start-ups and their economic viability.
The Member States shall notify the Commission of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.
The following infringements shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;
(b) non-compliance of the AI system with the requirements laid down in Article 10.
The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
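For illustration only, the “whichever is higher” ceiling in the three tiers above can be computed as in the following sketch. The function name and tier labels are assumptions; the thresholds simply restate the amounts quoted above.

```python
# Minimal sketch of the fine ceilings quoted above; illustrative only.
def max_administrative_fine(tier: str, worldwide_annual_turnover_eur: float | None = None) -> float:
    """Return the upper bound of the fine for a given tier ("whichever is higher" rule)."""
    caps = {
        "article_5_or_10": (30_000_000, 0.06),        # prohibited practices / data governance
        "other_obligations": (20_000_000, 0.04),      # other requirements or obligations
        "incorrect_information": (10_000_000, 0.02),  # incorrect, incomplete or misleading information
    }
    fixed_cap, turnover_share = caps[tier]
    if worldwide_annual_turnover_eur is None:         # offender is not a company
        return float(fixed_cap)
    return max(float(fixed_cap), turnover_share * worldwide_annual_turnover_eur)

# Example: a company with EUR 1 billion turnover infringing Article 5
print(max_administrative_fine("article_5_or_10", 1_000_000_000))  # 60000000.0
```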
When deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:
(a) the nature, gravity and duration of the infringement and of its consequences;
(b) whether administrative fines have already been applied by other market surveillance authorities to the same operator for the same infringement;
(c) the size and market share of the operator committing the infringement.
Each Member State shall lay down rules on whether and to what extent administrative fines may be imposed on public authorities and bodies established in that Member State.
Depending on the legal system of the Member States, the rules on administrative fines may be applied in such a manner that the fines are imposed by competent national courts or other bodies, as applicable in those Member States. The application of such rules in those Member States shall have an equivalent effect.
When deciding whether to impose an administrative fine on a Union institution, agency or body falling within the scope of this Regulation, and when deciding on its amount, the European Data Protection Supervisor shall take into account all relevant circumstances of the specific situation and shall give due regard to the following:
(a) the nature, gravity and duration of the infringement and of its consequences;
(b) the cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the Union institution or agency or body concerned with regard to the same subject matter;
(c) any similar previous infringements by the Union institution, agency or body;
The following infringements by Union institutions, agencies or bodies shall be subject to administrative fines:
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;
(b) non-compliance of the AI system with the requirements laid down in Article 10.
The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 250 000 EUR.
Before taking decisions pursuant to this Article, the European Data Protection Supervisor shall give the Union institution, agency or body which is the subject of the proceedings conducted by the European Data Protection Supervisor the opportunity of being heard on the matter regarding the possible infringement. The European Data Protection Supervisor shall base his or her decisions only on elements and circumstances on which the parties concerned have been able to comment. Complainants, if any, shall be associated closely with the proceedings.
The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.
Funds collected by imposition of fines in this Article shall be the income of the general budget of the Union.
The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.
The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].
The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.
As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.
Any delegated act adopted pursuant to Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.
The Commission shall be assisted by a committee. That committee shall be a committee within the meaning of Regulation (EU) No 182/2011.
Where reference is made to this paragraph, Article 5 of Regulation (EU) No 182/2011 shall apply.
In Article 4(3) of Regulation (EC) No 300/2008, the following subparagraph is added:
“When adopting detailed measures related to technical specifications and procedures for approval and use of security equipment concerning Artificial Intelligence systems in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Chapter 2, Title III of that Regulation shall be taken into account.”
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
In Article 17(5) of Regulation (EU) No 167/2013, the following subparagraph is added:
“When adopting delegated acts pursuant to the first subparagraph concerning artificial intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
In Article 22(5) of Regulation (EU) No 168/2013, the following subparagraph is added:
“When adopting delegated acts pursuant to the first subparagraph concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX on [Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
In Article 8 of Directive 2014/90/EU, the following paragraph is added:
“4. For Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, when carrying out its activities pursuant to paragraph 1 and when adopting technical specifications and testing standards in accordance with paragraphs 2 and 3, the Commission shall take into account the requirements set out in Title III, Chapter 2 of that Regulation.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).“.
In Article 5 of Directive (EU) 2016/797, the following paragraph is added:
“12. When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to paragraph 11 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).“.
In Article 5 of Regulation (EU) 2018/858 the following paragraph is added:
“4. When adopting delegated acts pursuant to paragraph 3 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council *, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).“.
Regulation (EU) 2018/1139 is amended as follows:
“3. Without prejudice to paragraph 2, when adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”
“4. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”
“4. When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”
“3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”
“When adopting those implementing acts concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”
“3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] , the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”.
In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added:
“3. When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.
__________
* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).“.
The requirements laid down in this Regulation shall be taken into account, where applicable, in the evaluation of each large-scale IT system established by the legal acts listed in Annex IX, to be undertaken as provided for in those respective acts.
The Commission shall assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation.
By [three years after the date of application of this Regulation referred to in Article 85(2)] and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.
The reports referred to in paragraph 2 shall devote specific attention to the following:
(a) the status of the financial and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;
(b) the state of penalties, and notably administrative fines as referred to in Article 71(1), applied by Member States to infringements of the provisions of this Regulation.
Within [three years after the date of application of this Regulation referred to in Article 85(2)] and every four years thereafter, the Commission shall evaluate the impact and effectiveness of codes of conduct to foster the application of the requirements set out in Title III, Chapter 2 and possibly other additional requirements for AI systems other than high-risk AI systems.
For the purpose of paragraphs 1 to 4 the Board, the Member States and national competent authorities shall provide the Commission with information on its request.
In carrying out the evaluations and reviews referred to in paragraphs 1 to 4 the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources.
The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology and in the light of the state of progress in the information society.
This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.
This Regulation shall apply from [24 months following the entering into force of the Regulation].
By way of derogation from paragraph 2:
(a) Title III, Chapter 4 and Title VI shall apply from [three months following the entry into force of this Regulation];
(b) Article 71 shall apply from [twelve months following the entry into force of this Regulation].
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Done at Brussels,
For the European Parliament For the Council
The President The President
Artificial intelligence techniques and approaches referred to in Article 3, point 1:
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
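For readers unfamiliar with the techniques listed above, the following minimal sketch shows what point (a), supervised machine learning, looks like in practice. The data is synthetic and the library choice (scikit-learn) is arbitrary; the example is illustrative only.

```python
# A few lines of supervised machine learning, the kind of technique covered by point (a).
from sklearn.linear_model import LogisticRegression

X = [[0.1, 1.2], [0.4, 0.9], [1.5, 0.2], [1.8, 0.1]]   # toy feature vectors
y = [0, 0, 1, 1]                                        # toy labels

model = LogisticRegression().fit(X, y)   # learns a mapping from labelled examples
print(model.predict([[1.6, 0.3]]))       # predicted class for a new input
```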
Section A — List of Union harmonisation legislation based on the New Legislative Framework
Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repealed by the Machinery Regulation];
Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1);
Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);
Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251);
Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);
Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62);
Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164);
Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);
Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);
Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);
Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
Section B — List of other Union harmonisation legislation
Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72).
Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52);
Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1);
Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146);
Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44).
Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1);
Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1);
Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in points (a) and (b) of Article 2(1) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to control them remotely, are concerned.
High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.
(a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
(b) AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating the performance and behaviour of persons in such relationships.
(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use;
(c) AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
(c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in Article 52(3);
(d) AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
(f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
(g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
(c) AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
(d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system:
(a) its intended purpose, the person(s) developing the system, the date and the version of the system;
(b) how the AI system interacts or can be used to interact with hardware or software that is not part of the AI system itself, where applicable;
(c) the versions of relevant software or firmware and any requirement related to version update;
(d) the description of all forms in which the AI system is placed on the market or put into service;
(e) the description of hardware on which the AI system is intended to run;
(f) where the AI system is a component of products, photographs or illustrations showing external features, marking and internal layout of those products;
(g) instructions of use for the user and, where applicable, installation instructions;
(a) the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;
(b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;
(c) the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;
(d) where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);
(e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the users, in accordance with Article 13(3)(d);
(f) where applicable, a detailed description of pre-determined changes to the AI system and its performance, together with all the relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Title III, Chapter 2;
(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).
Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;
A detailed description of the risk management system in accordance with Article 9;
A description of any change made to the system through its lifecycle;
A list of the harmonised standards applied in full or in part, the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of other relevant standards and technical specifications applied;
A copy of the EU declaration of conformity;
A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 61, including the post-market monitoring plan referred to in Article 61(3).
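The documentation items above can be thought of as a structured dossier. The following skeleton is a hypothetical, much-abbreviated illustration; the grouping and key names are assumptions, and each placeholder marks content the provider would have to supply.

```python
# Hypothetical, abbreviated skeleton of a technical documentation dossier (Annex IV themes).
technical_documentation = {
    "general_description": {
        "intended_purpose": "...",
        "provider_and_version": "...",
        "hardware_and_software_environment": "...",
        "instructions_for_use": "...",
    },
    "development": {
        "methods_and_third_party_tools": "...",
        "design_specifications": "...",
        "system_architecture": "...",
        "data_requirements": {"training_data": "...", "labelling": "...", "cleaning": "..."},
        "human_oversight_measures": "...",
        "predetermined_changes": "...",
        "validation_and_testing": {"metrics": ["accuracy", "robustness", "cybersecurity"]},
    },
    "monitoring_functioning_control": "...",
    "risk_management_system": "...",
    "lifecycle_changes": "...",
    "standards_and_specifications_applied": [],
    "eu_declaration_of_conformity": "...",
    "post_market_monitoring_plan": "...",
}
```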
The EU declaration of conformity referred to in Article 48, shall contain all of the following information:
AI system name and type and any additional unambiguous reference allowing identification and traceability of the AI system;
Name and address of the provider or, where applicable, their authorised representative;
A statement that the EU declaration of conformity is issued under the sole responsibility of the provider;
A statement that the AI system in question is in conformity with this Regulation and, if applicable, with any other relevant Union legislation that provides for the issuing of an EU declaration of conformity;
References to any relevant harmonised standards used or any other common specification in relation to which conformity is declared;
Where applicable, the name and identification number of the notified body, a description of the conformity assessment procedure performed and identification of the certificate issued;
Place and date of issue of the declaration, the name and function of the person who signed it, an indication for whom and on whose behalf that person signed, and the signature.
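Viewed as data, the declaration reduces to a small record. The sketch below is hypothetical; the field names and example values merely follow the items listed above and are not prescribed by the Regulation.

```python
# Hypothetical record mirroring the declaration items listed above; values are placeholders.
eu_declaration_of_conformity = {
    "ai_system": {"name": "Recruitment Screening Assistant", "type": "v2.3", "reference": "SN-0001"},
    "provider": {"name": "Example AI GmbH", "address": "Example Str. 1, Berlin"},
    "sole_responsibility_statement": True,
    "conformity_with": ["Regulation (EU) YYY/XX [on Artificial Intelligence]"],
    "harmonised_standards_or_common_specifications": [],
    "notified_body": {"name": None, "id": None, "certificate": None},   # where applicable
    "issued": {"place": "Berlin", "date": "2025-01-01", "signatory": "Jane Doe, CEO"},
}
print(eu_declaration_of_conformity["provider"]["name"])
```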
The conformity assessment procedure based on internal control is the conformity assessment procedure based on points 2 to 4.
The provider verifies that the established quality management system is in compliance with the requirements of Article 17.
The provider examines the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements set out in Title III, Chapter 2.
The provider also verifies that the design and development process of the AI system and its post-market monitoring as referred to in Article 61 is consistent with the technical documentation.
Conformity based on assessment of quality management system and assessment of the technical documentation is the conformity assessment procedure based on points 2 to 5.
The approved quality management system for the design, development and testing of AI systems pursuant to Article 17 shall be examined in accordance with point 3 and shall be subject to surveillance as specified in point 5. The technical documentation of the AI system shall be examined in accordance with point 4.
3.1. The application of the provider shall include:
(a) the name and address of the provider and, if the application is lodged by the authorised representative, their name and address as well;
(b) the list of AI systems covered under the same quality management system;
(c) the technical documentation for each AI system covered under the same quality management system;
(d) the documentation concerning the quality management system which shall cover all the aspects listed under Article 17;
(e) a description of the procedures in place to ensure that the quality management system remains adequate and effective;
(f) a written declaration that the same application has not been lodged with any other notified body.
3.2. The quality management system shall be assessed by the notified body, which shall determine whether it satisfies the requirements referred to in Article 17.
The decision shall be notified to the provider or its authorised representative.
The notification shall contain the conclusions of the assessment of the quality management system and the reasoned assessment decision.
3.3. The quality management system as approved shall continue to be implemented and maintained by the provider so that it remains adequate and efficient.
3.4. Any intended change to the approved quality management system or the list of AI systems covered by the latter shall be brought to the attention of the notified body by the provider.
The proposed changes shall be examined by the notified body, which shall decide whether the modified quality management system continues to satisfy the requirements referred to in point 3.2 or whether a reassessment is necessary.
The notified body shall notify the provider of its decision. The notification shall contain the conclusions of the examination of the changes and the reasoned assessment decision.
4.1. In addition to the application referred to in point 3, an application with a notified body of their choice shall be lodged by the provider for the assessment of the technical documentation relating to the AI system which the provider intends to place on the market or put into service and which is covered by the quality management system referred to under point 3.
4.2. The application shall include:
(a) the name and address of the provider;
(b) a written declaration that the same application has not been lodged with any other notified body;
(c) the technical documentation referred to in Annex IV.
4.3. The technical documentation shall be examined by the notified body. To this purpose, the notified body shall be granted full access to the training and testing datasets used by the provider, including through application programming interfaces (API) or other appropriate means and tools enabling remote access.
4.4. In examining the technical documentation, the notified body may require that the provider supplies further evidence or carries out further tests so as to enable a proper assessment of conformity of the AI system with the requirements set out in Title III, Chapter 2. Whenever the notified body is not satisfied with the tests carried out by the provider, the notified body shall directly carry out adequate tests, as appropriate.
4.5. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the notified body shall also be granted access to the source code of the AI system.
4.6. The decision shall be notified to the provider or its authorised representative. The notification shall contain the conclusions of the assessment of the technical documentation and the reasoned assessment decision.
Where the AI system is in conformity with the requirements set out in Title III, Chapter 2, an EU technical documentation assessment certificate shall be issued by the notified body. The certificate shall indicate the name and address of the provider, the conclusions of the examination, the conditions (if any) for its validity and the data necessary for the identification of the AI system.
The certificate and its annexes shall contain all relevant information to allow the conformity of the AI system to be evaluated, and to allow for control of the AI system while in use, where applicable.
Where the AI system is not in conformity with the requirements set out in Title III, Chapter 2, the notified body shall refuse to issue an EU technical documentation assessment certificate and shall inform the applicant accordingly, giving detailed reasons for its refusal.
Where the AI system does not meet the requirement relating to the data used to train it, re-training of the AI system will be needed prior to the application for a new conformity assessment. In this case, the reasoned assessment decision of the notified body refusing to issue the EU technical documentation assessment certificate shall contain specific considerations on the quality of the data used to train the AI system, notably on the reasons for non-compliance.
4.7. Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended purpose shall be approved by the notified body which issued the EU technical documentation assessment certificate. The provider shall inform such notified body of its intention to introduce any of the above-mentioned changes or if it becomes otherwise aware of the occurrence of such changes. The intended changes shall be assessed by the notified body which shall decide whether those changes require a new conformity assessment in accordance with Article 43(4) or whether they could be addressed by means of a supplement to the EU technical documentation assessment certificate. In the latter case, the notified body shall assess the changes, notify the provider of its decision and, where the changes are approved, issue to the provider a supplement to the EU technical documentation assessment certificate.
5.1. The purpose of the surveillance carried out by the notified body referred to in point 3 is to make sure that the provider duly fulfils the terms and conditions of the approved quality management system.
5.2. For assessment purposes, the provider shall allow the notified body to access the premises where the design, development and testing of the AI systems is taking place. The provider shall further share with the notified body all necessary information.
5.3. The notified body shall carry out periodic audits to make sure that the provider maintains and applies the quality management system and shall provide the provider with an audit report. In the context of those audits, the notified body may carry out additional tests of the AI systems for which an EU technical documentation assessment certificate was issued.
The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 51.
1. Name, address and contact details of the provider;
2. Where submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person;
3. Name, address and contact details of the authorised representative, where applicable;
4. AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;
5. Description of the intended purpose of the AI system;
6. Status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);
7. Type, number and expiry date of the certificate issued by the notified body and the name or identification number of that notified body, when applicable;
8. A scanned copy of the certificate referred to in point 7, when applicable;
9. Member States in which the AI system is or has been placed on the market, put into service or made available in the Union;
10. A copy of the EU declaration of conformity referred to in Article 48;
11. Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas of law enforcement and migration, asylum and border control management referred to in Annex III, points 1, 6 and 7.
12. URL for additional information (optional).
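The Annex only enumerates the information to be registered; it does not prescribe a data format. As a minimal sketch of how a provider might model the record internally, with field names and types invented for readability (the point numbers refer to the list above):

```python
# Illustrative sketch of the registration record enumerated above (Annex VIII).
# Field names and types are assumptions made for readability; the Annex itself
# only lists the information to be supplied, not a data format.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class HighRiskAISystemRegistration:
    provider_name_address_contact: str                  # point 1
    submitter_name_address_contact: Optional[str]       # point 2, if submitted on behalf of the provider
    authorised_representative: Optional[str]            # point 3, where applicable
    trade_name_and_reference: str                       # point 4
    intended_purpose: str                                # point 5
    status: str                                          # point 6: on the market, in service, no longer on the market/in service, recalled
    certificate_type_number_expiry: Optional[str]        # point 7, when applicable
    certificate_scan_uri: Optional[str]                  # point 8, when applicable
    member_states: List[str] = field(default_factory=list)  # point 9
    declaration_of_conformity_uri: str = ""              # point 10 (Article 48)
    instructions_for_use_uri: Optional[str] = None        # point 11; omitted for Annex III points 1, 6 and 7 systems
    additional_info_url: Optional[str] = None              # point 12 (optional)
```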
Schengen Information System:
(a) Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28 November 2018 on the use of the Schengen Information System for the return of illegally staying third-country nationals (OJ L 312, 7.12.2018, p. 1).
(b) Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of border checks, and amending the Convention implementing the Schengen Agreement, and amending and repealing Regulation (EC) No 1987/2006 (OJ L 312, 7.12.2018, p. 14).
(c) Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of police cooperation and judicial cooperation in criminal matters, amending and repealing Council Decision 2007/533/JHA, and repealing Regulation (EC) No 1986/2006 of the European Parliament and of the Council and Commission Decision 2010/261/EU (OJ L 312, 7.12.2018, p. 56).
European Travel Information and Authorisation System (ETIAS):
(a) Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, 19.9.2018, p. 1).
(b) Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12 September 2018 amending Regulation (EU) 2016/794 for the purpose of establishing a European Travel Information and Authorisation System (ETIAS) (OJ L 236, 19.9.2018, p. 72).
Interoperability between EU information systems:
(a) Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa (OJ L 135, 22.5.2019, p. 27).
(b) Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration (OJ L 135, 22.5.2019, p. 85).
REGULATION (EU) 2022/1925 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL of 14 September 2022
on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (Text with EEA relevance)
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION, Having regard to the Treaty on the Functioning of the European Union, and in particular Article 114 thereof, Having regard to the proposal from the European Commission, After transmission of the draft legislative act to the national parliaments, Having regard to the opinion of the European Economic and Social Committee [^1], Having regard to the opinion of the Committee of the Regions [^2], Acting in accordance with the ordinary legislative procedure [^3], Whereas:
Digital services in general and online platforms in particular play an increasingly important role in the economy, in particular in the internal market, by enabling businesses to reach users throughout the Union, by facilitating cross-border trade and by opening entirely new business opportunities to a large number of companies in the Union to the benefit of consumers in the Union.
At the same time, among those digital services, core platform services feature a number of characteristics that can be exploited by the undertakings providing them. An example of such characteristics of core platform services is extreme scale economies, which often result from nearly zero marginal costs to add business users or end users. Other such characteristics of core platform services are very strong network effects, an ability to connect many business users with many end users through the multisidedness of these services, a significant degree of dependence of both business users and end users, lock-in effects, a lack of multi-homing for the same purpose by end users, vertical integration, and data-driven advantages. All these characteristics, combined with unfair practices by undertakings providing the core platform services, can have the effect of substantially undermining the contestability of the core platform services, as well as impacting the fairness of the commercial relationship between undertakings providing such services and their business users and end users. In practice, this leads to rapid and potentially far-reaching decreases in business users’ and end users’ choice, and therefore can confer on the provider of those services the position of a so-called gatekeeper. At the same time, it should be recognised that services which act in a non-commercial purpose capacity such as collaborative projects should not be considered as core platform services for the purpose of this Regulation.
A small number of large undertakings providing core platform services have emerged with considerable economic power that could qualify them to be designated as gatekeepers pursuant to this Regulation. Typically, they feature an ability to connect many business users with many end users through their services, which, in turn, enables them to leverage their advantages, such as their access to large amounts of data, from one area of activity to another. Some of those undertakings exercise control over whole platform ecosystems in the digital economy and are structurally extremely difficult to challenge or contest by existing or new market operators, irrespective of how innovative and efficient those market operators may be. Contestability is reduced in particular due to the existence of very high barriers to entry or exit, including high investment costs, which cannot, or not easily, be recuperated in case of exit, and the absence of, or reduced access to, some key inputs in the digital economy, such as data. As a result, the likelihood increases that the underlying markets do not function well, or will soon fail to function well.
The combination of those features of gatekeepers is likely to lead, in many cases, to serious imbalances in bargaining power and, consequently, to unfair practices and conditions for business users, as well as for end users of core platform services provided by gatekeepers, to the detriment of prices, quality, fair competition, choice and innovation in the digital sector.
It follows that the market processes are often incapable of ensuring fair economic outcomes with regard to core platform services. Although Articles 101 and 102 of the Treaty on the Functioning of the European Union (TFEU) apply to the conduct of gatekeepers, the scope of those provisions is limited to certain instances of market power, for example dominance on specific markets and of anti-competitive behaviour, and enforcement occurs ex post and requires an extensive investigation of often very complex facts on a case by case basis. Moreover, existing Union law does not address, or does not address effectively, the challenges to the effective functioning of the internal market posed by the conduct of gatekeepers that are not necessarily dominant in competition-law terms.
Gatekeepers have a significant impact on the internal market, providing gateways for a large number of business users to reach end users everywhere in the Union and on different markets. The adverse impact of unfair practices on the internal market and the particularly weak contestability of core platform services, including the negative societal and economic implications of such unfair practices, have led national legislators and sectoral regulators to act. A number of regulatory solutions have already been adopted at national level or proposed to address unfair practices and the contestability of digital services or at least with regard to some of them. This has created divergent regulatory solutions which results in the fragmentation of the internal market, thus raising the risk of increased compliance costs due to different sets of national regulatory requirements.
Therefore, the purpose of this Regulation is to contribute to the proper functioning of the internal market by laying down rules to ensure contestability and fairness for the markets in the digital sector in general, and for business users and end users of core platform services provided by gatekeepers in particular. Business users and end users of core platform services provided by gatekeepers should be afforded appropriate regulatory safeguards throughout the Union against the unfair practices of gatekeepers, in order to facilitate cross-border business within the Union and thereby improve the proper functioning of the internal market, and to eliminate existing or likely emerging fragmentation in the specific areas covered by this Regulation. Moreover, while gatekeepers tend to adopt global or at least pan-European business models and algorithmic structures, they can adopt, and in some cases have adopted, different business conditions and practices in different Member States, which is liable to create disparities between the competitive conditions for the users of core platform services provided by gatekeepers, to the detriment of integration of the internal market.
By approximating diverging national laws, it is possible to eliminate obstacles to the freedom to provide and receive services, including retail services, within the internal market. A targeted set of harmonised legal obligations should therefore be established at Union level to ensure contestable and fair digital markets featuring the presence of gatekeepers within the internal market to the benefit of the Union’s economy as a whole and ultimately of the Union’s consumers.
Fragmentation of the internal market can only effectively be averted if Member States are prevented from applying national rules which are within the scope of and pursue the same objectives as this Regulation. That does not preclude the possibility of applying to gatekeepers within the meaning of this Regulation other national rules which pursue other legitimate public interest objectives as set out in the TFEU or which pursue overriding reasons of public interest as recognised by the case law of the Court of Justice of the European Union (‘the Court of Justice’).
At the same time, since this Regulation aims to complement the enforcement of competition law, it should apply without prejudice to Articles 101 and 102 TFEU, to the corresponding national competition rules and to other national competition rules regarding unilateral conduct that are based on an individualised assessment of market positions and behaviour, including its actual or potential effects and the precise scope of the prohibited behaviour, and which provide for the possibility of undertakings to make efficiency and objective justification arguments for the behaviour in question, and to national rules concerning merger control. However, the application of those rules should not affect the obligations imposed on gatekeepers under this Regulation and their uniform and effective application in the internal market.
Articles 101 and 102 TFEU and the corresponding national competition rules concerning anticompetitive multilateral and unilateral conduct as well as merger control have as their objective the protection of undistorted competition on the market. This Regulation pursues an objective that is complementary to, but different from that of protecting undistorted competition on any given market, as defined in competition-law terms, which is to ensure that markets where gatekeepers are present are and remain contestable and fair, independently from the actual, potential or presumed effects of the conduct of a given gatekeeper covered by this Regulation on competition on a given market. This Regulation therefore aims to protect a different legal interest from that protected by those rules and it should apply without prejudice to their application.
This Regulation should also apply without prejudice to the rules resulting from other acts of Union law regulating certain aspects of the provision of services covered by this Regulation, in particular Regulations (EU) 2016/679 [^4] and (EU) 2019/1150 [^5] of the European Parliament and of the Council and a Regulation on a single market for digital services, and Directives 2002/58/EC [^6], 2005/29/EC [^7], 2010/13/EU [^8], (EU) 2015/2366 [^9], (EU) 2019/790 [^10] and (EU) 2019/882 [^11] of the European Parliament and of the Council, and Council Directive 93/13/EEC [^12], as well as national rules aimed at enforcing or implementing those Union legal acts.
Weak contestability and unfair practices in the digital sector are more frequent and pronounced for certain digital services than for others. This is the case in particular for widespread and commonly used digital services that mostly directly intermediate between business users and end users and where features such as extreme scale economies, very strong network effects, an ability to connect many business users with many end users through the multisidedness of these services, lock-in effects, a lack of multi-homing or vertical integration are the most prevalent. Often, there is only one or very few large undertakings providing those digital services. Those undertakings have emerged most frequently as gatekeepers for business users and end users, with far-reaching impacts. In particular, they have gained the ability to easily set commercial conditions and terms in a unilateral and detrimental manner for their business users and end users. Accordingly, it is necessary to focus only on those digital services that are most broadly used by business users and end users and where concerns about weak contestability and unfair practices by gatekeepers are more apparent and pressing from an internal market perspective.
In particular, online intermediation services, online search engines, operating systems, online social networking, video sharing platform services, number-independent interpersonal communication services, cloud computing services, virtual assistants, web browsers and online advertising services, including advertising intermediation services, all have the capacity to affect a large number of end users and businesses, which entails a risk of unfair business practices. Therefore, they should be included in the definition of core platform services and fall into the scope of this Regulation. Online intermediation services can also be active in the field of financial services, and they can intermediate or be used to provide such services as listed non-exhaustively in Annex II to Directive (EU) 2015/1535 of the European Parliament and of the Council [^13]. For the purposes of this Regulation, the definition of core platform services should be technology neutral and should be understood to encompass those provided on or through various means or devices, such as connected TV or embedded digital services in vehicles. In certain circumstances, the notion of end users should encompass users that are traditionally considered business users, but in a given situation do not use the core platform services to provide goods or services to other end users, such as for example businesses relying on cloud computing services for their own purposes.
The fact that a digital service qualifies as a core platform service does not in itself give rise to sufficiently serious concerns of contestability or unfair practices. It is only when a core platform service constitutes an important gateway and is operated by an undertaking with a significant impact in the internal market and an entrenched and durable position, or by an undertaking that will foreseeably enjoy such a position in the near future, that such concerns arise. Accordingly, the targeted set of harmonised rules in this Regulation should apply only to undertakings designated on the basis of those three objective criteria, and they should only apply to those of their core platform services that individually constitute an important gateway for business users to reach end users. The fact that it is possible that an undertaking providing core platform services not only intermediates between business users and end users, but also between end users and end users, for example in the case of number-independent interpersonal communications services, should not preclude the conclusion that such an undertaking is or could be an important gateway for business users to reach end users.
In order to ensure the effective application of this Regulation to undertakings providing core platform services which are most likely to satisfy those objective requirements, and where unfair practices weakening contestability are most prevalent and have the most impact, the Commission should be able to directly designate as gatekeepers those undertakings providing core platform services which meet certain quantitative thresholds. Such undertakings should in any event be subject to a fast designation process which should start once this Regulation becomes applicable.
The fact that an undertaking has a very significant turnover in the Union and provides a core platform service in at least three Member States constitutes compelling indication that that undertaking has a significant impact on the internal market. This is equally true where an undertaking providing a core platform service in at least three Member States has a very significant market capitalisation or equivalent fair market value. Therefore, an undertaking providing a core platform service should be presumed to have a significant impact on the internal market where it provides a core platform service in at least three Member States and where either its group turnover realised in the Union is equal to or exceeds a specific, high threshold, or the market capitalisation of the group is equal to or exceeds a certain high absolute value. For undertakings providing core platform services that belong to undertakings that are not publicly listed, the equivalent fair market value should be used as the reference. It should be possible for the Commission to use its power to adopt delegated acts to develop an objective methodology to calculate that value. A high group turnover realised in the Union in conjunction with the threshold number of users in the Union of core platform services reflects a relatively strong ability to monetise those users. A high market capitalisation relative to the same threshold number of users in the Union reflects a relatively significant potential to monetise those users in the near future. This monetisation potential in turn reflects, in principle, the gateway position of the undertakings concerned. Both indicators, in addition, reflect the financial capacity of the undertakings concerned, including their ability to leverage their access to financial markets to reinforce their position. This can, for example, happen where this superior access is used to acquire other undertakings, an ability which has in turn been shown to have potential negative effects on innovation. Market capitalisation can also reflect the expected future position and effect on the internal market of the undertakings concerned, despite a potentially relatively low current turnover. The market capitalisation value should be based on a level that reflects the average market capitalisation of the largest publicly listed undertakings in the Union over an appropriate period.
Whereas a market capitalisation at or above the threshold in the last financial year should give rise to a presumption that an undertaking providing core platform services has a significant impact on the internal market, a sustained market capitalisation of the undertaking providing core platform services at or above the threshold over three or more years should be considered as further strengthening that presumption.
By contrast, there could be a number of factors concerning market capitalisation that would require an in-depth assessment in determining whether an undertaking providing core platform services should be deemed to have a significant impact on the internal market. This could be the case where the market capitalisation of the undertaking providing core platform services in preceding financial years was significantly lower than the threshold and the volatility of its market capitalisation over the observed period was disproportionate to overall equity market volatility or its market capitalisation trajectory relative to market trends was inconsistent with a rapid and unidirectional growth.
Having a very high number of business users that depend on a core platform service to reach a very high number of monthly active end users enables the undertaking providing that service to influence the operations of a substantial part of business users to its advantage and indicate, in principle, that that undertaking is an important gateway. The respective relevant levels for those numbers should be set representing a substantive percentage of the entire population of the Union when it comes to end users and of the entire population of businesses using core platform services to determine the threshold for business users. Active end users and business users should be identified and calculated in such a way as to adequately represent the role and reach of the specific core platform service in question. In order to provide legal certainty for gatekeepers, the elements to determine the number of active end users and business users per core platform service should be set out in an Annex to this Regulation. Such elements can be affected by technological and other developments. The Commission should therefore be empowered to adopt delegated acts to amend this Regulation by updating the methodology and the list of indicators used to determine the number of active end users and active business users.
An entrenched and durable position in its operations or the foreseeability of enjoying such a position in the future occurs notably where the contestability of the position of the undertaking providing the core platform service is limited. This is likely to be the case where that undertaking has provided a core platform service in at least three Member States to a very high number of business users and end users over a period of at least 3 years.
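The recitals describe the designation presumption qualitatively; the concrete figures are laid down in Article 3 of the Regulation. The sketch below only mirrors the structure of the three-limb test (significant impact on the internal market, important gateway, entrenched and durable position); the default threshold values are illustrative assumptions to be checked against Article 3, not figures taken from the recitals.

```python
# Structural sketch of the designation presumption described in the recitals above.
# The three limbs (significant impact on the internal market, important gateway,
# entrenched and durable position) follow the recitals; the numeric thresholds are
# laid down in Article 3 of the Regulation and are passed in as parameters here.
# The default values below are illustrative assumptions, to be verified against Article 3.
from dataclasses import dataclass
from typing import List


@dataclass
class Undertaking:
    member_states_with_cps: int        # Member States in which the core platform service is provided
    union_turnover_eur: List[float]    # annual Union turnover, most recent financial year first
    market_cap_eur: float              # market capitalisation or equivalent fair market value
    monthly_end_users: List[int]       # monthly active end users in the Union, per financial year
    yearly_business_users: List[int]   # yearly active business users in the Union, per financial year


def presumed_gatekeeper(u: Undertaking,
                        turnover_threshold: float = 7_500_000_000,
                        market_cap_threshold: float = 75_000_000_000,
                        end_user_threshold: int = 45_000_000,
                        business_user_threshold: int = 10_000,
                        years: int = 3) -> bool:
    # (a) significant impact on the internal market: service in at least three Member States
    #     plus either a sustained high Union turnover or a high market capitalisation
    significant_impact = u.member_states_with_cps >= 3 and (
        (len(u.union_turnover_eur) >= years
         and all(t >= turnover_threshold for t in u.union_turnover_eur[:years]))
        or u.market_cap_eur >= market_cap_threshold
    )
    # (b) important gateway for business users to reach end users: user thresholds met now
    per_year = [e >= end_user_threshold and b >= business_user_threshold
                for e, b in zip(u.monthly_end_users, u.yearly_business_users)]
    important_gateway = bool(per_year) and per_year[0]
    # (c) entrenched and durable position: the gateway thresholds were met in each of
    #     the last `years` financial years
    entrenched = len(per_year) >= years and all(per_year[:years])
    return significant_impact and important_gateway and entrenched
```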
Such thresholds can be affected by market and technical developments. The Commission should therefore be empowered to adopt delegated acts to specify the methodology for determining whether the quantitative thresholds are met, and to regularly adjust it to market and technological developments where necessary. Such delegated acts should not amend the quantitative thresholds set out in this Regulation.
An undertaking providing core platform services should be able, in exceptional circumstances, to rebut the presumption that the undertaking has a significant impact on the internal market by demonstrating that, although it meets the quantitative thresholds set out in this Regulation, it does not fulfil the requirements for designation as a gatekeeper. The burden of adducing evidence that the presumption deriving from the fulfilment of the quantitative thresholds should not apply should be borne by that undertaking. In its assessment of the evidence and arguments produced, the Commission should take into account only those elements which directly relate to the quantitative criteria, namely the impact of the undertaking providing core platform services on the internal market beyond revenue or market cap, such as its size in absolute terms, and the number of Member States in which it is present; by how much the actual business user and end user numbers exceed the thresholds and the importance of the undertaking’s core platform service considering the overall scale of activities of the respective core platform service; and the number of years for which the thresholds have been met. Any justification on economic grounds seeking to enter into market definition or to demonstrate efficiencies deriving from a specific type of behaviour by the undertaking providing core platform services should be discarded, as it is not relevant to the designation as a gatekeeper. If the arguments submitted are not sufficiently substantiated because they do not manifestly put into question the presumption, it should be possible for the Commission to reject the arguments within the timeframe of 45 working days provided for the designation. The Commission should be able to take a decision by relying on information available on the quantitative thresholds where the undertaking providing core platform services obstructs the investigation by failing to comply with the investigative measures taken by the Commission.
Provision should also be made for the assessment of the gatekeeper role of undertakings providing core platform services which do not satisfy all of the quantitative thresholds, in light of the overall objective requirements that they have a significant impact on the internal market, act as an important gateway for business users to reach end users and benefit from an entrenched and durable position in their operations or it is foreseeable that they will do so in the near future. When the undertaking providing core platform services is a medium-sized, small or micro enterprise, the assessment should carefully take into account whether such an undertaking would be able to substantially undermine the contestability of the core platform services, since this Regulation primarily targets large undertakings with considerable economic power rather than medium-sized, small or micro enterprises.
Such an assessment can only be done in light of a market investigation, while taking into account the quantitative thresholds. In its assessment the Commission should pursue the objectives of preserving and fostering innovation and the quality of digital products and services, the degree to which prices are fair and competitive, and the degree to which quality or choice for business users and for end users is or remains high. Elements can be taken into account that are specific to the undertakings providing core platform services concerned, such as extreme scale or scope economies, very strong network effects, data-driven advantages, an ability to connect many business users with many end users through the multisidedness of those services, lock-in effects, lack of multi-homing, conglomerate corporate structure or vertical integration. In addition, a very high market capitalisation, a very high ratio of equity value over profit or a very high turnover derived from end users of a single core platform service can be used as indicators of the leveraging potential of such undertakings and of the tipping of the market in their favour. Together with market capitalisation, high relative growth rates are examples of dynamic parameters that are particularly relevant to identifying such undertakings providing core platform services for which it is foreseeable that they will become entrenched and durable. The Commission should be able to take a decision by drawing adverse inferences from facts available where the undertaking providing core platform services significantly obstructs the investigation by failing to comply with the investigative measures taken by the Commission.
A particular subset of rules should apply to those undertakings providing core platform services for which it is foreseeable that they will enjoy an entrenched and durable position in the near future. The same specific features of core platform services make them prone to tipping: once an undertaking providing the core platform service has obtained a certain advantage over rivals or potential challengers in terms of scale or intermediation power, its position could become unassailable and the situation could evolve to the point that it is likely to become entrenched and durable in the near future. Undertakings can try to induce this tipping and emerge as gatekeeper by using some of the unfair conditions and practices regulated under this Regulation. In such a situation, it appears appropriate to intervene before the market tips irreversibly.
However, such early intervention should be limited to imposing only those obligations that are necessary and appropriate to ensure that the services in question remain contestable and enable the qualified risk of unfair conditions and practices to be avoided. Obligations that prevent the undertaking providing core platform services concerned from enjoying an entrenched and durable position in its operations, such as those preventing leveraging, and those that facilitate switching and multi-homing are more directly geared towards this purpose. To ensure proportionality, the Commission should moreover apply from that subset of obligations only those that are necessary and proportionate to achieve the objectives of this Regulation and should regularly review whether such obligations should be maintained, suppressed or adapted.
Applying only those obligations that are necessary and proportionate to achieve the objectives of this Regulation should allow the Commission to intervene in time and effectively, while fully respecting the proportionality of the measures considered. It should also reassure actual or potential market participants about the contestability and fairness of the services concerned.
Gatekeepers should comply with the obligations laid down in this Regulation in respect of each of the core platform services listed in the relevant designation decision. The obligations should apply taking into account the conglomerate position of gatekeepers, where applicable. Furthermore, it should be possible for the Commission to impose implementing measures on the gatekeeper by decision. Those implementing measures should be designed in an effective manner, having regard to the features of core platform services and the possible circumvention risks, and in compliance with the principle of proportionality and the fundamental rights of the undertakings concerned, as well as those of third parties.
The very rapidly changing and complex technological nature of core platform services requires a regular review of the status of gatekeepers, including those that it is foreseen will enjoy an entrenched and durable position in their operations in the near future. To provide all of the market participants, including the gatekeepers, with the required certainty as to the applicable legal obligations, a time limit for such regular reviews is necessary. It is also important to conduct such reviews on a regular basis and at least every 3 years. Furthermore, it is important to clarify that not every change in the facts on the basis of which an undertaking providing core platform services was designated as a gatekeeper should require amendment of the designation decision. Amendment will only be necessary if the change in the facts also leads to a change in the assessment. Whether or not that is the case should be based on a case-by-case assessment of the facts and circumstances.
To safeguard the contestability and fairness of core platform services provided by gatekeepers, it is necessary to provide in a clear and unambiguous manner for a set of harmonised rules with regard to those services. Such rules are needed to address the risk of harmful effects of practices by gatekeepers, to the benefit of the business environment in the services concerned, of users and ultimately of society as a whole. The obligations correspond to those practices that are considered as undermining contestability or as being unfair, or both, when taking into account the features of the digital sector and which have a particularly negative direct impact on business users and end users. It should be possible for the obligations laid down by this Regulation to specifically take into account the nature of the core platform services provided. The obligations in this Regulation should not only ensure contestability and fairness with respect to core platform services listed in the designation decision, but also with respect to other digital products and services into which gatekeepers leverage their gateway position, which are often provided together with, or in support of, the core platform services.
For the purpose of this Regulation, contestability should relate to the ability of undertakings to effectively overcome barriers to entry and expansion and challenge the gatekeeper on the merits of their products and services. The features of core platform services in the digital sector, such as network effects, strong economies of scale, and benefits from data have limited the contestability of those services and the related ecosystems. Such a weak contestability reduces the incentives to innovate and improve products and services for the gatekeeper, its business users, its challengers and customers and thus negatively affects the innovation potential of the wider online platform economy. Contestability of the services in the digital sector can also be limited if there is more than one gatekeeper for a core platform service. This Regulation should therefore ban certain practices by gatekeepers that are liable to increase barriers to entry or expansion, and impose certain obligations on gatekeepers that tend to lower those barriers. The obligations should also address situations where the position of the gatekeeper may be entrenched to such an extent that inter-platform competition is not effective in the short term, meaning that intra-platform competition needs to be created or increased.
For the purpose of this Regulation, unfairness should relate to an imbalance between the rights and obligations of business users where the gatekeeper obtains a disproportionate advantage. Market participants, including business users of core platform services and alternative providers of services provided together with, or in support of, such core platform services, should have the ability to adequately capture the benefits resulting from their innovative or other efforts. Due to their gateway position and superior bargaining power, it is possible that gatekeepers engage in behaviour that does not allow others to capture fully the benefits of their own contributions, and unilaterally set unbalanced conditions for the use of their core platform services or services provided together with, or in support of, their core platform services. Such imbalance is not excluded by the fact that the gatekeeper offers a particular service free of charge to a specific group of users, and may also consist in excluding or discriminating against business users, in particular if the latter compete with the services provided by the gatekeeper. This Regulation should therefore impose obligations on gatekeepers addressing such behaviour.
Contestability and fairness are intertwined. The lack of, or weak, contestability for a certain service can enable a gatekeeper to engage in unfair practices. Similarly, unfair practices by a gatekeeper can reduce the possibility of business users or others to contest the gatekeeper’s position. A particular obligation in this Regulation may, therefore, address both elements.
The obligations laid down in this Regulation are therefore necessary to address identified public policy concerns, there being no alternative and less restrictive measures that would effectively achieve the same result, having regard to the need to safeguard public order, protect privacy and fight fraudulent and deceptive commercial practices.
Gatekeepers often directly collect personal data of end users for the purpose of providing online advertising services when end users use third-party websites and software applications. Third parties also provide gatekeepers with personal data of their end users in order to make use of certain services provided by the gatekeepers in the context of their core platform services, such as custom audiences. The processing, for the purpose of providing online advertising services, of personal data from third parties using core platform services gives gatekeepers potential advantages in terms of accumulation of data, thereby raising barriers to entry. This is because gatekeepers process personal data from a significantly larger number of third parties than other undertakings. Similar advantages result from the conduct of (i) combining end user personal data collected from a core platform service with data collected from other services; (ii) cross-using personal data from a core platform service in other services provided separately by the gatekeeper, notably services which are not provided together with, or in support of, the relevant core platform service, and vice versa; or (iii) signing-in end users to different services of gatekeepers in order to combine personal data. To ensure that gatekeepers do not unfairly undermine the contestability of core platform services, gatekeepers should enable end users to freely choose to opt-in to such data processing and sign-in practices by offering a less personalised but equivalent alternative, and without making the use of the core platform service or certain functionalities thereof conditional upon the end user’s consent. This should be without prejudice to the gatekeeper processing personal data or signing in end users to a service, relying on the legal basis under Article 6(1), points (c), (d) and (e), of Regulation (EU) 2016/679, but not on Article 6(1), points (b) and (f) of that Regulation.
The less personalised alternative should not be different or of degraded quality compared to the service provided to the end users who provide consent, unless a degradation of quality is a direct consequence of the gatekeeper not being able to process such personal data or signing in end users to a service. Not giving consent should not be more difficult than giving consent. When the gatekeeper requests consent, it should proactively present a user-friendly solution to the end user to provide, modify or withdraw consent in an explicit, clear and straightforward manner. In particular, consent should be given by a clear affirmative action or statement establishing a freely given, specific, informed and unambiguous indication of agreement by the end user, as defined in Regulation (EU) 2016/679. At the time of giving consent, and only where applicable, the end user should be informed that not giving consent can lead to a less personalised offer, but that otherwise the core platform service will remain unchanged and that no functionalities will be suppressed. Exceptionally, if consent cannot be given directly to the gatekeeper’s core platform service, end users should be able to give consent through each third-party service that makes use of that core platform service, to allow the gatekeeper to process personal data for the purposes of providing online advertising services. Lastly, it should be as easy to withdraw consent as to give it. Gatekeepers should not design, organise or operate their online interfaces in a way that deceives, manipulates or otherwise materially distorts or impairs the ability of end users to freely give consent. In particular, gatekeepers should not be allowed to prompt end users more than once a year to give consent for the same processing purpose in respect of which they initially did not give consent or withdrew their consent. This Regulation is without prejudice to Regulation (EU) 2016/679, including its enforcement framework, which remains fully applicable with respect to any claims by data subjects relating to an infringement of their rights under that Regulation.
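As a minimal sketch of the consent bookkeeping this recital implies (an explicit opt-in per processing purpose, withdrawal as easy as consent, and no re-prompt within a year for a purpose the end user refused or withdrew), using hypothetical class and method names that do not appear in the Regulation:

```python
# Minimal sketch of the consent bookkeeping implied by the recital above: consent is an
# explicit opt-in per processing purpose, withdrawal is a single call, and an end user
# who refused or withdrew consent is not prompted again for the same purpose within one
# year. Class and method names are illustrative assumptions, not terms from the Regulation.
from datetime import datetime, timedelta
from typing import Dict, Optional, Tuple


class ConsentLedger:
    REPROMPT_INTERVAL = timedelta(days=365)

    def __init__(self) -> None:
        self._consents: Dict[Tuple[str, str], bool] = {}
        self._last_refusal: Dict[Tuple[str, str], datetime] = {}

    def may_prompt(self, user_id: str, purpose: str, now: Optional[datetime] = None) -> bool:
        """A new consent prompt is allowed only if the user has not refused or withdrawn
        consent for this purpose within the last year."""
        now = now or datetime.utcnow()
        last = self._last_refusal.get((user_id, purpose))
        return last is None or now - last >= self.REPROMPT_INTERVAL

    def record_choice(self, user_id: str, purpose: str, granted: bool,
                      now: Optional[datetime] = None) -> None:
        self._consents[(user_id, purpose)] = granted
        if not granted:
            self._last_refusal[(user_id, purpose)] = now or datetime.utcnow()

    def withdraw(self, user_id: str, purpose: str) -> None:
        """Withdrawing consent must be as easy as giving it: a single call."""
        self.record_choice(user_id, purpose, granted=False)

    def has_consent(self, user_id: str, purpose: str) -> bool:
        """Where this returns False, the less personalised but equivalent alternative
        should be served rather than making the service conditional on consent."""
        return self._consents.get((user_id, purpose), False)
```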
Children merit specific protection with regard to their personal data, in particular as regards the use of their personal data for the purposes of commercial communication or creating user profiles. The protection of children online is an important objective of the Union and should be reflected in the relevant Union law. In this context, due regard should be given to a Regulation on a single market for digital services. Nothing in this Regulation exempts gatekeepers from the obligation to protect children laid down in applicable Union law.
In certain cases, for instance through the imposition of contractual terms and conditions, gatekeepers can restrict the ability of business users of their online intermediation services to offer products or services to end users under more favourable conditions, including price, through other online intermediation services or through direct online sales channels. Where such restrictions relate to third-party online intermediation services, they limit inter-platform contestability, which in turn limits choice of alternative online intermediation services for end users. Where such restrictions relate to direct online sales channels, they unfairly limit the freedom of business users to use such channels. To ensure that business users of online intermediation services of gatekeepers can freely choose alternative online intermediation services or direct online sales channels and differentiate the conditions under which they offer their products or services to end users, it should not be accepted that gatekeepers limit business users from choosing to differentiate commercial conditions, including price. Such a restriction should apply to any measure with equivalent effect, such as increased commission rates or de-listing of the offers of business users.
To prevent further reinforcing their dependence on the core platform services of gatekeepers, and in order to promote multi-homing, the business users of those gatekeepers should be free to promote and choose the distribution channel that they consider most appropriate for the purpose of interacting with any end users that those business users have already acquired through core platform services provided by the gatekeeper or through other channels. This should apply to the promotion of offers, including through a software application of the business user, and any form of communication and conclusion of contracts between business users and end users. An acquired end user is an end user who has already entered into a commercial relationship with the business user and, where applicable, the gatekeeper has been directly or indirectly remunerated by the business user for facilitating the initial acquisition of the end user by the business user. Such commercial relationships can be on either a paid or a free basis, such as free trials or free service tiers, and can have been entered into either on the core platform service of the gatekeeper or through any other channel. Conversely, end users should also be free to choose offers of such business users and to enter into contracts with them either through core platform services of the gatekeeper, if applicable, or from a direct distribution channel of the business user or another indirect channel that such business user uses.
The ability of end users to acquire content, subscriptions, features or other items outside the core platform services of the gatekeeper should not be undermined or restricted. In particular, a situation should be avoided whereby gatekeepers restrict end users from access to, and use of, such services via a software application running on their core platform service. For example, subscribers to online content purchased outside a software application, software application store or virtual assistant should not be prevented from accessing such online content on a software application on the core platform service of the gatekeeper simply because it was purchased outside such software application, software application store or virtual assistant.
To safeguard a fair commercial environment and protect the contestability of the digital sector it is important to safeguard the right of business users and end users, including whistleblowers, to raise concerns about unfair practices by gatekeepers raising any issue of non-compliance with the relevant Union or national law with any relevant administrative or other public authorities, including national courts. For example, it is possible that business users or end users will want to complain about different types of unfair practices, such as discriminatory access conditions, unjustified closing of business user accounts or unclear grounds for product de-listings. Any practice that would in any way inhibit or hinder those users in raising their concerns or in seeking available redress, for instance by means of confidentiality clauses in agreements or other written terms, should therefore be prohibited. This prohibition should be without prejudice to the right of business users and gatekeepers to lay down in their agreements the terms of use including the use of lawful complaints-handling mechanisms, including any use of alternative dispute resolution mechanisms or of the jurisdiction of specific courts in compliance with respective Union and national law. This should also be without prejudice to the role gatekeepers play in the fight against illegal content online.
Certain services provided together with, or in support of, relevant core platform services of the gatekeeper, such as identification services, web browser engines, payment services or technical services that support the provision of payment services, such as payment systems for in-app purchases, are crucial for business users to conduct their business and allow them to optimise services. In particular, each browser is built on a web browser engine, which is responsible for key browser functionality such as speed, reliability and web compatibility. When gatekeepers operate and impose web browser engines, they are in a position to determine the functionality and standards that will apply not only to their own web browsers, but also to competing web browsers and, in turn, to web software applications. Gatekeepers should therefore not use their position to require their dependent business users to use any of the services provided together with, or in support of, core platform services by the gatekeeper itself as part of the provision of services or products by those business users. In order to avoid a situation in which gatekeepers indirectly impose on business users their own services provided together with, or in support of, core platform services, gatekeepers should also be prohibited from requiring end users to use such services, when that requirement would be imposed in the context of the service provided to end users by the business user using the core platform service of the gatekeeper. That prohibition aims to protect the freedom of the business user to choose alternative services to the ones of the gatekeeper, but should not be construed as obliging the business user to offer such alternatives to its end users.
The conduct of requiring business users or end users to subscribe to, or register with, any other core platform services of gatekeepers listed in the designation decision or which meet the thresholds of active end users and business users set out in this Regulation, as a condition for using, accessing, signing up for or registering with a core platform service gives the gatekeepers a means of capturing and locking-in new business users and end users for their core platform services by ensuring that business users cannot access one core platform service without also at least registering or creating an account for the purposes of receiving a second core platform service. That conduct also gives gatekeepers a potential advantage in terms of accumulation of data. As such, this conduct is liable to raise barriers to entry and should be prohibited.
The conditions under which gatekeepers provide online advertising services to business users, including both advertisers and publishers, are often non-transparent and opaque. This opacity is partly linked to the practices of a few platforms, but is also due to the sheer complexity of modern day programmatic advertising. That sector is considered to have become less transparent after the introduction of new privacy legislation. This often leads to a lack of information and knowledge for advertisers and publishers about the conditions of the online advertising services they purchase and undermines their ability to switch between undertakings providing online advertising services. Furthermore, the costs of online advertising services under these conditions are likely to be higher than they would be in a fairer, more transparent and contestable platform environment. Those higher costs are likely to be reflected in the prices that end users pay for many daily products and services relying on the use of online advertising services. Transparency obligations should therefore require gatekeepers to provide advertisers and publishers to whom they supply online advertising services, when requested, with free of charge information that allows both sides to understand the price paid for each of the different online advertising services provided as part of the relevant advertising value chain. This information should be provided, upon request, to an advertiser at the level of an individual advertisement in relation to the price and fees charged to that advertiser and, subject to an agreement by the publisher owning the inventory where the advertisement is displayed, the remuneration received by that consenting publisher. The provision of this information on a daily basis will allow advertisers to receive information that has a sufficient level of granularity necessary to compare the costs of using the online advertising services of gatekeepers with the costs of using online advertising services of alternative undertakings. Where some publishers do not provide their consent to the sharing of the relevant information with the advertiser, the gatekeeper should provide the advertiser with the information about the daily average remuneration received by those publishers for the relevant advertisements. The same obligation and principles of sharing the relevant information concerning the provision of online advertising services should apply in respect of requests by publishers. Since gatekeepers can use different pricing models for the provision of online advertising services to advertisers and publishers, for instance a price per impression, per view or any other criterion, gatekeepers should also provide the method with which each of the prices and remunerations are calculated.
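The transparency obligation is expressed in terms of the information to be made available rather than a format. Below is a sketch of a per-advertisement disclosure record with hypothetical field names, covering the price and fees charged to the advertiser, the remuneration of a consenting publisher (or the daily average where consent was refused) and the method of calculation:

```python
# Illustrative sketch of the per-advertisement disclosure described above. Field names
# are assumptions; the recital only requires that, on request and on a daily basis, an
# advertiser can see the price and fees it paid per advertisement, the remuneration of a
# consenting publisher (or the daily average otherwise), and how each amount was calculated.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class AdvertisingDisclosure:
    day: date
    advertisement_id: str
    price_paid_by_advertiser: float        # per individual advertisement
    fees_charged_to_advertiser: float
    pricing_model: str                     # e.g. per impression, per view, or another criterion
    calculation_method: str                # how the price, fees and remuneration were derived
    publisher_remuneration: Optional[float] = None                # only with the publisher's consent
    daily_average_publisher_remuneration: Optional[float] = None  # fallback where consent was not given
```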
In certain circumstances, a gatekeeper has a dual role as an undertaking providing core platform services, whereby it provides a core platform service, and possibly other services provided together with, or in support of, that core platform service to its business users, while also competing or intending to compete with those same business users in the provision of the same or similar services or products to the same end users. In those circumstances, a gatekeeper can take advantage of its dual role to use data, generated or provided by its business users in the context of activities by those business users when using the core platform services or the services provided together with, or in support of, those core platform services, for the purpose of its own services or products. The data of the business user can also include any data generated by or provided during the activities of its end users. This can be the case, for instance, where a gatekeeper provides an online marketplace or a software application store to business users, and at the same time provides services as an undertaking providing online retail services or software applications. To prevent gatekeepers from unfairly benefitting from their dual role, it is necessary to ensure that they do not use any aggregated or non-aggregated data, which could include anonymised and personal data that is not publicly available to provide similar services to those of their business users. That obligation should apply to the gatekeeper as a whole, including but not limited to its business unit that competes with the business users of a core platform service.
Business users can also purchase online advertising services from an undertaking providing core platform services for the purpose of providing goods and services to end users. In this case, it can happen that the data are not generated on the core platform service, but are provided to the core platform service by the business user or are generated based on its operations through the core platform service concerned. In certain instances, that core platform service providing advertising can have a dual role as both an undertaking providing online advertising services and an undertaking providing services competing with business users. Accordingly, the obligation prohibiting a dual role gatekeeper from using data of business users should apply also with respect to the data that a core platform service has received from businesses for the purpose of providing online advertising services related to that core platform service.
In relation to cloud computing services, the obligation not to use the data of business users should extend to data provided or generated by business users of the gatekeeper in the context of their use of the cloud computing service of the gatekeeper, or through its software application store that allows end users of cloud computing services access to software applications. That obligation should not affect the right of the gatekeeper to use aggregated data for providing other services provided together with, or in support of, its core platform service, such as data analytics services, subject to compliance with Regulation (EU) 2016/679 and Directive 2002/58/EC, as well as with the relevant obligations in this Regulation concerning such services.
A gatekeeper can use different means to favour its own or third-party services or products on its operating system, virtual assistant or web browser, to the detriment of the same or similar services that end users could obtain through other third parties. This can for instance happen where certain software applications or services are pre-installed by a gatekeeper. To enable end user choice, gatekeepers should not prevent end users from un-installing any software applications on their operating system. It should be possible for the gatekeeper to restrict such un-installation only when such software applications are essential to the functioning of the operating system or the device. Gatekeepers should also allow end users to easily change the default settings on the operating system, virtual assistant and web browser when those default settings favour their own software applications and services. This includes prompting a choice screen, at the moment of the users’ first use of an online search engine, virtual assistant or web browser of the gatekeeper listed in the designation decision, allowing end users to select an alternative default service when the operating system of the gatekeeper directs end users to that online search engine, virtual assistant or web browser and when the virtual assistant or the web browser of the gatekeeper directs the user to the online search engine listed in the designation decision.
The rules that a gatekeeper sets for the distribution of software applications can, in certain circumstances, restrict the ability of end users to install and effectively use third-party software applications or software application stores on hardware or operating systems of that gatekeeper and restrict the ability of end users to access such software applications or software application stores outside the core platform services of that gatekeeper. Such restrictions can limit the ability of developers of software applications to use alternative distribution channels and the ability of end users to choose between different software applications from different distribution channels and should be prohibited as unfair and liable to weaken the contestability of core platform services. To ensure contestability, the gatekeeper should furthermore allow the third-party software applications or software application stores to prompt the end user to decide whether that service should become the default and enable that change to be carried out easily. In order to ensure that third-party software applications or software application stores do not endanger the integrity of the hardware or operating system provided by the gatekeeper, it should be possible for the gatekeeper concerned to implement proportionate technical or contractual measures to achieve that goal if the gatekeeper demonstrates that such measures are necessary and justified and that there are no less-restrictive means to safeguard the integrity of the hardware or operating system. The integrity of the hardware or the operating system should include any design options that need to be implemented and maintained in order for the hardware or the operating system to be protected against unauthorised access, by ensuring that security controls specified for the hardware or the operating system concerned cannot be compromised. Furthermore, in order to ensure that third-party software applications or software application stores do not undermine end users’ security, it should be possible for the gatekeeper to implement strictly necessary and proportionate measures and settings, other than default settings, enabling end users to effectively protect security in relation to third-party software applications or software application stores if the gatekeeper demonstrates that such measures and settings are strictly necessary and justified and that there are no less-restrictive means to achieve that goal. The gatekeeper should be prevented from implementing such measures as a default setting or as pre-installation.
Gatekeepers are often vertically integrated and offer certain products or services to end users through their own core platform services, or through a business user over which they exercise control which frequently leads to conflicts of interest. This can include the situation whereby a gatekeeper provides its own online intermediation services through an online search engine. When offering those products or services on the core platform service, gatekeepers can reserve a better position, in terms of ranking, and related indexing and crawling, for their own offering than that of the products or services of third parties also operating on that core platform service. This can occur for instance with products or services, including other core platform services, which are ranked in the results communicated by online search engines, or which are partly or entirely embedded in online search engines results, groups of results specialised in a certain topic, displayed along with the results of an online search engine, which are considered or used by certain end users as a service distinct or additional to the online search engine. Other instances are those of software applications which are distributed through software application stores, or videos distributed through a video-sharing platform, or products or services that are given prominence and display in the newsfeed of an online social networking service, or products or services ranked in search results or displayed on an online marketplace, or products or services offered through a virtual assistant. Such reserving of a better position of gatekeeper’s own offering can take place even before ranking following a query, such as during crawling and indexing. For example, already during crawling, as a discovery process by which new and updated content is being found, as well as indexing, which entails storing and organising of the content found during the crawling process, the gatekeeper can favour its own content over that of third parties. In those circumstances, the gatekeeper is in a dual-role position as intermediary for third-party undertakings and as undertaking directly providing products or services. Consequently, such gatekeepers have the ability to undermine directly the contestability for those products or services on those core platform services, to the detriment of business users which are not controlled by the gatekeeper.
In such situations, the gatekeeper should not engage in any form of differentiated or preferential treatment in ranking on the core platform service, and related indexing and crawling, whether through legal, commercial or technical means, in favour of products or services it offers itself or through a business user which it controls. To ensure that this obligation is effective, the conditions that apply to such ranking should also be generally fair and transparent. Ranking should in this context cover all forms of relative prominence, including display, rating, linking or voice results and should also include instances where a core platform service presents or communicates only one result to the end user. To ensure that this obligation is effective and cannot be circumvented, it should also apply to any measure that has an equivalent effect to the differentiated or preferential treatment in ranking. The guidelines adopted pursuant to Article 5 of Regulation (EU) 2019/1150 should also facilitate the implementation and enforcement of this obligation.
Gatekeepers should not restrict or prevent the free choice of end users by technically or otherwise preventing switching between or subscription to different software applications and services. This would allow more undertakings to offer their services, thereby ultimately providing greater choice to the end users. Gatekeepers should ensure a free choice irrespective of whether they are the manufacturer of any hardware by means of which such software applications or services are accessed and should not raise artificial technical or other barriers so as to make switching impossible or ineffective. The mere offering of a given product or service to consumers, including by means of pre-installation, as well as the improvement of the offering to end users, such as price reductions or increased quality, should not be construed as constituting a prohibited barrier to switching.
Gatekeepers can hamper the ability of end users to access online content and services, including software applications. Therefore, rules should be established to ensure that the rights of end users to access an open internet are not compromised by the conduct of gatekeepers. Gatekeepers can also technically limit the ability of end users to effectively switch between different undertakings providing internet access service, in particular through their control over hardware or operating systems. This distorts the level playing field for internet access services and ultimately harms end users. It should therefore be ensured that gatekeepers do not unduly restrict end users in choosing the undertaking providing their internet access service.
A gatekeeper can provide services or hardware, such as wearable devices, that access hardware or software features of a device accessed or controlled via an operating system or virtual assistant in order to offer specific functionalities to end users. In that case, competing service or hardware providers, such as providers of wearable devices, require equally effective interoperability with, and access for the purposes of interoperability to, the same hardware or software features to be able to provide a competitive offering to end users.
Gatekeepers can also have a dual role as developers of operating systems and device manufacturers, including any technical functionality that such a device may have. For example, a gatekeeper that is a manufacturer of a device can restrict access to some of the functionalities in that device, such as near-field-communication technology, secure elements and processors, authentication mechanisms and the software used to operate those technologies, which can be required for the effective provision of a service provided together with, or in support of, the core platform service by the gatekeeper as well as by any potential third-party undertaking providing such service.
If dual roles are used in a manner that prevents alternative service and hardware providers from having access under equal conditions to the same operating system, hardware or software features that are available or used by the gatekeeper in the provision of its own complementary or supporting services or hardware, this could significantly undermine innovation by such alternative providers, as well as choice for end users. The gatekeepers should, therefore, be required to ensure, free of charge, effective interoperability with, and access for the purposes of interoperability to, the same operating system, hardware or software features that are available or used in the provision of its own complementary and supporting services and hardware. Such access can equally be required by software applications related to the relevant services provided together with, or in support of, the core platform service in order to effectively develop and provide functionalities interoperable with those provided by gatekeepers. The aim of the obligations is to allow competing third parties to interconnect through interfaces or similar solutions to the respective features as effectively as the gatekeeper’s own services or hardware.
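As a rough illustration of what “equally effective” access could mean in practice, the sketch below models a hypothetical feature-access layer that exposes the same device capabilities, under the same decision logic, to the gatekeeper’s own services and to third-party providers. The capability names and the broker interface are assumptions made for illustration only, not a prescribed mechanism.

```python
# Minimal sketch of non-discriminatory feature access: the same capability set
# available to the gatekeeper's own services is exposed, free of charge, to
# third-party providers. Names and the capability list are illustrative.
DEVICE_FEATURES = {"nfc", "secure_element", "biometric_authentication"}

class FeatureAccessBroker:
    def __init__(self, features: set):
        self.features = features

    def grant(self, caller: str, feature: str) -> bool:
        """Grant access under the same conditions regardless of whether the
        caller is the gatekeeper's own service or a third-party provider."""
        if feature not in self.features:
            raise KeyError(f"unknown feature: {feature}")
        return True  # no first-party/third-party distinction in the decision

broker = FeatureAccessBroker(DEVICE_FEATURES)
# A third-party wallet obtains the same answer as the gatekeeper's own wallet.
assert broker.grant("gatekeeper_wallet", "nfc") == broker.grant("third_party_wallet", "nfc")
```

Any proportionate integrity or security measure permitted elsewhere in the Regulation would sit on top of such a layer; the point illustrated here is only that the access decision itself does not depend on who is asking.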
The conditions under which gatekeepers provide online advertising services to business users, including both advertisers and publishers, are often non-transparent and opaque. This often leads to a lack of information for advertisers and publishers about the effect of a given advertisement. To further enhance fairness, transparency and contestability of online advertising services listed in the designation decision, as well as those that are fully integrated with other core platform services of the same undertaking, gatekeepers should provide advertisers and publishers, and third parties authorised by advertisers and publishers, when requested, with free of charge access to the gatekeepers’ performance measuring tools and the data, including aggregated and non-aggregated data, necessary for advertisers, authorised third parties such as advertising agencies acting on behalf of a company placing advertising, as well as for publishers to carry out their own independent verification of the provision of the relevant online advertising services.
Gatekeepers benefit from access to vast amounts of data that they collect while providing the core platform services, as well as other digital services. To ensure that gatekeepers do not undermine the contestability of core platform services, or the innovation potential of the dynamic digital sector, by restricting switching or multi-homing, end users, as well as third parties authorised by an end user, should be granted effective and immediate access to the data they provided or that was generated through their activity on the relevant core platform services of the gatekeeper. The data should be received in a format that can be immediately and effectively accessed and used by the end user or the relevant third party authorised by the end user to which the data is ported. Gatekeepers should also ensure, by means of appropriate and high quality technical measures, such as application programming interfaces, that end users or third parties authorised by end users can freely port the data continuously and in real time. This should apply also to any other data at different levels of aggregation necessary to effectively enable such portability. For the avoidance of doubt, the obligation on the gatekeeper to ensure effective portability of data under this Regulation complements the right to data portability under the Regulation (EU) 2016/679. Facilitating switching or multi-homing should lead, in turn, to an increased choice for end users and acts as an incentive for gatekeepers and business users to innovate.
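A minimal sketch of continuous, machine-readable portability follows, assuming an append-only event log and a cursor-based export that an authorised third party can poll incrementally. The Regulation requires effective, real-time portability but does not define an API, so the format and mechanism shown are illustrative assumptions.

```python
import json
import time
from typing import Iterator

# Purely illustrative sketch of continuous, machine-readable data portability.
EVENT_LOG: list = []   # append-only store of end-user activity data

def record_event(user_id: str, kind: str, payload: dict) -> None:
    EVENT_LOG.append({
        "seq": len(EVENT_LOG),          # monotonically increasing cursor
        "ts": time.time(),
        "user_id": user_id,
        "kind": kind,
        "payload": payload,
    })

def export_events(user_id: str, after_seq: int = -1) -> Iterator[str]:
    """Yield the user's events newer than `after_seq` as JSON lines, so an
    authorised third party can resume incrementally instead of re-downloading."""
    for event in EVENT_LOG:
        if event["user_id"] == user_id and event["seq"] > after_seq:
            yield json.dumps(event)

# Example: an authorised third party resumes from the last cursor it saw.
record_event("user-1", "search", {"query": "running shoes"})
for line in export_events("user-1", after_seq=-1):
    print(line)
```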
Business users that use core platform services provided by gatekeepers, and end users of such business users provide and generate a vast amount of data. In order to ensure that business users have access to the relevant data thus generated, the gatekeeper should, upon their request, provide effective access, free of charge, to such data. Such access should also be given to third parties contracted by the business user, who are acting as processors of this data for the business user. The access should include access to data provided or generated by the same business users and the same end users of those business users in the context of other services provided by the same gatekeeper, including services provided together with or in support of core platform services, where this is inextricably linked to the relevant request. To this end, a gatekeeper should not use any contractual or other restrictions to prevent business users from accessing relevant data and should enable business users to obtain consent of their end users for such data access and retrieval, where such consent is required under Regulation (EU) 2016/679 and Directive 2002/58/EC. Gatekeepers should also ensure the continuous and real time access to such data by means of appropriate technical measures, for example by putting in place high quality application programming interfaces or integrated tools for small volume business users.
The value of online search engines to their respective business users and end users increases as the total number of such users increases. Undertakings providing online search engines collect and store aggregated datasets containing information about what users searched for, and how they interacted with, the results with which they were provided. Undertakings providing online search engines collect these data from searches undertaken on their own online search engine and, where applicable, searches undertaken on the platforms of their downstream commercial partners. Access by gatekeepers to such ranking, query, click and view data constitutes an important barrier to entry and expansion, which undermines the contestability of online search engines. Gatekeepers should therefore be required to provide access, on fair, reasonable and non-discriminatory terms, to those ranking, query, click and view data in relation to free and paid search generated by consumers on online search engines to other undertakings providing such services, so that those third-party undertakings can optimise their services and contest the relevant core platform services. Such access should also be given to third parties contracted by a provider of an online search engine, who are acting as processors of this data for that online search engine. When providing access to its search data, a gatekeeper should ensure the protection of the personal data of end users, including against possible re-identification risks, by appropriate means, such as anonymisation of such personal data, without substantially degrading the quality or usefulness of the data. The relevant data is anonymised if personal data is irreversibly altered in such a way that information does not relate to an identified or identifiable natural person or where personal data is rendered anonymous in such a manner that the data subject is not or is no longer identifiable.
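The sketch below illustrates one possible approach to sharing ranking, query, click and view data while protecting end users against re-identification: direct identifiers are dropped and only queries issued by a minimum number of distinct users are released. The Regulation requires anonymisation by appropriate means but does not mandate a technique, so the threshold-based aggregation used here is an assumption.

```python
from collections import defaultdict

# Illustrative only: one way to share query, click and view data with reduced
# re-identification risk; the minimum-user threshold below is an assumption.
K_THRESHOLD = 10  # release a query only if at least this many distinct users issued it

def anonymise_search_logs(raw_logs):
    """raw_logs rows: {"user_id", "query", "result_url", "clicked": bool}.
    Returns aggregated rows with all user identifiers removed."""
    users_per_query = defaultdict(set)
    stats = defaultdict(lambda: {"views": 0, "clicks": 0})
    for row in raw_logs:
        users_per_query[row["query"]].add(row["user_id"])
        key = (row["query"], row["result_url"])
        stats[key]["views"] += 1
        if row["clicked"]:
            stats[key]["clicks"] += 1
    released = []
    for (query, url), counts in stats.items():
        if len(users_per_query[query]) >= K_THRESHOLD:  # suppress rare queries
            released.append({"query": query, "result_url": url, **counts})
    return released
```

Suppressing rare queries keeps the released data useful for optimising ranking while avoiding rows that could be traced back to a single identifiable person.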
For software application stores, online search engines and online social networking services listed in the designation decision, gatekeepers should publish and apply general conditions of access that should be fair, reasonable and non-discriminatory. Those general conditions should provide for a Union based alternative dispute settlement mechanism that is easily accessible, impartial, independent and free of charge for the business user, without prejudice to the business user’s own cost and proportionate measures aimed at preventing the abuse of the dispute settlement mechanism by business users. The dispute settlement mechanism should be without prejudice to the right of business users to seek redress before judicial authorities in accordance with Union and national law. In particular, gatekeepers which provide access to software application stores are an important gateway for business users that seek to reach end users. In view of the imbalance in bargaining power between those gatekeepers and business users of their software application stores, those gatekeepers should not be allowed to impose general conditions, including pricing conditions, that would be unfair or lead to unjustified differentiation. Pricing or other general access conditions should be considered unfair if they lead to an imbalance of rights and obligations imposed on business users or confer an advantage on the gatekeeper which is disproportionate to the service provided by the gatekeeper to business users or lead to a disadvantage for business users in providing the same or similar services as the gatekeeper. The following benchmarks can serve as a yardstick to determine the fairness of general access conditions: prices charged or conditions imposed for the same or similar services by other providers of software application stores; prices charged or conditions imposed by the provider of the software application store for different related or similar services or to different types of end users; prices charged or conditions imposed by the provider of the software application store for the same service in different geographic regions; prices charged or conditions imposed by the provider of the software application store for the same service the gatekeeper provides to itself. This obligation should not establish an access right and it should be without prejudice to the ability of providers of software application stores, online search engines and online social networking services to take the required responsibility in the fight against illegal and unwanted content as set out in a Regulation on a single market for digital services.
Gatekeepers can hamper the ability of business users and end users to unsubscribe from a core platform service that they have previously subscribed to. Therefore, rules should be established to avoid a situation in which gatekeepers undermine the rights of business users and end users to freely choose which core platform service they use. To safeguard the free choice of business users and end users, a gatekeeper should not be allowed to make it unnecessarily difficult or complicated for business users or end users to unsubscribe from a core platform service. Closing an account or unsubscribing should not be made more complicated than opening an account or subscribing to the same service. Gatekeepers should not demand additional fees when terminating contracts with their end users or business users. Gatekeepers should ensure that the conditions for terminating contracts are always proportionate and can be exercised without undue difficulty by end users, such as, for example, in relation to the reasons for termination, the notice period, or the form of such termination. This is without prejudice to national legislation applicable in accordance with Union law laying down rights and obligations concerning conditions of termination of provision of core platform services by end users.
The lack of interoperability allows gatekeepers that provide number-independent interpersonal communications services to benefit from strong network effects, which contributes to the weakening of contestability. Furthermore, regardless of whether end users ‘multi-home’, gatekeepers often provide number-independent interpersonal communications services as part of their platform ecosystem, and this further exacerbates entry barriers for alternative providers of such services and increases costs for end users to switch. Without prejudice to Directive (EU) 2018/1972 of the European Parliament and of the Council [^14] and, in particular, the conditions and procedures laid down in Article 61 thereof, gatekeepers should therefore ensure, free of charge and upon request, interoperability with certain basic functionalities of their number-independent interpersonal communications services that they provide to their own end users, to third-party providers of such services. Gatekeepers should ensure interoperability for third-party providers of number-independent interpersonal communications services that offer or intend to offer their number-independent interpersonal communications services to end users and business users in the Union. To facilitate the practical implementation of such interoperability, the gatekeeper concerned should be required to publish a reference offer laying down the technical details and general terms and conditions of interoperability with its number-independent interpersonal communications services. It should be possible for the Commission, if applicable, to consult the Body of European Regulators for Electronic Communications, in order to determine whether the technical details and the general terms and conditions published in the reference offer that the gatekeeper intends to implement or has implemented ensures compliance with this obligation. In all cases, the gatekeeper and the requesting provider should ensure that interoperability does not undermine a high level of security and data protection in line with their obligations laid down in this Regulation and applicable Union law, in particular Regulation (EU) 2016/679 and Directive 2002/58/EC. The obligation related to interoperability should be without prejudice to the information and choices to be made available to end users of the number-independent interpersonal communication services of the gatekeeper and the requesting provider under this Regulation and other Union law, in particular Regulation (EU) 2016/679.
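By way of illustration, such a reference offer could be rendered in a machine-readable form along the following lines. The Regulation only requires that the technical details and general terms and conditions of interoperability be published, so the structure, the functionality names, the URLs and the numeric values below are assumptions for illustration, not prescribed content.

```python
# Hypothetical machine-readable rendering of an interoperability reference offer.
REFERENCE_OFFER = {
    "service": "ExampleMessenger",
    "functionalities": [
        {"name": "one_to_one_text", "endpoint": "https://interop.example.com/v1/messages"},
        {"name": "file_sharing",    "endpoint": "https://interop.example.com/v1/files"},
    ],
    "security": {
        "end_to_end_encryption": "required, same level as offered to own end users",
        "transport": "TLS 1.3",
    },
    "onboarding": {
        "request_form": "https://interop.example.com/request",
        "response_deadline_days": 90,   # assumed value for illustration only
    },
}

def supports(offer: dict, functionality: str) -> bool:
    """Check whether a requesting provider can rely on a given functionality."""
    return any(f["name"] == functionality for f in offer["functionalities"])

assert supports(REFERENCE_OFFER, "one_to_one_text")
```

Publishing the offer in a structured form of this kind would make it easier for requesting providers, and where applicable the Commission and the Body of European Regulators for Electronic Communications, to verify what is actually being offered.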
To ensure the effectiveness of the obligations laid down by this Regulation, while also making certain that those obligations are limited to what is necessary to ensure contestability and tackling the harmful effects of the unfair practices by gatekeepers, it is important to clearly define and circumscribe them so as to allow the gatekeeper to fully comply with them, whilst fully complying with applicable law, and in particular Regulation (EU) 2016/679 and Directive 2002/58/EC and legislation on consumer protection, cyber security, product safety and accessibility requirements, including Directive (EU) 2019/882 and Directive (EU) 2016/2102 of the European Parliament and of the Council [^15]. The gatekeepers should ensure the compliance with this Regulation by design. Therefore, the necessary measures should be integrated as much as possible into the technological design used by the gatekeepers. It may in certain cases be appropriate for the Commission, following a dialogue with the gatekeeper concerned and after enabling third parties to make comments, to further specify some of the measures that the gatekeeper concerned should adopt in order to effectively comply with obligations that are susceptible of being further specified or, in the event of circumvention, with all obligations. In particular, such further specification should be possible where the implementation of an obligation susceptible to being further specified can be affected by variations of services within a single category of core platform services. For this purpose, it should be possible for the gatekeeper to request the Commission to engage in a process whereby the Commission can further specify some of the measures that the gatekeeper concerned should adopt in order to effectively comply with those obligations. The Commission should have discretion as to whether and when such further specification should be provided, while respecting the principles of equal treatment, proportionality, and good administration. In this respect, the Commission should provide the main reasons underlying its assessment, including any enforcement priorities. This process should not be used to undermine the effectiveness of this Regulation. Furthermore, this process is without prejudice to the powers of the Commission to adopt a decision establishing non-compliance with any of the obligations laid down in this Regulation by a gatekeeper, including the possibility to impose fines or periodic penalty payments. The Commission should be able to reopen proceedings, including where the specified measures turn out not to be effective. A reopening due to an ineffective specification adopted by decision should enable the Commission to amend the specification prospectively. The Commission should also be able to set a reasonable time period within which the proceedings can be reopened if the specified measures turn out not to be effective.
As an additional element to ensure proportionality, gatekeepers should be given an opportunity to request the suspension, to the extent necessary, of a specific obligation in exceptional circumstances that lie beyond the control of the gatekeeper, such as an unforeseen external shock that has temporarily eliminated a significant part of end user demand for the relevant core platform service, where compliance with a specific obligation is shown by the gatekeeper to endanger the economic viability of the Union operations of the gatekeeper concerned. The Commission should identify the exceptional circumstances justifying the suspension and review it on a regular basis in order to assess whether the conditions for granting it are still met.
In exceptional circumstances, justified on the limited grounds of public health or public security laid down in Union law and interpreted by the Court of Justice, the Commission should be able to decide that a specific obligation does not apply to a specific core platform service. Harm to such public interests could indicate that the cost to society as a whole of enforcing a certain obligation is, in a specific exceptional case, too high and thus disproportionate. Where appropriate, the Commission should be able to facilitate compliance by assessing whether a limited and duly justified suspension or exemption is justified. This should ensure the proportionality of the obligations in this Regulation without undermining the intended ex ante effects on fairness and contestability. Where such an exemption is granted, the Commission should review its decision every year.
Within the timeframe for complying with their obligations under this Regulation, gatekeepers should inform the Commission, through mandatory reporting, about the measures they intend to implement or have implemented in order to ensure effective compliance with those obligations, including those measures concerning compliance with Regulation (EU) 2016/679, to the extent they are relevant for compliance with the obligations provided under this Regulation, which should allow the Commission to fulfil its duties under this Regulation. In addition, a clear and comprehensible non-confidential summary of such information should be made publicly available while taking into account the legitimate interest of gatekeepers in the protection of their business secrets and other confidential information. This non-confidential publication should enable third parties to assess whether the gatekeepers comply with the obligations laid down in this Regulation. Such reporting should be without prejudice to any enforcement action by the Commission at any time following the reporting. The Commission should publish online a link to the non-confidential summary of the report, as well as all other public information based on information obligations under this Regulation, in order to ensure accessibility of such information in a usable and comprehensive manner, in particular for small and medium enterprises (SMEs).
The obligations of gatekeepers should only be updated after a thorough investigation into the nature and impact of specific practices that may be newly identified, following an in-depth investigation, as unfair or limiting contestability in the same manner as the unfair practices laid down in this Regulation while potentially escaping the scope of the current set of obligations. The Commission should be able to launch an investigation with a view to determining whether the existing obligations need to be updated, either on its own initiative or following a justified request of at least three Member States. When presenting such justified requests, it should be possible for Member States to include information on newly introduced offers of products, services, software or features which raise concerns of contestability or fairness, whether implemented in the context of existing core platform services or otherwise. Where, following a market investigation, the Commission deems it necessary to modify essential elements of this Regulation, such as the inclusion of new obligations that depart from the same contestability or fairness issues addressed by this Regulation, the Commission should advance a proposal to amend this Regulation.
Given the substantial economic power of gatekeepers, it is important that the obligations are applied effectively and are not circumvented. To that end, the rules in question should apply to any practice by a gatekeeper, irrespective of its form and irrespective of whether it is of a contractual, commercial, technical or any other nature, insofar as the practice corresponds to the type of practice that is the subject of one of the obligations laid down by this Regulation. Gatekeepers should not engage in behaviour that would undermine the effectiveness of the prohibitions and obligations laid down in this Regulation. Such behaviour includes the design used by the gatekeeper, the presentation of end-user choices in a non-neutral manner, or using the structure, function or manner of operation of a user interface or a part thereof to subvert or impair user autonomy, decision-making, or choice. Furthermore, the gatekeeper should not be allowed to engage in any behaviour undermining interoperability as required under this Regulation, such as for example by using unjustified technical protection measures, discriminatory terms of service, unlawfully claiming a copyright on application programming interfaces or providing misleading information. Gatekeepers should not be allowed to circumvent their designation by artificially segmenting, dividing, subdividing, fragmenting or splitting their core platform services to circumvent the quantitative thresholds laid down in this Regulation.
To ensure the effectiveness of the review of gatekeeper status, as well as the possibility to adjust the list of core platform services provided by a gatekeeper, the gatekeepers should inform the Commission of all of their intended acquisitions, prior to their implementation, of other undertakings providing core platform services or any other services provided within the digital sector or other services that enable the collection of data. Such information should not only serve the review process regarding the status of individual gatekeepers, but will also provide information that is crucial to monitoring broader contestability trends in the digital sector and can therefore be a useful factor for consideration in the context of the market investigations provided for by this Regulation. Furthermore, the Commission should inform Member States of such information, given the possibility of using the information for national merger control purposes and as, under certain circumstances, it is possible for the national competent authority to refer those acquisitions to the Commission for the purposes of merger control. The Commission should also publish annually a list of acquisitions of which it has been informed by the gatekeeper. To ensure the necessary transparency and usefulness of such information for different purposes provided for by this Regulation, gatekeepers should provide at least information about the undertakings concerned by the concentration, their Union and worldwide annual turnover, their field of activity, including activities directly related to the concentration, the transaction value or an estimation thereof, a summary of the concentration, including its nature and rationale, as well as a list of the Member States concerned by the operation.
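The information items listed above could, for example, be captured in a structured notification record such as the sketch below; the Regulation specifies what must be provided but not in which form, so the layout and field names are assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative structure for informing the Commission of an intended acquisition.
@dataclass
class ConcentrationNotification:
    undertakings_concerned: list
    union_annual_turnover_eur: dict          # per undertaking concerned
    worldwide_annual_turnover_eur: dict      # per undertaking concerned
    fields_of_activity: list                 # incl. activities directly related to the concentration
    transaction_value_eur: Optional[float]   # or an estimation thereof
    summary: str                             # nature and rationale of the concentration
    member_states_concerned: list = field(default_factory=list)
```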
The data protection and privacy interests of end users are relevant to any assessment of potential negative effects of the observed practice of gatekeepers to collect and accumulate large amounts of data from end users. Ensuring an adequate level of transparency of profiling practices employed by gatekeepers, including, but not limited to, profiling within the meaning of Article 4, point (4), of Regulation (EU) 2016/679, facilitates contestability of core platform services. Transparency puts external pressure on gatekeepers not to make deep consumer profiling the industry standard, given that potential entrants or start-ups cannot access data to the same extent and depth, and at a similar scale. Enhanced transparency should allow other undertakings providing core platform services to differentiate themselves better through the use of superior privacy guarantees. To ensure a minimum level of effectiveness of this transparency obligation, gatekeepers should at least provide an independently audited description of the basis upon which profiling is performed, including whether personal data and data derived from user activity in line with Regulation (EU) 2016/679 is relied on, the processing applied, the purpose for which the profile is prepared and eventually used, the duration of the profiling, the impact of such profiling on the gatekeeper’s services, and the steps taken to effectively enable end users to be aware of the relevant use of such profiling, as well as steps to seek their consent or provide them with the possibility of denying or withdrawing consent. The Commission should transfer the audited description to the European Data Protection Board to inform the enforcement of Union data protection rules. The Commission should be empowered to develop the methodology and procedure for the audited description, in consultation with the European Data Protection Supervisor, the European Data Protection Board, civil society and experts, in line with Regulations (EU) No 182/2011 [^16] and (EU) 2018/1725 [^17] of the European Parliament and of the Council.
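The elements that the audited description should cover could be organised, for instance, as in the sketch below; the structure and the example values are illustrative assumptions rather than a prescribed template.

```python
# Illustrative layout for the independently audited description of profiling
# practices; the elements mirror those listed above, the values are invented.
PROFILING_DESCRIPTION = {
    "basis_of_profiling": "behavioural signals collected from feed interactions",
    "relies_on_personal_data": True,
    "relies_on_data_derived_from_user_activity": True,
    "processing_applied": ["interest scoring", "clustering of interaction histories"],
    "purpose": "ranking of content and selection of advertising",
    "duration_of_profiling": "rolling 13-month window",
    "impact_on_services": "determines ordering of items and ads shown to the end user",
    "user_awareness_measures": ["in-product notice", "dedicated settings page"],
    "consent_mechanism": "opt-in prompt with the option to deny or withdraw at any time",
}

REQUIRED_ELEMENTS = set(PROFILING_DESCRIPTION)  # the keys any submission should cover

def is_complete(description: dict) -> bool:
    """Check that a submitted description addresses every required element."""
    return REQUIRED_ELEMENTS <= description.keys()

assert is_complete(PROFILING_DESCRIPTION)
```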
In order to ensure the full and lasting achievement of the objectives of this Regulation, the Commission should be able to assess whether an undertaking providing core platform services should be designated as a gatekeeper without meeting the quantitative thresholds laid down in this Regulation; whether systematic non-compliance by a gatekeeper warrants imposing additional remedies; whether more services within the digital sector should be added to the list of core platform services; and whether additional practices that are similarly unfair and limiting the contestability of digital markets need to be investigated. Such assessment should be based on market investigations to be carried out in an appropriate timeframe, by using clear procedures and deadlines, in order to support the ex ante effect of this Regulation on contestability and fairness in the digital sector, and to provide the requisite degree of legal certainty.
The Commission should be able to find, following a market investigation, that an undertaking providing a core platform service fulfils all of the overarching qualitative criteria for being identified as a gatekeeper. That undertaking should then, in principle, comply with all of the relevant obligations laid down by this Regulation. However, for gatekeepers that have been designated by the Commission because it is foreseeable that they will enjoy an entrenched and durable position in the near future, the Commission should only impose those obligations that are necessary and appropriate to prevent the gatekeeper concerned from achieving an entrenched and durable position in its operations. With respect to such emerging gatekeepers, the Commission should take into account that this status is in principle of a temporary nature, and it should therefore be decided at a given moment whether such an undertaking providing core platform services should be subjected to the full set of gatekeeper obligations because it has acquired an entrenched and durable position, or whether the conditions for designation are ultimately not met and therefore all previously imposed obligations should be waived.
The Commission should investigate and assess whether additional behavioural, or, where appropriate, structural remedies are justified, in order to ensure that the gatekeeper cannot frustrate the objectives of this Regulation by systematic non-compliance with one or several of the obligations laid down in this Regulation. This is the case where the Commission has issued against a gatekeeper at least three non-compliance decisions within a period of eight years, which can concern different core platform services and different obligations laid down in this Regulation, and if the gatekeeper has maintained, extended or further strengthened its impact in the internal market, the economic dependency of its business users and end users on the gatekeeper’s core platform services or the entrenchment of its position. A gatekeeper should be deemed to have maintained, extended or strengthened its gatekeeper position where, despite the enforcement actions taken by the Commission, that gatekeeper still holds or has further consolidated or entrenched its importance as a gateway for business users to reach end users. The Commission should in such cases have the power to impose any remedy, whether behavioural or structural, having due regard to the principle of proportionality. In this context, the Commission should have the power to prohibit, to the extent that such remedy is proportionate and necessary in order to maintain or restore fairness and contestability as affected by the systematic non-compliance, during a limited time period, the gatekeeper from entering into a concentration regarding those core platform services or the other services provided in the digital sector or services enabling the collection of data that are affected by the systematic non-compliance. In order to enable effective involvement of third parties and the possibility to test remedies before their application, the Commission should publish a detailed non-confidential summary of the case and the measures to be taken. The Commission should be able to reopen proceedings, including where the specified remedies turn out not to be effective. A reopening due to ineffective remedies adopted by decision should enable the Commission to amend the remedies prospectively. The Commission should also be able to set a reasonable time period within which it should be possible to reopen the proceedings if the remedies prove not to be effective.
Where, in the course of an investigation into systematic non-compliance, a gatekeeper offers commitments to the Commission, the latter should be able to adopt a decision making these commitments binding on the gatekeeper concerned, where it finds that the commitments ensure effective compliance with the obligations set out in this Regulation. That decision should also find that there are no longer grounds for action by the Commission as regards the systematic non-compliance under investigation. In assessing whether the commitments offered by the gatekeeper are sufficient to ensure effective compliance with the obligations under this Regulation, the Commission should be allowed to take into account tests undertaken by the gatekeeper to demonstrate the effectiveness of the offered commitments in practice. The Commission should verify that the commitments decision is fully respected and reaches its objectives, and should be entitled to reopen the decision if it finds that the commitments are not effective.
The services in the digital sector and the types of practices relating to these services can change quickly and to a significant extent. To ensure that this Regulation remains up to date and constitutes an effective and holistic regulatory response to the problems posed by gatekeepers, it is important to provide for a regular review of the lists of core platform services, as well as of the obligations provided for in this Regulation. This is particularly important to ensure that a practice that is likely to limit the contestability of core platform services or is unfair is identified. While it is important to conduct a review on a regular basis, given the dynamically changing nature of the digital sector, in order to ensure legal certainty as to the regulatory conditions, any reviews should be conducted within a reasonable and appropriate timeframe. Market investigations should also ensure that the Commission has a solid evidentiary basis on which it can assess whether it should propose to amend this Regulation in order to review, expand, or further detail, the lists of core platform services. They should equally ensure that the Commission has a solid evidentiary basis on which it can assess whether it should propose to amend the obligations laid down in this Regulation or whether it should adopt a delegated act updating such obligations.
With regard to conduct by gatekeepers that is not covered by the obligations set out in this Regulation, the Commission should have the possibility to open a market investigation into new services and new practices for the purposes of identifying whether the obligations set out in this Regulation are to be supplemented by means of a delegated act falling within the scope of the empowerment set out for such delegated acts in this Regulation, or by presenting a proposal to amend this Regulation. This is without prejudice to the possibility for the Commission to, in appropriate cases, open proceedings under Article 101 or 102 TFEU. Such proceedings should be conducted in accordance with Council Regulation (EC) No 1/2003 [^18]. In cases of urgency due to the risk of serious and irreparable damage to competition, the Commission should consider adopting interim measures in accordance with Article 8 of Regulation (EC) No 1/2003.
In the event that gatekeepers engage in a practice that is unfair or that limits the contestability of the core platform services that are already designated under this Regulation but without such practices being explicitly covered by the obligations laid down by this Regulation, the Commission should be able to update this Regulation through delegated acts. Such updates by way of delegated act should be subject to the same investigatory standard and therefore should be preceded by a market investigation. The Commission should also apply a predefined standard in identifying such types of practices. This legal standard should ensure that the type of obligations that gatekeepers could at any time face under this Regulation are sufficiently predictable.
In order to ensure effective implementation and compliance with this Regulation, the Commission should have strong investigative and enforcement powers, to allow it to investigate, enforce and monitor the rules laid down in this Regulation, while at the same time ensuring the respect for the fundamental right to be heard and to have access to the file in the context of the enforcement proceedings. The Commission should dispose of these investigative powers also for the purpose of carrying out market investigations, including for the purpose of updating and reviewing this Regulation.
The Commission should be empowered to request information necessary for the purpose of this Regulation. In particular, the Commission should have access to any relevant documents, data, database, algorithm and information necessary to open and conduct investigations and to monitor the compliance with the obligations laid down in this Regulation, irrespective of who possesses such information, and regardless of their form or format, their storage medium, or the place where they are stored.
The Commission should be able to directly request that undertakings or associations of undertakings provide any relevant evidence, data and information. In addition, the Commission should be able to request any relevant information from competent authorities within the Member State, or from any natural person or legal person for the purpose of this Regulation. When complying with a decision of the Commission, undertakings are obliged to answer factual questions and to provide documents.
The Commission should also be empowered to conduct inspections of any undertaking or association of undertakings and to interview any persons who could be in possession of useful information and to record the statements made.
Interim measures can be an important tool to ensure that, while an investigation is ongoing, the infringement being investigated does not lead to serious and irreparable damage for business users or end users of gatekeepers. This tool is important to avoid developments that could be very difficult to reverse by a decision taken by the Commission at the end of the proceedings. The Commission should therefore have the power to order interim measures in the context of proceedings opened in view of the possible adoption of a non-compliance decision. This power should apply in cases where the Commission has made a prima facie finding of infringement of obligations by gatekeepers and where there is a risk of serious and irreparable damage for business users or end users of gatekeepers. Interim measures should only apply for a specified period, either one ending with the conclusion of the proceedings by the Commission, or for a fixed period which can be renewed insofar as it is necessary and appropriate.
The Commission should be able to take the necessary actions to monitor the effective implementation of and compliance with the obligations laid down in this Regulation. Such actions should include the ability of the Commission to appoint independent external experts and auditors to assist the Commission in this process, including, where applicable, from competent authorities of the Member States, such as data or consumer protection authorities. When appointing auditors, the Commission should ensure sufficient rotation.
Compliance with the obligations imposed by this Regulation should be enforceable by means of fines and periodic penalty payments. To that end, appropriate levels of fines and periodic penalty payments should also be laid down for non-compliance with the obligations and breach of the procedural rules subject to appropriate limitation periods, in accordance with the principles of proportionality and ne bis in idem. The Commission and the relevant national authorities should coordinate their enforcement efforts in order to ensure that those principles are respected. In particular, the Commission should take into account any fines and penalties imposed on the same legal person for the same facts through a final decision in proceedings relating to an infringement of other Union or national rules, so as to ensure that the overall fines and penalties imposed correspond to the seriousness of the infringements committed.
In order to ensure effective recovery of fines imposed on associations of undertakings for infringements that they have committed, it is necessary to lay down the conditions on which it should be possible for the Commission to require payment of the fine from the members of that association of undertakings where it is not solvent.
In the context of proceedings carried out under this Regulation, the undertaking concerned should be accorded the right to be heard by the Commission and the decisions taken should be widely publicised. While ensuring the rights to good administration, the right of access to the file and the right to be heard, it is essential to protect confidential information. Furthermore, while respecting the confidentiality of the information, the Commission should ensure that any information on which the decision is based is disclosed to an extent that allows the addressee of the decision to understand the facts and considerations that led to the decision. It is also necessary to ensure that the Commission only uses information collected pursuant to this Regulation for the purposes of this Regulation, except where specifically envisaged otherwise. Finally, it should be possible, under certain conditions, for certain business records, such as communication between lawyers and their clients, to be considered confidential if the relevant conditions are met.
When preparing non-confidential summaries for publication in order to effectively enable interested third parties to provide comments, the Commission should give due regard to the legitimate interest of undertakings in the protection of their business secrets and other confidential information.
The coherent, effective and complementary enforcement of available legal instruments applied to gatekeepers requires cooperation and coordination between the Commission and national authorities within the remit of their competences. The Commission and national authorities should cooperate and coordinate their actions necessary for the enforcement of the available legal instruments applied to gatekeepers within the meaning of this Regulation and respect the principle of sincere cooperation laid down in Article 4 of the Treaty on European Union (TEU). It should be possible for the support from national authorities to the Commission to include providing the Commission with all necessary information in their possession or assisting the Commission, at its request, with the exercise of its powers so that the Commission is better able to carry out its duties under this Regulation.
The Commission is the sole authority empowered to enforce this Regulation. In order to support the Commission, it should be possible for Member States to empower their national competent authorities enforcing competition rules to conduct investigations into possible non-compliance by gatekeepers with certain obligations under this Regulation. This could in particular be relevant for cases where it cannot be determined from the outset whether a gatekeeper’s behaviour is capable of infringing this Regulation, the competition rules which the national competent authority is empowered to enforce, or both. The national competent authority enforcing competition rules should report on its findings on possible non-compliance by gatekeepers with certain obligations under this Regulation to the Commission in view of the Commission opening proceedings to investigate any non-compliance as the sole enforcer of the provisions laid down by this Regulation. The Commission should have full discretion to decide whether to open such proceedings. In order to avoid overlapping investigations under this Regulation, the national competent authority concerned should inform the Commission before taking its first investigative measure into a possible non-compliance by gatekeepers with certain obligations under this Regulation. The national competent authorities should also closely cooperate and coordinate with the Commission when enforcing national competition rules against gatekeepers, including with regard to the setting of fines. To that end, they should inform the Commission when initiating proceedings based on national competition rules against gatekeepers, as well as prior to imposing obligations on gatekeepers in such proceedings. In order to avoid duplication, it should be possible for information of the draft decision pursuant to Article 11 of Regulation (EC) No 1/2003, where applicable, to serve as notification under this Regulation.
In order to safeguard the harmonised application and enforcement of this Regulation, it is important to ensure that national authorities, including national courts, have all necessary information to ensure that their decisions do not run counter to a decision adopted by the Commission under this Regulation. National courts should be allowed to ask the Commission to send them information or opinions on questions concerning the application of this Regulation. At the same time, the Commission should be able to submit oral or written observations to national courts. This is without prejudice to the ability of national courts to request a preliminary ruling under Article 267 TFEU.
In order to ensure coherence and effective complementarity in the implementation of this Regulation and of other sectoral regulations applicable to gatekeepers, the Commission should benefit from the expertise of a dedicated high-level group. It should be possible for that high-level group to also assist the Commission by means of advice, expertise and recommendations, when relevant, in general matters relating to the implementation or enforcement of this Regulation. The high-level group should be composed of the relevant European bodies and networks, and its composition should ensure a high level of expertise and a geographical balance. The members of the high-level group should regularly report to the bodies and networks they represent regarding the tasks performed in the context of the group, and consult them in that regard.
Since the decisions taken by the Commission under this Regulation are subject to review by the Court of Justice in accordance with the TFEU, the Court of Justice should, in accordance with Article 261 TFEU, have unlimited jurisdiction in respect of fines and penalty payments.
It should be possible for the Commission to develop guidelines to provide further guidance on different aspects of this Regulation or to assist undertakings providing core platform services in the implementation of the obligations under this Regulation. It should be possible for such guidance to be based in particular on the experience that the Commission obtains through the monitoring of compliance with this Regulation. The issuing of any guidelines under this Regulation is a prerogative and at the sole discretion of the Commission and should not be considered to be a constitutive element in ensuring that the undertakings or associations of undertakings concerned comply with the obligations under this Regulation.
The implementation of some of the gatekeepers’ obligations, such as those related to data access, data portability or interoperability could be facilitated by the use of technical standards. In this respect, it should be possible for the Commission, where appropriate and necessary, to request European standardisation bodies to develop them.
In order to ensure contestable and fair markets in the digital sector across the Union where gatekeepers are present, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission in respect of amending the methodology for determining whether the quantitative thresholds regarding active end users and active business users for the designation of gatekeepers are met, which is contained in an Annex to this Regulation, in respect of further specifying the additional elements of the methodology not falling in that Annex for determining whether the quantitative thresholds regarding the designation of gatekeepers are met, and in respect of supplementing the existing obligations laid down in this Regulation where, based on a market investigation, the Commission has identified the need for updating the obligations addressing practices that limit the contestability of core platform services or are unfair, and the update considered falls within the scope of the empowerment set out for such delegated acts in this Regulation.
When adopting delegated acts under this Regulation, it is of particular importance that the Commission carries out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making [^19]. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council should receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.
In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission to specify measures to be implemented by gatekeepers in order to effectively comply with the obligations under this Regulation; to suspend, in whole or in part, a specific obligation imposed on a gatekeeper; to exempt a gatekeeper, in whole or in part, from a specific obligation; to specify the measures to be implemented by a gatekeeper when it circumvents the obligations under this Regulation; to conclude a market investigation for designating gatekeepers; to impose remedies in the case of systematic non-compliance; to order interim measures against a gatekeeper; to make commitments binding on a gatekeeper; to set out its finding of a non-compliance; to set the definitive amount of the periodic penalty payment; to determine the form, content and other details of notifications, submissions of information, reasoned requests and regulatory reports transmitted by gatekeepers; to lay down operational and technical arrangements in view of implementing interoperability and the methodology and procedure for the audited description of techniques used for profiling consumers; to provide for practical arrangements for proceedings, extensions of deadlines, exercising rights during proceedings, terms of disclosure, as well as for the cooperation and coordination between the Commission and national authorities. Those powers should be exercised in accordance with Regulation (EU) No 182/2011.
The examination procedure should be used for the adoption of an implementing act on the practical arrangements for the cooperation and coordination between the Commission and Member States. The advisory procedure should be used for the remaining implementing acts envisaged by this Regulation. This is justified by the fact that those remaining implementing acts relate to practical aspects of the procedures laid down in this Regulation, such as form, content and other details of various procedural steps, to practical arrangements of different procedural steps, such as, for example, extension of procedural deadlines or right to be heard, as well as to individual implementing decisions addressed to a gatekeeper.
In accordance with Regulation (EU) No 182/2011, each Member State should be represented in the advisory committee and decide on the composition of its delegation. Such delegation can include, inter alia, experts from the competent authorities within the Member States, which hold the relevant expertise for a specific issue presented to the advisory committee.
Whistleblowers can bring new information to the attention of competent authorities which can help the competent authorities detect infringements of this Regulation and enable them to impose penalties. It should be ensured that adequate arrangements are in place to enable whistleblowers to alert the competent authorities to actual or potential infringements of this Regulation and to protect the whistleblowers from retaliation. For that purpose, it should be provided in this Regulation that Directive (EU) 2019/1937 of the European Parliament and of the Council [^20] is applicable to the reporting of breaches of this Regulation and to the protection of persons reporting such breaches.
To enhance legal certainty, the applicability, pursuant to this Regulation, of Directive (EU) 2019/1937 to reports of breaches of this Regulation and to the protection of persons reporting such breaches should be reflected in that Directive. The Annex to Directive (EU) 2019/1937 should therefore be amended accordingly. It is for the Member States to ensure that that amendment is reflected in their transposition measures adopted in accordance with Directive (EU) 2019/1937, although the adoption of national transposition measures is not a condition for the applicability of that Directive to the reporting of breaches of this Regulation and to the protection of reporting persons from the date of application of this Regulation.
Consumers should be entitled to enforce their rights in relation to the obligations imposed on gatekeepers under this Regulation through representative actions in accordance with Directive (EU) 2020/1828 of the European Parliament and of the Council [^21]. For that purpose, this Regulation should provide that Directive (EU) 2020/1828 is applicable to the representative actions brought against infringements by gatekeepers of provisions of this Regulation that harm or can harm the collective interests of consumers. The Annex to that Directive should therefore be amended accordingly. It is for the Member States to ensure that that amendment is reflected in their transposition measures adopted in accordance with Directive (EU) 2020/1828, although the adoption of national transposition measures in this regard is not a condition for the applicability of that Directive to those representative actions. The applicability of Directive (EU) 2020/1828 to the representative actions brought against infringements by gatekeepers of provisions of this Regulation that harm or can harm the collective interests of consumers should start from the date of application of Member States’ laws, regulations and administrative provisions necessary to transpose that Directive, or from the date of application of this Regulation, whichever is the later.
The Commission should periodically evaluate this Regulation and closely monitor its effects on the contestability and fairness of commercial relationships in the online platform economy, in particular with a view to determining the need for amendments in light of relevant technological or commercial developments. That evaluation should include the regular review of the list of core platform services and the obligations addressed to gatekeepers, as well as their enforcement, in view of ensuring that digital markets across the Union are contestable and fair. In that context, the Commission should also evaluate the scope of the obligation concerning the interoperability of number-independent electronic communications services. In order to obtain a broad view of developments in the digital sector, the evaluation should take into account the experiences of Member States and relevant stakeholders. It should be possible for the Commission in this regard also to consider the opinions and reports presented to it by the Observatory on the Online Platform Economy that was first established by Commission Decision C(2018)2393 of 26 April 2018. Following the evaluation, the Commission should take appropriate measures. The Commission should maintain a high level of protection and respect for the common rights and values, particularly equality and non-discrimination, as an objective when conducting the assessments and reviews of the practices and obligations provided in this Regulation.
Without prejudice to the budgetary procedure and through existing financial instruments, adequate human, financial and technical resources should be allocated to the Commission to ensure that it can effectively perform its duties and exercise its powers in respect of the enforcement of this Regulation.
Since the objective of this Regulation, namely to ensure a contestable and fair digital sector in general and core platform services in particular, with a view to promoting innovation, high quality of digital products and services, fair and competitive prices, as well as a high quality and choice for end users in the digital sector, cannot be sufficiently achieved by the Member States, but can rather, by reason of the business model and operations of the gatekeepers and the scale and effects of their operations, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality, as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.
The European Data Protection Supervisor was consulted in accordance with Article 42 of Regulation (EU) 2018/1725 and delivered an opinion on 10 February 2021 [^22].
This Regulation respects the fundamental rights and observes the principles recognised by the Charter of Fundamental Rights of the European Union, in particular Articles 16, 47 and 50 thereof. Accordingly, the interpretation and application of this Regulation should respect those rights and principles,

HAVE ADOPTED THIS REGULATION:
For the purposes of this Regulation, the following definitions apply:
(b) there is an imbalance between the rights and obligations of business users and the gatekeeper obtains an advantage from business users that is disproportionate to the service provided by that gatekeeper to those business users.
In case of urgency due to the risk of serious and irreparable damage for business users or end users of gatekeepers, the Commission may adopt an implementing act ordering interim measures against a gatekeeper on the basis of a prima facie finding of an infringement of Article 5, 6 or 7. That implementing act shall be adopted only in the context of proceedings opened with a view to the possible adoption of a non-compliance decision pursuant to Article 29(1). It shall apply only for a specified period of time and may be renewed in so far as this is necessary and appropriate. That implementing act shall be adopted in accordance with the advisory procedure referred to in Article 50(2).
Directive (EU) 2020/1828 shall apply to the representative actions brought against infringements by gatekeepers of provisions of this Regulation that harm or may harm the collective interests of consumers.
Directive (EU) 2019/1937 shall apply to the reporting of all breaches of this Regulation and the protection of persons reporting such breaches.
In accordance with Article 261 TFEU, the Court of Justice has unlimited jurisdiction to review decisions by which the Commission has imposed fines or periodic penalty payments. It may cancel, reduce or increase the fine or periodic penalty payment imposed.
The Commission may adopt guidelines on any of the aspects of this Regulation in order to facilitate its effective implementation and enforcement.
Where appropriate and necessary, the Commission may mandate European standardisation bodies to facilitate the implementation of the obligations set out in this Regulation by developing appropriate standards.
In Point J of Part I of the Annex to Directive (EU) 2019/1937, the following point is added: ‘(iv) Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (OJ L 265, 21.9.2022, p. 1).’
In Annex I to Directive (EU) 2020/1828, the following point is added: ‘(67) Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) (OJ L 265, 21.9.2022, p. 1).’
This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union. It shall apply from 2 May 2023.
However, Article 3(6) and (7) and Articles 40, 46, 47, 48, 49 and 50 shall apply from 1 November 2022 and Article 42 and Article 43 shall apply from 25 June 2023. Nevertheless, if the date of 25 June 2023 precedes the date of application referred to in the second paragraph of this Article, the application of Article 42 and Article 43 shall be postponed until the date of application referred to in the second paragraph of this Article.
This Regulation shall be binding in its entirety and directly applicable in all Member States.
Done at Strasbourg, 14 September 2022.
For the European Parliament
The President
R. METSOLA

For the Council
The President
M. BEK
[^1] OJ C 286, 16.7.2021, p. 64.
[^2] OJ C 440, 29.10.2021, p. 67.
[^3] Position of the European Parliament of 5 July 2022 (not yet published in the Official Journal) and decision of the Council of 18 July 2022.
[^4] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).
[^5] Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services (OJ L 186, 11.7.2019, p. 57).
[^6] Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (OJ L 201, 31.7.2002, p. 37).
[^7] Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council (‘Unfair Commercial Practices Directive’) (OJ L 149, 11.6.2005, p. 22).
[^8] Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) (OJ L 95, 15.4.2010, p. 1).
[^9] Directive (EU) 2015/2366 of the European Parliament and of the Council of 25 November 2015 on payment services in the internal market, amending Directives 2002/65/EC, 2009/110/EC and 2013/36/EU and Regulation (EU) No 1093/2010, and repealing Directive 2007/64/EC (OJ L 337, 23.12.2015, p. 35).
[^10] Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (OJ L 130, 17.5.2019, p. 92).
[^11] Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for products and services (OJ L 151, 7.6.2019, p. 70).
[^12] Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts (OJ L 95, 21.4.1993, p. 29).
[^13] Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services (OJ L 241, 17.9.2015, p. 1).
[^14] Directive (EU) 2018/1972 of the European Parliament and of the Council of 11 December 2018 establishing the European Electronic Communications Code (OJ L 321, 17.12.2018, p. 36).
[^15] Directive (EU) 2016/2102 of the European Parliament and of the Council of 26 October 2016 on the accessibility of the websites and mobile applications of public sector bodies (OJ L 327, 2.12.2016, p. 1).
[^16] Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and general principles concerning mechanisms for control by the Member States of the Commission’s exercise of implementing powers (OJ L 55, 28.2.2011, p. 13).
[^17] Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).
[^18] Council Regulation (EC) No 1/2003 of 16 December 2002 on the implementation of the rules on competition laid down in Articles 81 and 82 of the Treaty (OJ L 1, 4.1.2003, p. 1).
[^19] OJ L 123, 12.5.2016, p. 1.
[^20] Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who report breaches of Union law (OJ L 305, 26.11.2019, p. 17).
[^21] Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1).
[^22] OJ C 147, 26.4.2021, p. 4.
[^23] Council Regulation (EC) No 139/2004 of 20 January 2004 on the control of concentrations between undertakings (the EC Merger Regulation) (OJ L 24, 29.1.2004, p. 1).
[^24] Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union (OJ L 194, 19.7.2016, p. 1).
A. ‘General’
This Annex aims at specifying the methodology for identifying and calculating the ‘active end users’ and the ‘active business users’ for each core platform service listed in Article 2, point (2). It provides a reference to enable an undertaking to assess whether its core platform services meet the quantitative thresholds set out in Article 3(2), point (b) and would therefore be presumed to meet the requirement in Article 3(1), point (b). Such reference will therefore equally be of relevance to any broader assessment under Article 3(8). It is the responsibility of the undertaking to come to the best approximation possible in line with the common principles and specific methodology set out in this Annex. Nothing in this Annex precludes the Commission, within the time limits laid down in the relevant provisions of this Regulation, from requiring the undertaking providing core platform services to provide any information necessary to identify and calculate the ‘active end users’ and the ‘active business users’. Nothing in this Annex should constitute a legal basis for tracking users. The methodology contained in this Annex is also without prejudice to any of the obligations laid down in this Regulation, notably in Article 3(3) and (8) and Article 13(3). In particular, the required compliance with Article 13(3) also means identifying and calculating ‘active end users’ and ‘active business users’ based either on a precise measurement or on the best approximation available, in line with the actual identification and calculation capacities that the undertaking providing core platform services possesses at the relevant point in time. Those measurements or the best approximation available shall be consistent with, and include, those reported under Article 15.
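For orientation, the Annex figures feed the quantitative test laid down in Article 3(2), point (b). The following is a minimal, purely illustrative sketch of that check, assuming the thresholds set in that provision (at least 45 million average monthly active end users in the Union and at least 10 000 yearly active business users established in the Union, met in each of the last three financial years); the data structure and field names are hypothetical, not taken from the Regulation.

```python
# Illustrative sketch only: not a prescribed compliance tool.
END_USER_THRESHOLD = 45_000_000   # monthly active end users, Article 3(2), point (b)
BUSINESS_USER_THRESHOLD = 10_000  # yearly active business users, Article 3(2), point (b)


def meets_user_thresholds(yearly_figures: list[dict]) -> bool:
    """yearly_figures: one dict per financial year, oldest first, e.g.
    {"avg_monthly_active_end_users": 50_000_000, "yearly_active_business_users": 12_000}.
    Returns True only if both user thresholds are met in each of the last three years."""
    last_three = yearly_figures[-3:]
    if len(last_three) < 3:
        return False  # the presumption requires three financial years of figures
    return all(
        year["avg_monthly_active_end_users"] >= END_USER_THRESHOLD
        and year["yearly_active_business_users"] >= BUSINESS_USER_THRESHOLD
        for year in last_three
    )
```

Meeting this quantitative test only triggers the presumption in Article 3(1), point (b); the broader assessment under Article 3(8) remains separate.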
Article 2, points (20) and (21) set out the definitions of ‘end user’ and ‘business user’, which are common to all core platform services.
In order to identify and calculate the number of ‘active end users’ and ‘active business users’, this Annex refers to the concept of ‘unique users’. The concept of ‘unique users’ encompasses ‘active end users’ and ‘active business users’ counted only once, for the relevant core platform service, over the course of a specified time period (i.e. month in case of ‘active end users’ and year in case of ‘active business users’), no matter how many times they engaged with the relevant core platform service over that period. This is without prejudice to the fact that the same natural or legal person can simultaneously constitute an ‘active end user’ or an ‘active business user’ for different core platform services.

B. ‘Active end users’
The number of ‘unique users’ as regards ‘active end users’ shall be identified according to the most accurate metric reported by the undertaking providing any of the core platform services, specifically:

a. It is considered that collecting data about the use of core platform services from signed-in or logged-in environments would prima facie present the lowest risk of duplication, for example in relation to user behaviour across devices or platforms. Hence, the undertaking shall submit aggregate anonymized data on the number of unique end users per respective core platform service based on signed-in or logged-in environments, if such data exists.

b. In the case of core platform services which are also accessed by end users outside signed-in or logged-in environments, the undertaking shall additionally submit aggregate anonymized data on the number of unique end users of the respective core platform service based on an alternate metric capturing also end users outside signed-in or logged-in environments, such as internet protocol addresses, cookie identifiers or other identifiers such as radio frequency identification tags, provided that those addresses or identifiers are objectively necessary for the provision of the core platform services.
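Purely as an illustration of the deduplication described in points a and b above (not a method prescribed by the Annex), the sketch below counts unique monthly active end users from a hypothetical event log, preferring a signed-in account identifier and falling back to an alternate identifier such as a cookie identifier; all field names are assumptions.

```python
from collections import defaultdict
from datetime import date


def monthly_unique_end_users(events: list[dict]) -> dict[str, int]:
    """events: one dict per engagement, e.g.
    {"timestamp": date(2023, 3, 14), "account_id": "a-123", "cookie_id": "c-456"}.
    Each identifier is counted once per month, however often it engaged,
    mirroring the 'unique users' concept described above."""
    users_per_month: dict[str, set[str]] = defaultdict(set)
    for event in events:
        month = event["timestamp"].strftime("%Y-%m")
        # Prefer the signed-in/logged-in identifier (lowest duplication risk, point a);
        # fall back to an alternate identifier such as a cookie identifier (point b).
        identifier = event.get("account_id") or event.get("cookie_id")
        if identifier:
            users_per_month[month].add(identifier)
    # Note: in practice, identifiers would still need to be reconciled across
    # devices and platforms so the same person is not counted twice (see Section D).
    return {month: len(ids) for month, ids in users_per_month.items()}
```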
The number of ‘monthly active end users’ is based on the average number of monthly active end users throughout the largest part of the financial year. The notion ‘the largest part of the financial year’ is intended to allow an undertaking providing core platform services to discount outlier figures in a given year. Outlier figures inherently mean figures that fall significantly outside the normal and foreseeable figures. An unforeseen peak or drop in user engagement that occurred during a single month of the financial year is an example of what could constitute such outlier figures. Figures related to annually recurring occurrences, such as annual sales promotions, are not outlier figures.

C. ‘Active business users’

The number of ‘unique users’ as regards ‘active business users’ is to be determined, where applicable, at the account level with each distinct business account associated with the use of a core platform service provided by the undertaking constituting one unique business user of that respective core platform service. If the notion of ‘business account’ does not apply to a given core platform service, the relevant undertaking providing core platform services shall determine the number of unique business users by referring to the relevant undertaking.

D. ‘Submission of information’
The undertaking submitting to the Commission pursuant to Article 3(3) information concerning the number of active end users and active business users per core platform service shall be responsible for ensuring the completeness and accuracy of that information. In that regard:

a. The undertaking shall be responsible for submitting data for a respective core platform service that avoids under-counting and over-counting the number of active end users and active business users (for example, where users access the core platform services across different platforms or devices).

b. The undertaking shall be responsible for providing precise and succinct explanations about the methodology used to arrive at the information and for any risk of under-counting or over-counting of the number of active end users and active business users for a respective core platform service and for the solutions adopted to address that risk.

c. The undertaking shall provide data that is based on an alternative metric when the Commission has concerns about the accuracy of data provided by the undertaking providing core platform services.
For the purpose of calculating the number of ‘active end users’ and ‘active business users’:

a. The undertaking providing core platform service(s) shall not identify core platform services that belong to the same category of core platform services pursuant to Article 2, point (2) as distinct mainly on the basis that they are provided using different domain names, whether country code top-level domains (ccTLDs) or generic top-level domains (gTLDs), or any geographic attributes.

b. The undertaking providing core platform service(s) shall consider as distinct core platform services those core platform services which are used for different purposes by either their end users or their business users, or both, even if their end users or business users may be the same and even if they belong to the same category of core platform services pursuant to Article 2, point (2).

c. The undertaking providing core platform service(s) shall consider as distinct core platform services those services which the relevant undertaking offers in an integrated way, but which:

(i) do not belong to the same category of core platform services pursuant to Article 2, point (2); or

(ii) are used for different purposes by either their end users or their business users, or both, even if their end users and business users may be the same and even if they belong to the same category of core platform services pursuant to Article 2, point (2).
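Before the table of service-specific definitions, it may help to see how monthly figures could be turned into the single ‘monthly active end users’ number described in Section B, that is, an average over ‘the largest part of the financial year’ after discounting outlier months. The Annex does not prescribe any outlier test; the median-based cut-off below is purely an illustrative assumption.

```python
from statistics import median


def monthly_active_end_users(monthly_counts: list[int]) -> float:
    """monthly_counts: the twelve monthly unique end-user figures of a financial year.
    Discards months that deviate sharply from the median (an assumed, illustrative
    proxy for the Annex's 'outlier figures') and averages the remainder, which must
    still cover the largest part of the year."""
    if not monthly_counts:
        return 0.0
    med = median(monthly_counts)
    # Assumed rule of thumb: treat a month as an outlier if it deviates from the
    # median by more than 50 %. The Regulation itself fixes no such cut-off.
    kept = [c for c in monthly_counts if med == 0 or abs(c - med) / med <= 0.5]
    if len(kept) < len(monthly_counts) // 2 + 1:
        kept = monthly_counts  # never discount more than a minority of months
    return sum(kept) / len(kept)
```

Remember that annually recurring peaks, such as annual sales promotions, should not be discounted as outliers, so any automated filter would need to be reviewed against that rule.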
E. ‘Specific definitions’

The table below sets out specific definitions of ‘active end users’ and ‘active business users’ for each core platform service.

| Core platform services | Active end users | Active business users |
| --- | --- | --- |
| Online intermediation services | Number of unique end users who engaged with the online intermediation service at least once in the month, for example through actively logging-in, making a query, clicking or scrolling, or concluded a transaction through the online intermediation service at least once in the month. | Number of unique business users who had at least one item listed in the online intermediation service during the whole year or concluded a transaction enabled by the online intermediation service during the year. |
| Online search engines | Number of unique end users who engaged with the online search engine at least once in the month, for example through making a query. | Number of unique business users with business websites (i.e. website used in commercial or professional capacity) indexed by or part of the index of the online search engine during the year. |
| Online social networking services | Number of unique end users who engaged with the online social networking service at least once in the month, for example through actively logging-in, opening a page, scrolling, clicking, liking, making a query, posting or commenting. | Number of unique business users who have a business listing or business account in the online social networking service and have engaged in any way with the service at least once during the year, for example through actively logging-in, opening a page, scrolling, clicking, liking, making a query, posting, commenting or using its tools for businesses. |
| Video-sharing platform services | Number of unique end users who engaged with the video-sharing platform service at least once in the month, for example through playing a segment of audiovisual content, making a query or uploading a piece of audiovisual content, notably including user-generated videos. | Number of unique business users who provided at least one piece of audiovisual content uploaded or played on the video-sharing platform service during the year. |
| Number-independent interpersonal communication services | Number of unique end users who initiated or participated in any way in a communication through the number-independent interpersonal communication service at least once in the month. | Number of unique business users who used a business account or otherwise initiated or participated in any way in a communication through the number-independent interpersonal communication service to communicate directly with an end user at least once during the year. |
| Operating systems | Number of unique end users who utilised a device with the operating system, which has been activated, updated or used at least once in the month. | Number of unique developers who published, updated or offered at least one software application or software program using the programming language or any software development tools of, or running in any way on, the operating system during the year. |
| Virtual assistants | Number of unique end users who engaged with the virtual assistant in any way at least once in the month, such as for example through activating it, asking a question, accessing a service through a command or controlling a smart home device. | Number of unique developers who offered at least one virtual assistant software application or a functionality to make an existing software application accessible through the virtual assistant during the year. |
| Web browsers | Number of unique end users who engaged with the web browser at least once in the month, for example through inserting a query or website address in the URL line of the web browser. | Number of unique business users whose business websites (i.e. website used in commercial or professional capacity) have been accessed via the web browser at least once during the year or who offered a plug-in, extension or add-ons used on the web browser during the year. |
| Cloud computing services | Number of unique end users who engaged with any cloud computing services from the relevant provider of cloud computing services at least once in the month, in return for any type of remuneration, regardless of whether this remuneration occurs in the same month. | Number of unique business users who provided any cloud computing services hosted in the cloud infrastructure of the relevant provider of cloud computing services during the year. |
| Online advertising services | For proprietary sales of advertising space: number of unique end users who were exposed to an advertisement impression at least once in the month. For advertising intermediation services (including advertising networks, advertising exchanges and any other advertising intermediation services): number of unique end users who were exposed to an advertisement impression which triggered the advertising intermediation service at least once in the month. | For proprietary sales of advertising space: number of unique advertisers who had at least one advertisement impression displayed during the year. For advertising intermediation services (including advertising networks, advertising exchanges and any other advertising intermediation services): number of unique business users (including advertisers, publishers or other intermediators) who interacted via or were served by the advertising intermediation service during the year. |
REGULATION (EU) 2022/2554 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL
of 14 December 2022
on digital operational resilience for the financial sector and amending Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014, (EU) No 909/2014 and (EU) 2016/1011
(Text with EEA relevance)
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,
Having regard to the Treaty on the Functioning of the European Union, and in particular Article 114 thereof,
Having regard to the proposal from the European Commission,
After transmission of the draft legislative act to the national parliaments,
Having regard to the opinion of the European Central Bank (1),
Having regard to the opinion of the European Economic and Social Committee (2),
Acting in accordance with the ordinary legislative procedure (3),
Whereas:
In the digital age, information and communication technology (ICT) supports complex systems used for everyday activities. It keeps our economies running in key sectors, including the financial sector, and enhances the functioning of the internal market. Increased digitalisation and interconnectedness also amplify ICT risk, making society as a whole, and the financial system in particular, more vulnerable to cyber threats or ICT disruptions. While the ubiquitous use of ICT systems and high digitalisation and connectivity are today core features of the activities of Union financial entities, their digital resilience has yet to be better addressed and integrated into their broader operational frameworks.
The use of ICT has in the past decades gained a pivotal role in the provision of financial services, to the point where it has now acquired a critical importance in the operation of typical daily functions of all financial entities. Digitalisation now covers, for instance, payments, which have increasingly moved from cash and paper-based methods to the use of digital solutions, as well as securities clearing and settlement, electronic and algorithmic trading, lending and funding operations, peer-to-peer finance, credit rating, claim management and back-office operations. The insurance sector has also been transformed by the use of ICT, from the emergence of insurance intermediaries offering their services online operating with InsurTech, to digital insurance underwriting. Finance has not only become largely digital throughout the whole sector, but digitalisation has also deepened interconnections and dependencies within the financial sector and with third-party infrastructure and service providers.
The European Systemic Risk Board (ESRB) reaffirmed in a 2020 report addressing systemic cyber risk how the existing high level of interconnectedness across financial entities, financial markets and financial market infrastructures, and particularly the interdependencies of their ICT systems, could constitute a systemic vulnerability because localised cyber incidents could quickly spread from any of the approximately 22 000 Union financial entities to the entire financial system, unhindered by geographical boundaries. Serious ICT breaches that occur in the financial sector do not merely affect financial entities taken in isolation. They also smooth the way for the propagation of localised vulnerabilities across the financial transmission channels and potentially trigger adverse consequences for the stability of the Union’s financial system, such as generating liquidity runs and an overall loss of confidence and trust in financial markets.
In recent years, ICT risk has attracted the attention of international, Union and national policy makers, regulators and standard-setting bodies in an attempt to enhance digital resilience, set standards and coordinate regulatory or supervisory work. At international level, the Basel Committee on Banking Supervision, the Committee on Payments and Market Infrastructures, the Financial Stability Board, the Financial Stability Institute, as well as the G7 and G20 aim to provide competent authorities and market operators across various jurisdictions with tools to bolster the resilience of their financial systems. That work has also been driven by the need to duly consider ICT risk in the context of a highly interconnected global financial system and to seek more consistency of relevant best practices.
Despite Union and national targeted policy and legislative initiatives, ICT risk continues to pose a challenge to the operational resilience, performance and stability of the Union financial system. The reforms that followed the 2008 financial crisis primarily strengthened the financial resilience of the Union financial sector and aimed to safeguard the competitiveness and stability of the Union from economic, prudential and market conduct perspectives. Although ICT security and digital resilience are part of operational risk, they have been less in the focus of the post-financial crisis regulatory agenda and have developed in only some areas of the Union’s financial services policy and regulatory landscape, or in only a few Member States.
In its Communication of 8 March 2018 entitled ‘FinTech Action plan: For a more competitive and innovative European financial sector’, the Commission highlighted the paramount importance of making the Union financial sector more resilient, including from an operational perspective to ensure its technological safety and good functioning, its quick recovery from ICT breaches and incidents, ultimately enabling the effective and smooth provision of financial services across the whole Union, including under situations of stress, while also preserving consumer and market trust and confidence.
In April 2019, the European Supervisory Authority (European Banking Authority) (‘EBA’) established by Regulation (EU) No 1093/2010 of the European Parliament and of the Council (4), the European Supervisory Authority (European Insurance and Occupational Pensions Authority) (‘EIOPA’) established by Regulation (EU) No 1094/2010 of the European Parliament and of the Council (5) and the European Supervisory Authority (European Securities and Markets Authority) (‘ESMA’) established by Regulation (EU) No 1095/2010 of the European Parliament and of the Council (6) (known collectively as ‘European Supervisory Authorities’ or ‘ESAs’) jointly issued technical advice calling for a coherent approach to ICT risk in finance and recommending to strengthen, in a proportionate way, the digital operational resilience of the financial services industry through a sector-specific initiative of the Union.
The Union financial sector is regulated by a Single Rulebook and governed by a European system of financial supervision. Nonetheless, provisions tackling digital operational resilience and ICT security are not yet fully or consistently harmonised, despite digital operational resilience being vital for ensuring financial stability and market integrity in the digital age, and no less important than, for example, common prudential or market conduct standards. The Single Rulebook and system of supervision should therefore be developed to also cover digital operational resilience, by strengthening the mandates of competent authorities to enable them to supervise the management of ICT risk in the financial sector in order to protect the integrity and efficiency of the internal market, and to facilitate its orderly functioning.
Legislative disparities and uneven national regulatory or supervisory approaches with regard to ICT risk trigger obstacles to the functioning of the internal market in financial services, impeding the smooth exercise of the freedom of establishment and the provision of services for financial entities operating on a cross-border basis. Competition between the same type of financial entities operating in different Member States could also be distorted. This is the case, in particular, for areas where Union harmonisation has been very limited, such as digital operational resilience testing, or absent, such as the monitoring of ICT third-party risk. Disparities stemming from developments envisaged at national level could generate further obstacles to the functioning of the internal market to the detriment of market participants and financial stability.
To date, due to the ICT risk related provisions being only partially addressed at Union level, there are gaps or overlaps in important areas, such as ICT-related incident reporting and digital operational resilience testing, and inconsistencies as a result of emerging divergent national rules or cost-ineffective application of overlapping rules. This is particularly detrimental for an ICT-intensive user such as the financial sector since technology risks have no borders and the financial sector deploys its services on a wide cross-border basis within and outside the Union. Individual financial entities operating on a cross-border basis or holding several authorisations (e.g. one financial entity can have a banking, an investment firm, and a payment institution licence, each issued by a different competent authority in one or several Member States) face operational challenges in addressing ICT risk and mitigating adverse impacts of ICT incidents on their own and in a coherent cost-effective way.
As the Single Rulebook has not been accompanied by a comprehensive ICT or operational risk framework, further harmonisation of key digital operational resilience requirements for all financial entities is required. The development of ICT capabilities and overall resilience by financial entities, based on those key requirements, with a view to withstanding operational outages, would help preserve the stability and integrity of the Union financial markets and thus contribute to ensuring a high level of protection of investors and consumers in the Union. Since this Regulation aims to contribute to the smooth functioning of the internal market, it should be based on the provisions of Article 114 of the Treaty on the Functioning of the European Union (TFEU) as interpreted in accordance with the consistent case law of the Court of Justice of the European Union (Court of Justice).
This Regulation aims to consolidate and upgrade ICT risk requirements as part of the operational risk requirements that have, up to this point, been addressed separately in various Union legal acts. While those acts covered the main categories of financial risk (e.g. credit risk, market risk, counterparty credit risk and liquidity risk, market conduct risk), they did not comprehensively tackle, at the time of their adoption, all components of operational resilience. The operational risk rules, when further developed in those Union legal acts, often favoured a traditional quantitative approach to addressing risk (namely setting a capital requirement to cover ICT risk) rather than targeted qualitative rules for the protection, detection, containment, recovery and repair capabilities against ICT-related incidents, or for reporting and digital testing capabilities. Those acts were primarily meant to cover and update essential rules on prudential supervision, market integrity or conduct. By consolidating and upgrading the different rules on ICT risk, all provisions addressing digital risk in the financial sector should for the first time be brought together in a consistent manner in one single legislative act. Therefore, this Regulation fills in the gaps or remedies inconsistencies in some of the prior legal acts, including in relation to the terminology used therein, and explicitly refers to ICT risk via targeted rules on ICT risk-management capabilities, incident reporting, operational resilience testing and ICT third-party risk monitoring. This Regulation should thus also raise awareness of ICT risk and acknowledge that ICT incidents and a lack of operational resilience have the possibility to jeopardise the soundness of financial entities.
Financial entities should follow the same approach and the same principle-based rules when addressing ICT risk taking into account their size and overall risk profile, and the nature, scale and complexity of their services, activities and operations. Consistency contributes to enhancing confidence in the financial system and preserving its stability especially in times of high reliance on ICT systems, platforms and infrastructures, which entails increased digital risk. Observing basic cyber hygiene should also avoid imposing heavy costs on the economy by minimising the impact and costs of ICT disruptions.
A Regulation helps reduce regulatory complexity, fosters supervisory convergence and increases legal certainty, and also contributes to limiting compliance costs, especially for financial entities operating across borders, and to reducing competitive distortions. Therefore, the choice of a Regulation for the establishment of a common framework for the digital operational resilience of financial entities is the most appropriate way to guarantee a homogenous and coherent application of all components of ICT risk management by the Union financial sector.
Directive (EU) 2016/1148 of the European Parliament and of the Council (7) was the first horizontal cybersecurity framework enacted at Union level, applying also to three types of financial entities, namely credit institutions, trading venues and central counterparties. However, since Directive (EU) 2016/1148 set out a mechanism of identification at national level of operators of essential services, only certain credit institutions, trading venues and central counterparties that were identified by the Member States have been brought into its scope in practice, and hence required to comply with the ICT security and incident notification requirements laid down in it. Directive (EU) 2022/2555 of the European Parliament and of the Council (8) sets a uniform criterion to determine the entities falling within its scope of application (size-cap rule) while also keeping the three types of financial entities in its scope.
However, as this Regulation increases the level of harmonisation of the various digital resilience components, by introducing requirements on ICT risk management and ICT-related incident reporting that are more stringent in comparison to those laid down in the current Union financial services law, this higher level constitutes an increased harmonisation also in comparison with the requirements laid down in Directive (EU) 2022/2555. Consequently, this Regulation constitutes lex specialis with regard to Directive (EU) 2022/2555. At the same time, it is crucial to maintain a strong relationship between the financial sector and the Union horizontal cybersecurity framework as currently laid out in Directive (EU) 2022/2555 to ensure consistency with the cyber security strategies adopted by Member States and to allow financial supervisors to be made aware of cyber incidents affecting other sectors covered by that Directive.
In accordance with Article 4(2) of the Treaty on European Union and without prejudice to the judicial review by the Court of Justice, this Regulation should not affect the responsibility of Member States with regard to essential State functions concerning public security, defence and the safeguarding of national security, for example concerning the supply of information which would be contrary to the safeguarding of national security.
To enable cross-sector learning and to effectively draw on experiences of other sectors in dealing with cyber threats, the financial entities referred to in Directive (EU) 2022/2555 should remain part of the ‘ecosystem’ of that Directive (for example, Cooperation Group and computer security incident response teams (CSIRTs)). The ESAs and national competent authorities should be able to participate in the strategic policy discussions and the technical workings of the Cooperation Group under that Directive, and to exchange information and further cooperate with the single points of contact designated or established in accordance with that Directive. The competent authorities under this Regulation should also consult and cooperate with the CSIRTs. The competent authorities should also be able to request technical advice from the competent authorities designated or established in accordance with Directive (EU) 2022/2555 and establish cooperation arrangements that aim to ensure effective and fast-response coordination mechanisms.
Given the strong interlinkages between the digital resilience and the physical resilience of financial entities, a coherent approach with regard to the resilience of critical entities is necessary in this Regulation and Directive (EU) 2022/2557 of the European Parliament and the Council (9). Given that the physical resilience of financial entities is addressed in a comprehensive manner by the ICT risk management and reporting obligations covered by this Regulation, the obligations laid down in Chapters III and IV of Directive (EU) 2022/2557 should not apply to financial entities falling within the scope of that Directive.
Cloud computing service providers are one category of digital infrastructure covered by Directive (EU) 2022/2555. The Union Oversight Framework (‘Oversight Framework’) established by this Regulation applies to all critical ICT third-party service providers, including cloud computing service providers providing ICT services to financial entities, and should be considered complementary to the supervision carried out pursuant to Directive (EU) 2022/2555. Moreover, the Oversight Framework established by this Regulation should cover cloud computing service providers in the absence of a Union horizontal framework establishing a digital oversight authority.
In order to maintain full control over ICT risk, financial entities need to have comprehensive capabilities to enable a strong and effective ICT risk management, as well as specific mechanisms and policies for handling all ICT-related incidents and for reporting major ICT-related incidents. Likewise, financial entities should have policies in place for the testing of ICT systems, controls and processes, as well as for managing ICT third-party risk. The digital operational resilience baseline for financial entities should be increased while also allowing for a proportionate application of requirements for certain financial entities, particularly microenterprises, as well as financial entities subject to a simplified ICT risk management framework. To facilitate an efficient supervision of institutions for occupational retirement provision that is proportionate and addresses the need to reduce administrative burdens on the competent authorities, the relevant national supervisory arrangements in respect of such financial entities should take into account their size and overall risk profile, and the nature, scale and complexity of their services, activities and operations even when the relevant thresholds established in Article 5 of Directive (EU) 2016/2341 of the European Parliament and of the Council (10) are exceeded. In particular, supervisory activities should focus primarily on the need to address serious risks associated with the ICT risk management of a particular entity.
Competent authorities should also maintain a vigilant but proportionate approach in relation to the supervision of institutions for occupational retirement provision which, in accordance with Article 31 of Directive (EU) 2016/2341, outsource a significant part of their core business, such as asset management, actuarial calculations, accounting and data management, to service providers.
ICT-related incident reporting thresholds and taxonomies vary significantly at national level. While common ground may be achieved through the relevant work undertaken by the European Union Agency for Cybersecurity (ENISA) established by Regulation (EU) 2019/881 of the European Parliament and of the Council (11) and the Cooperation Group under Directive (EU) 2022/2555, divergent approaches on setting the thresholds and use of taxonomies still exist, or can emerge, for the remainder of financial entities. Due to those divergences, there are multiple requirements that financial entities must comply with, especially when operating across several Member States and when part of a financial group. Moreover, such divergences have the potential to hinder the creation of further uniform or centralised Union mechanisms that speed up the reporting process and support a quick and smooth exchange of information between competent authorities, which is crucial for addressing ICT risk in the event of large-scale attacks with potentially systemic consequences.
To reduce the administrative burden and potentially duplicative reporting obligations for certain financial entities, the requirement for the incident reporting pursuant to Directive (EU) 2015/2366 of the European Parliament and of the Council (12) should cease to apply to payment service providers that fall within the scope of this Regulation. Consequently, credit institutions, e-money institutions, payment institutions and account information service providers, as referred to in Article 33(1) of that Directive, should, from the date of application of this Regulation, report pursuant to this Regulation, all operational or security payment-related incidents which have been previously reported pursuant to that Directive, irrespective of whether such incidents are ICT-related.
To enable competent authorities to fulfil supervisory roles by acquiring a complete overview of the nature, frequency, significance and impact of ICT-related incidents and to enhance the exchange of information between relevant public authorities, including law enforcement authorities and resolution authorities, this Regulation should lay down a robust ICT-related incident reporting regime whereby the relevant requirements address current gaps in financial services law, and remove existing overlaps and duplications to alleviate costs. It is essential to harmonise the ICT-related incident reporting regime by requiring all financial entities to report to their competent authorities through a single streamlined framework as set out in this Regulation. In addition, the ESAs should be empowered to further specify relevant elements for the ICT-related incident reporting framework, such as taxonomy, timeframes, data sets, templates and applicable thresholds. To ensure full consistency with Directive (EU) 2022/2555, financial entities should be allowed, on a voluntary basis, to notify significant cyber threats to the relevant competent authority, when they consider that the cyber threat is of relevance to the financial system, service users or clients.
Digital operational resilience testing requirements have been developed in certain financial subsectors setting out frameworks that are not always fully aligned. This leads to a potential duplication of costs for cross-border financial entities and makes the mutual recognition of the results of digital operational resilience testing complex which, in turn, can fragment the internal market.
In addition, where no ICT testing is required, vulnerabilities remain undetected and result in exposing a financial entity to ICT risk and ultimately create a higher risk to the stability and integrity of the financial sector. Without Union intervention, digital operational resilience testing would continue to be inconsistent and would lack a system of mutual recognition of ICT testing results across different jurisdictions. In addition, as it is unlikely that other financial subsectors would adopt testing schemes on a meaningful scale, they would miss out on the potential benefits of a testing framework, in terms of revealing ICT vulnerabilities and risks, and testing defence capabilities and business continuity, which contributes to increasing the trust of customers, suppliers and business partners. To remedy those overlaps, divergences and gaps, it is necessary to lay down rules for a coordinated testing regime and thereby facilitate the mutual recognition of advanced testing for financial entities meeting the criteria set out in this Regulation.
Financial entities’ reliance on the use of ICT services is partly driven by their need to adapt to an emerging competitive digital global economy, to boost their business efficiency and to meet consumer demand. The nature and extent of such reliance has been continuously evolving in recent years, driving cost reduction in financial intermediation, enabling business expansion and scalability in the deployment of financial activities while offering a wide range of ICT tools to manage complex internal processes.
The extensive use of ICT services is evidenced by complex contractual arrangements, whereby financial entities often encounter difficulties in negotiating contractual terms that are tailored to the prudential standards or other regulatory requirements to which they are subject, or otherwise in enforcing specific rights, such as access or audit rights, even when the latter are enshrined in their contractual arrangements. Moreover, many of those contractual arrangements do not provide for sufficient safeguards allowing for the fully-fledged monitoring of subcontracting processes, thus depriving the financial entity of its ability to assess the associated risks. In addition, as ICT third-party service providers often provide standardised services to different types of clients, such contractual arrangements do not always cater adequately for the individual or specific needs of financial industry actors.
Even though Union financial services law contains certain general rules on outsourcing, monitoring of the contractual dimension is not fully anchored into Union law. In the absence of clear and bespoke Union standards applying to the contractual arrangements concluded with ICT third-party service providers, the external source of ICT risk is not comprehensively addressed. Consequently, it is necessary to set out certain key principles to guide financial entities’ management of ICT third-party risk, which are of particular importance when financial entities resort to ICT third-party service providers to support their critical or important functions. Those principles should be accompanied by a set of core contractual rights in relation to several elements in the performance and termination of contractual arrangements with a view to providing certain minimum safeguards in order to strengthen financial entities’ ability to effectively monitor all ICT risk emerging at the level of third-party service providers. Those principles are complementary to the sectoral law applicable to outsourcing.
A certain lack of homogeneity and convergence regarding the monitoring of ICT third-party risk and ICT third-party dependencies is evident today. Despite efforts to address outsourcing, such as the EBA Guidelines on outsourcing of 2019 and the ESMA Guidelines on outsourcing to cloud service providers of 2021, the broader issue of counteracting systemic risk which may be triggered by the financial sector’s exposure to a limited number of critical ICT third-party service providers is not sufficiently addressed by Union law. The lack of rules at Union level is compounded by the absence of national rules on mandates and tools that allow financial supervisors to acquire a good understanding of ICT third-party dependencies and to monitor adequately risks arising from the concentration of ICT third-party dependencies.
Taking into account the potential systemic risk entailed by increased outsourcing practices and by the ICT third-party concentration, and mindful of the insufficiency of national mechanisms in providing financial supervisors with adequate tools to quantify, qualify and redress the consequences of ICT risk occurring at critical ICT third-party service providers, it is necessary to establish an appropriate Oversight Framework allowing for a continuous monitoring of the activities of ICT third-party service providers that are critical ICT third-party service providers to financial entities, while ensuring that the confidentiality and security of customers other than financial entities is preserved. While intra-group provision of ICT services entails specific risks and benefits, it should not be automatically considered less risky than the provision of ICT services by providers outside of a financial group and should therefore be subject to the same regulatory framework. However, when ICT services are provided from within the same financial group, financial entities might have a higher level of control over intra-group providers, which ought to be taken into account in the overall risk assessment.
With ICT risk becoming more and more complex and sophisticated, good measures for the detection and prevention of ICT risk depend to a great extent on the regular sharing between financial entities of threat and vulnerability intelligence. Information sharing contributes to creating increased awareness of cyber threats. In turn, this enhances the capacity of financial entities to prevent cyber threats from becoming real ICT-related incidents and enables financial entities to more effectively contain the impact of ICT-related incidents and to recover faster. In the absence of guidance at Union level, several factors seem to have inhibited such intelligence sharing, in particular uncertainty about its compatibility with data protection, anti-trust and liability rules.
In addition, doubts about the type of information that can be shared with other market participants, or with non-supervisory authorities (such as ENISA, for analytical input, or Europol, for law enforcement purposes) lead to useful information being withheld. Therefore, the extent and quality of information sharing currently remains limited and fragmented, with relevant exchanges mostly being local (by way of national initiatives) and with no consistent Union-wide information-sharing arrangements tailored to the needs of an integrated financial system. It is therefore important to strengthen those communication channels.
Financial entities should be encouraged to exchange among themselves cyber threat information and intelligence, and to collectively leverage their individual knowledge and practical experience at strategic, tactical and operational levels with a view to enhancing their capabilities to adequately assess, monitor, defend against, and respond to cyber threats, by participating in information sharing arrangements. It is therefore necessary to enable the emergence at Union level of mechanisms for voluntary information-sharing arrangements which, when conducted in trusted environments, would help the community of the financial industry to prevent and collectively respond to cyber threats by quickly limiting the spread of ICT risk and impeding potential contagion throughout the financial channels. Those mechanisms should comply with the applicable competition law rules of the Union set out in the Communication from the Commission of 14 January 2011 entitled ‘Guidelines on the applicability of Article 101 of the Treaty on the Functioning of the European Union to horizontal cooperation agreements’, as well as with Union data protection rules, in particular Regulation (EU) 2016/679 of the European Parliament and of the Council (13). They should operate based on the use of one or more of the legal bases that are laid down in Article 6 of that Regulation, such as in the context of the processing of personal data that is necessary for the purposes of the legitimate interest pursued by the controller or by a third party, as referred to in Article 6(1), point (f), of that Regulation, as well as in the context of the processing of personal data necessary for compliance with a legal obligation to which the controller is subject, necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller, as referred to in Article 6(1), points (c) and (e), respectively, of that Regulation.
In order to maintain a high level of digital operational resilience for the whole financial sector, and at the same time to keep pace with technological developments, this Regulation should address risk stemming from all types of ICT services. To that end, the definition of ICT services in the context of this Regulation should be understood in a broad manner, encompassing digital and data services provided through ICT systems to one or more internal or external users on an ongoing basis. That definition should, for instance, include so called ‘over the top’ services, which fall within the category of electronic communications services. It should exclude only the limited category of traditional analogue telephone services qualifying as Public Switched Telephone Network (PSTN) services, landline services, Plain Old Telephone Service (POTS), or fixed-line telephone services.
Notwithstanding the broad coverage envisaged by this Regulation, the application of the digital operational resilience rules should take into account the significant differences between financial entities in terms of their size and overall risk profile. As a general principle, when distributing resources and capabilities for the implementation of the ICT risk management framework, financial entities should duly balance their ICT-related needs against their size and overall risk profile, and the nature, scale and complexity of their services, activities and operations, while competent authorities should continue to assess and review the approach of such distribution.
Account information service providers, referred to in Article 33(1) of Directive (EU) 2015/2366, are explicitly included in the scope of this Regulation, taking into account the specific nature of their activities and the risks arising therefrom. In addition, electronic money institutions and payment institutions exempted pursuant to Article 9(1) of Directive 2009/110/EC of the European Parliament and of the Council (14) and Article 32(1) of Directive (EU) 2015/2366 are included in the scope of this Regulation even if they have not been granted authorisation in accordance with Directive 2009/110/EC to issue electronic money, or if they have not been granted authorisation in accordance with Directive (EU) 2015/2366 to provide and execute payment services. However, post office giro institutions, referred to in Article 2(5), point (3), of Directive 2013/36/EU of the European Parliament and of the Council (15), are excluded from the scope of this Regulation. The competent authority for payment institutions exempted pursuant to Directive (EU) 2015/2366, electronic money institutions exempted pursuant to Directive 2009/110/EC and account information service providers as referred to in Article 33(1) of Directive (EU) 2015/2366, should be the competent authority designated in accordance with Article 22 of Directive (EU) 2015/2366.
As larger financial entities might enjoy wider resources and can swiftly deploy funds to develop governance structures and set up various corporate strategies, only financial entities that are not microenterprises in the sense of this Regulation should be required to establish more complex governance arrangements. Such entities are better equipped in particular to set up dedicated management functions for supervising arrangements with ICT third-party service providers or for dealing with crisis management, to organise their ICT risk management according to the three lines of defence model, or to set up an internal risk management and control model, and to submit their ICT risk management framework to internal audits.
Some financial entities benefit from exemptions or are subject to a very light regulatory framework under the relevant sector-specific Union law. Such financial entities include managers of alternative investment funds referred to in Article 3(2) of Directive 2011/61/EU of the European Parliament and of the Council (16), insurance and reinsurance undertakings referred to in Article 4 of Directive 2009/138/EC of the European Parliament and of the Council (17), and institutions for occupational retirement provision which operate pension schemes which together do not have more than 15 members in total. In light of those exemptions it would not be proportionate to include such financial entities in the scope of this Regulation. In addition, this Regulation acknowledges the specificities of the insurance intermediation market structure, with the result that insurance intermediaries, reinsurance intermediaries and ancillary insurance intermediaries qualifying as microenterprises or as small or medium-sized enterprises should not be subject to this Regulation.
Since the entities referred to in Article 2(5), points (4) to (23), of Directive 2013/36/EU are excluded from the scope of that Directive, Member States should consequently be able to choose to exempt from the application of this Regulation such entities located within their respective territories.
Similarly, in order to align this Regulation to the scope of Directive 2014/65/EU of the European Parliament and of the Council (18), it is also appropriate to exclude from the scope of this Regulation natural and legal persons referred to in Articles 2 and 3 of that Directive which are allowed to provide investment services without having to obtain an authorisation under Directive 2014/65/EU. However, Article 2 of Directive 2014/65/EU also excludes from the scope of that Directive entities which qualify as financial entities for the purposes of this Regulation, such as central securities depositories, collective investment undertakings or insurance and reinsurance undertakings. The exclusion from the scope of this Regulation of the persons and entities referred to in Articles 2 and 3 of that Directive should not encompass those central securities depositories, collective investment undertakings or insurance and reinsurance undertakings.
Under sector-specific Union law, some financial entities are subject to lighter requirements or exemptions for reasons associated with their size or the services they provide. That category of financial entities includes small and non-interconnected investment firms, small institutions for occupational retirement provision which may be excluded from the scope of Directive (EU) 2016/2341 under the conditions laid down in Article 5 of that Directive by the Member State concerned and operate pension schemes which together do not have more than 100 members in total, as well as institutions exempted pursuant to Directive 2013/36/EU. Therefore, in accordance with the principle of proportionality and to preserve the spirit of sector-specific Union law, it is also appropriate to subject those financial entities to a simplified ICT risk management framework under this Regulation. The proportionate character of the ICT risk management framework covering those financial entities should not be altered by the regulatory technical standards that are to be developed by the ESAs. Moreover, in accordance with the principle of proportionality, it is appropriate to also subject payment institutions referred to in Article 32(1) of Directive (EU) 2015/2366 and electronic money institutions referred to in Article 9 of Directive 2009/110/EC exempted in accordance with national law transposing those Union legal acts to a simplified ICT risk management framework under this Regulation, while payment institutions and electronic money institutions which have not been exempted in accordance with their respective national law transposing sectoral Union law should comply with the general framework laid down by this Regulation.
Similarly, financial entities which qualify as microenterprises or are subject to the simplified ICT risk management framework under this Regulation should not be required to establish a role to monitor their arrangements concluded with ICT third-party service providers on the use of ICT services; to designate a member of senior management to be responsible for overseeing the related risk exposure and relevant documentation; to assign the responsibility for managing and overseeing ICT risk to a control function and ensure an appropriate level of independence of such control function in order to avoid conflicts of interest; to document and review at least once a year the ICT risk management framework; to subject to internal audit on a regular basis the ICT risk management framework; to perform in-depth assessments after major changes in their network and information system infrastructures and processes; to regularly conduct risk analyses on legacy ICT systems; to subject the implementation of the ICT response and recovery plans to independent internal audit reviews; to have a crisis management function; to expand the testing of business continuity and response and recovery plans to capture switchover scenarios between primary ICT infrastructure and redundant facilities; to report to competent authorities, upon their request, an estimation of aggregated annual costs and losses caused by major ICT-related incidents; to maintain redundant ICT capacities; to communicate to national competent authorities implemented changes following post ICT-related incident reviews; to monitor on a continuous basis relevant technological developments; to establish a comprehensive digital operational resilience testing programme as an integral part of the ICT risk management framework provided for in this Regulation; or to adopt and regularly review a strategy on ICT third-party risk. In addition, microenterprises should only be required to assess the need to maintain such redundant ICT capacities based on their risk profile. Microenterprises should benefit from a more flexible regime as regards digital operational resilience testing programmes. When considering the type and frequency of testing to be performed, they should properly balance the objective of maintaining a high digital operational resilience, the available resources and their overall risk profile. Microenterprises and financial entities subject to the simplified ICT risk management framework under this Regulation should be exempted from the requirement to perform advanced testing of ICT tools, systems and processes based on threat-led penetration testing (TLPT), as only financial entities meeting the criteria set out in this Regulation should be required to carry out such testing. In light of their limited capabilities, microenterprises should be able to agree with the ICT third-party service provider to delegate the financial entity’s rights of access, inspection and audit to an independent third party, to be appointed by the ICT third-party service provider, provided that the financial entity is able to request, at any time, all relevant information and assurance on the ICT third-party service provider’s performance from the respective independent third party.
As only those financial entities identified for the purposes of the advanced digital resilience testing should be required to conduct threat-led penetration tests, the administrative processes and financial costs entailed in the performance of such tests should be borne by a small percentage of financial entities.
To ensure full alignment and overall consistency between financial entities’ business strategies, on the one hand, and the conduct of ICT risk management, on the other hand, the financial entities’ management bodies should be required to maintain a pivotal and active role in steering and adapting the ICT risk management framework and the overall digital operational resilience strategy. The approach to be taken by management bodies should not only focus on the means of ensuring the resilience of the ICT systems, but should also cover people and processes through a set of policies which cultivate, at each corporate layer, and for all staff, a strong sense of awareness about cyber risks and a commitment to observe a strict cyber hygiene at all levels. The ultimate responsibility of the management body in managing a financial entity’s ICT risk should be an overarching principle of that comprehensive approach, further translated into the continuous engagement of the management body in the control of the monitoring of the ICT risk management.
Moreover, the principle of the management body’s full and ultimate responsibility for the management of the ICT risk of the financial entity goes hand in hand with the need to secure a level of ICT-related investments and an overall budget for the financial entity that would enable the financial entity to achieve a high level of digital operational resilience.
Inspired by relevant international, national and industry best practices, guidelines, recommendations and approaches to the management of cyber risk, this Regulation promotes a set of principles that facilitate the overall structure of ICT risk management. Consequently, as long as the main capabilities which financial entities put in place address the various functions in the ICT risk management (identification, protection and prevention, detection, response and recovery, learning and evolving and communication) set out in this Regulation, financial entities should remain free to use ICT risk management models that are differently framed or categorised.
To keep pace with an evolving cyber threat landscape, financial entities should maintain updated ICT systems that are reliable and capable, not only for guaranteeing the processing of data required for their services, but also for ensuring sufficient technological resilience to allow them to deal adequately with additional processing needs due to stressed market conditions or other adverse situations.
Efficient business continuity and recovery plans are necessary to allow financial entities to promptly resolve ICT-related incidents, in particular cyber-attacks, by limiting damage and giving priority to the resumption of activities and recovery actions in accordance with their back-up policies. However, such resumption should in no way jeopardise the integrity and security of the network and information systems or the availability, authenticity, integrity or confidentiality of data.
While this Regulation allows financial entities to determine their recovery time and recovery point objectives in a flexible manner and hence to set such objectives by fully taking into account the nature and the criticality of the relevant functions and any specific business needs, it should nevertheless require them to carry out an assessment of the potential overall impact on market efficiency when determining such objectives.
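Purely by way of illustration, the sketch below shows recovery time and recovery point objectives assigned per function according to its criticality, together with a simple plausibility check. The function names and figures are invented placeholders; the Regulation leaves the concrete objectives to each financial entity, subject to an assessment of the potential overall impact on market efficiency.

```python
# Illustrative sketch only: recovery objectives mapped per function by criticality.
# The figures are invented; real objectives are set by each financial entity.
recovery_objectives = {
    # function name: (criticality, RTO in minutes, RPO in minutes)
    "retail_payments": ("critical", 60, 5),
    "securities_settlement": ("critical", 480, 120),
    "client_reporting": ("important", 240, 60),
    "internal_hr_portal": ("other", 1_440, 720),
}


def check_objectives(objectives: dict) -> list[str]:
    """Flag critical functions whose objectives look loose given their criticality."""
    issues = []
    for name, (criticality, rto, rpo) in objectives.items():
        if criticality == "critical" and (rto > 120 or rpo > 15):
            issues.append(f"{name}: objectives may not reflect criticality")
    return issues


print(check_objectives(recovery_objectives))
# ['securities_settlement: objectives may not reflect criticality']
```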
The perpetrators of cyber-attacks tend to pursue financial gains directly at the source, thus exposing financial entities to significant consequences. To prevent ICT systems from losing integrity or becoming unavailable, and hence to avoid data breaches and damage to physical ICT infrastructure, the reporting of major ICT-related incidents by financial entities should be significantly improved and streamlined. ICT-related incident reporting should be harmonised through the introduction of a requirement for all financial entities to report directly to their relevant competent authorities. Where a financial entity is subject to supervision by more than one national competent authority, Member States should designate a single competent authority as the addressee of such reporting. Credit institutions classified as significant in accordance with Article 6(4) of Council Regulation (EU) No 1024/2013 (19) should submit such reporting to the national competent authorities, which should subsequently transmit the report to the European Central Bank (ECB).
The direct reporting should enable financial supervisors to have immediate access to information about major ICT-related incidents. Financial supervisors should in turn pass on details of major ICT-related incidents to public non-financial authorities (such as competent authorities and single points of contact under Directive (EU) 2022/2555, national data protection authorities, and law enforcement authorities for major ICT-related incidents of a criminal nature) in order to enhance such authorities’ awareness of such incidents and, in the case of CSIRTs, to facilitate prompt assistance that may be given to financial entities, as appropriate. Member States should, in addition, be able to determine that financial entities themselves should provide such information to public authorities outside the financial services area. Those information flows should allow financial entities to swiftly benefit from any relevant technical input, advice about remedies, and subsequent follow-up from such authorities. The information on major ICT-related incidents should be mutually channelled: financial supervisors should provide all necessary feedback or guidance to the financial entity, while the ESAs should share anonymised data on cyber threats and vulnerabilities relating to an incident, to aid wider collective defence.
While all financial entities should be required to carry out incident reporting, that requirement is not expected to affect all of them in the same manner. Indeed, relevant materiality thresholds, as well as reporting timelines, should be duly adjusted, in the context of delegated acts based on the regulatory technical standards to be developed by the ESAs, with a view to covering only major ICT-related incidents. In addition, the specificities of financial entities should be taken into account when setting timelines for reporting obligations.
This Regulation should require credit institutions, payment institutions, account information service providers and electronic money institutions to report all operational or security payment-related incidents — previously reported under Directive (EU) 2015/2366 — irrespective of the ICT nature of the incident.
The ESAs should be tasked with assessing the feasibility and conditions for a possible centralisation of ICT-related incident reports at Union level. Such centralisation could consist of a single EU Hub for major ICT-related incident reporting either directly receiving relevant reports and automatically notifying national competent authorities, or merely centralising relevant reports forwarded by the national competent authorities and thus fulfilling a coordination role. The ESAs should be tasked with preparing, in consultation with the ECB and ENISA, a joint report exploring the feasibility of setting up a single EU Hub.
In order to achieve a high level of digital operational resilience, and in line with both the relevant international standards (e.g. the G7 Fundamental Elements for Threat-Led Penetration Testing) and with the frameworks applied in the Union, such as the TIBER-EU, financial entities should regularly test their ICT systems and staff having ICT-related responsibilities with regard to the effectiveness of their preventive, detection, response and recovery capabilities, to uncover and address potential ICT vulnerabilities. To reflect differences that exist across, and within, the various financial subsectors as regards financial entities’ level of cybersecurity preparedness, testing should include a wide variety of tools and actions, ranging from the assessment of basic requirements (e.g. vulnerability assessments and scans, open source analyses, network security assessments, gap analyses, physical security reviews, questionnaires and scanning software solutions, source code reviews where feasible, scenario-based tests, compatibility testing, performance testing or end-to-end testing) to more advanced testing by means of TLPT. Such advanced testing should be required only of financial entities that are mature enough from an ICT perspective to reasonably carry it out. The digital operational resilience testing required by this Regulation should thus be more demanding for those financial entities meeting the criteria set out in this Regulation (for example, large, systemic and ICT-mature credit institutions, stock exchanges, central securities depositories and central counterparties) than for other financial entities. At the same time, the digital operational resilience testing by means of TLPT should be more relevant for financial entities operating in core financial services subsectors and playing a systemic role (for example, payments, banking, and clearing and settlement), and less relevant for other subsectors (for example, asset managers and credit rating agencies).
Financial entities involved in cross-border activities and exercising the freedoms of establishment, or of provision of services within the Union, should comply with a single set of advanced testing requirements (i.e. TLPT) in their home Member State, which should include the ICT infrastructures in all jurisdictions where the cross-border financial group operates within the Union, thus allowing such cross-border financial groups to incur related ICT testing costs in one jurisdiction only.
To draw on the expertise already acquired by certain competent authorities, in particular with regard to implementing the TIBER-EU framework, this Regulation should allow Member States to designate a single public authority as responsible in the financial sector, at national level, for all TLPT matters, or allow competent authorities, in the absence of such designation, to delegate the exercise of TLPT-related tasks to another national financial competent authority.
Since this Regulation does not require financial entities to cover all critical or important functions in one single threat-led penetration test, financial entities should be free to determine which and how many critical or important functions should be included in the scope of such a test.
Pooled testing within the meaning of this Regulation — involving the participation of several financial entities in a TLPT and for which an ICT third-party service provider can directly enter into contractual arrangements with an external tester — should be allowed only where the quality or security of services delivered by the ICT third-party service provider to customers that are entities falling outside the scope of this Regulation, or the confidentiality of the data related to such services, are reasonably expected to be adversely impacted. Pooled testing should also be subject to safeguards (direction by one designated financial entity, calibration of the number of participating financial entities) to ensure a rigorous testing exercise for the financial entities involved which meets the objectives of the TLPT pursuant to this Regulation.
In order to take advantage of internal resources available at corporate level, this Regulation should allow the use of internal testers for the purposes of carrying out TLPT, provided there is supervisory approval, no conflicts of interest, and periodical alternation of the use of internal and external testers (every three tests), while also requiring the provider of the threat intelligence in the TLPT to always be external to the financial entity. The responsibility for conducting TLPT should remain fully with the financial entity. Attestations provided by authorities should be solely for the purpose of mutual recognition and should not preclude any follow-up action needed to address the ICT risk to which the financial entity is exposed, nor should they be seen as a supervisory endorsement of a financial entity’s ICT risk management and mitigation capabilities.
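The rotation principle can be pictured with a small, purely illustrative check. The rule encoded below (an external tester at least every three tests, subject to supervisory approval and absence of conflicts of interest, with threat intelligence always provided externally) is a simplified reading of this recital, not the binding conditions of the Regulation or its technical standards.

```python
# Illustrative sketch only: deciding whether the next TLPT may rely on internal
# testers, under a simplified reading of the "every three tests" rotation rule.
def may_use_internal_testers(previous_tests: list[str],
                             supervisory_approval: bool,
                             conflict_of_interest: bool) -> bool:
    if not supervisory_approval or conflict_of_interest:
        return False
    # Require an external tester if the last two TLPTs already used internal ones,
    # so that external testers are used at least every three tests.
    return previous_tests[-2:] != ["internal", "internal"]


# Threat intelligence providers must always be external to the financial entity;
# only the testers themselves are subject to the rotation sketched here.
assert may_use_internal_testers(["external", "internal"], True, False)
assert not may_use_internal_testers(["internal", "internal"], True, False)
```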
To ensure a sound monitoring of ICT third-party risk in the financial sector, it is necessary to lay down a set of principle-based rules to guide financial entities when monitoring risk arising in the context of functions outsourced to ICT third-party service providers, particularly for ICT services supporting critical or important functions, as well as more generally in the context of all ICT third-party dependencies.
To address the complexity of the various sources of ICT risk, while taking into account the multitude and diversity of providers of technological solutions which enable a smooth provision of financial services, this Regulation should cover a wide range of ICT third-party service providers, including providers of cloud computing services, software, data analytics services and providers of data centre services. Similarly, since financial entities should effectively and coherently identify and manage all types of risk, including in the context of ICT services procured within a financial group, it should be clarified that undertakings which are part of a financial group and provide ICT services predominantly to their parent undertaking, or to subsidiaries or branches of their parent undertaking, as well as financial entities providing ICT services to other financial entities, should also be considered as ICT third-party service providers under this Regulation. Lastly, in light of the evolving payment services market becoming increasingly dependent on complex technical solutions, and in view of emerging types of payment services and payment-related solutions, participants in the payment services ecosystem, providing payment-processing activities, or operating payment infrastructures, should also be considered to be ICT third-party service providers under this Regulation, with the exception of central banks when operating payment or securities settlement systems, and public authorities when providing ICT related services in the context of fulfilling State functions.
A financial entity should at all times remain fully responsible for complying with its obligations set out in this Regulation. Financial entities should apply a proportionate approach to the monitoring of risks emerging at the level of the ICT third-party service providers, by duly considering the nature, scale, complexity and importance of their ICT-related dependencies, the criticality or importance of the services, processes or functions subject to the contractual arrangements and, ultimately, on the basis of a careful assessment of any potential impact on the continuity and quality of financial services at individual and at group level, as appropriate.
The conduct of such monitoring should follow a strategic approach to ICT third-party risk formalised through the adoption by the financial entity’s management body of a dedicated ICT third-party risk strategy, rooted in a continuous screening of all ICT third-party dependencies. To enhance supervisory awareness of ICT third-party dependencies, and with a view to further supporting the work in the context of the Oversight Framework established by this Regulation, all financial entities should be required to maintain a register of information with all contractual arrangements about the use of ICT services provided by ICT third-party service providers. Financial supervisors should be able to request the full register, or to ask for specific sections thereof, and thus to obtain essential information for acquiring a broader understanding of the ICT dependencies of financial entities.
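As a purely illustrative aid, the sketch below shows a minimal register entry and the kind of extract a supervisor might request. The field names are hypothetical; the actual templates for the register of information are set out in technical standards developed by the ESAs.

```python
# Illustrative sketch only: a minimal register of information on contractual
# arrangements for the use of ICT services, and a supervisory-style query.
from dataclasses import dataclass


@dataclass
class RegisterEntry:
    provider: str
    ict_service: str            # e.g. "cloud hosting", "data analytics"
    supported_function: str
    critical_or_important: bool
    subcontractors: tuple[str, ...] = ()


register = [
    RegisterEntry("ExampleCloud", "cloud hosting", "retail payments", True,
                  subcontractors=("ExampleDC",)),
    RegisterEntry("AnalyticsCo", "data analytics", "marketing", False),
]

# Extract the section covering critical or important functions upon request.
critical_section = [entry for entry in register if entry.critical_or_important]
print([entry.provider for entry in critical_section])  # ['ExampleCloud']
```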
A thorough pre-contracting analysis should underpin and precede the formal conclusion of contractual arrangements, in particular by focusing on elements such as the criticality or importance of the services supported by the envisaged ICT contract, the necessary supervisory approvals or other conditions, the possible concentration risk entailed, as well as applying due diligence in the process of selection and assessment of ICT third-party service providers and assessing potential conflicts of interest. For contractual arrangements concerning critical or important functions, financial entities should take into consideration the use by ICT third-party service providers of the most up-to-date and highest information security standards. Termination of contractual arrangements could be prompted at least by a series of circumstances showing shortfalls at the ICT third-party service provider level, in particular significant breaches of laws or contractual terms, circumstances revealing a potential alteration of the performance of the functions provided for in the contractual arrangements, evidence of weaknesses of the ICT third-party service provider in its overall ICT risk management, or circumstances indicating the inability of the relevant competent authority to effectively supervise the financial entity.
To address the systemic impact of ICT third-party concentration risk, this Regulation promotes a balanced solution by means of taking a flexible and gradual approach to such concentration risk since the imposition of any rigid caps or strict limitations might hinder the conduct of business and restrain the contractual freedom. Financial entities should thoroughly assess their envisaged contractual arrangements to identify the likelihood of such risk emerging, including by means of in-depth analyses of subcontracting arrangements, in particular when concluded with ICT third-party service providers established in a third country. At this stage, and with a view to striking a fair balance between the imperative of preserving contractual freedom and that of guaranteeing financial stability, it is not considered appropriate to set out rules on strict caps and limits to ICT third-party exposures. In the context of the Oversight Framework, a Lead Overseer, appointed pursuant to this Regulation, should, in respect to critical ICT third-party service providers, pay particular attention to fully grasp the magnitude of interdependences, discover specific instances where a high degree of concentration of critical ICT third-party service providers in the Union is likely to put a strain on the Union financial system’s stability and integrity and maintain a dialogue with critical ICT third-party service providers where that specific risk is identified.
To evaluate and monitor on a regular basis the ability of an ICT third-party service provider to securely provide services to a financial entity without adverse effects on a financial entity’s digital operational resilience, several key contractual elements with ICT third-party service providers should be harmonised. Such harmonisation should cover minimum areas which are crucial for enabling a full monitoring by the financial entity of the risks that could emerge from the ICT third-party service provider, from the perspective of a financial entity’s need to secure its digital resilience because it is deeply dependent on the stability, functionality, availability and security of the ICT services received.
When renegotiating contractual arrangements to seek alignment with the requirements of this Regulation, financial entities and ICT third-party service providers should ensure the coverage of the key contractual provisions as provided for in this Regulation.
The definition of ‘critical or important function’ provided for in this Regulation encompasses the ‘critical functions’ as defined in Article 2(1), point (35), of Directive 2014/59/EU of the European Parliament and of the Council (20). Accordingly, functions deemed to be critical pursuant to Directive 2014/59/EU are included in the definition of critical functions within the meaning of this Regulation.
Irrespective of the criticality or importance of the function supported by the ICT services, contractual arrangements should, in particular, provide for a specification of the complete descriptions of functions and services, of the locations where such functions are provided and where data is to be processed, as well as an indication of service level descriptions. Other essential elements to enable a financial entity’s monitoring of ICT third-party risk are: contractual provisions specifying how the accessibility, availability, integrity, security and protection of personal data are ensured by the ICT third-party service provider, provisions laying down the relevant guarantees for enabling the access, recovery and return of data in the case of insolvency, resolution or discontinuation of the business operations of the ICT third-party service provider, as well as provisions requiring the ICT third-party service provider to provide assistance in case of ICT incidents in connection with the services provided, at no additional cost or at a cost determined ex-ante; provisions on the obligation of the ICT third-party service provider to fully cooperate with the competent authorities and resolution authorities of the financial entity; and provisions on termination rights and related minimum notice periods for the termination of the contractual arrangements, in accordance with the expectations of competent authorities and resolution authorities.
In addition to such contractual provisions, and with a view to ensuring that financial entities remain in full control of all developments occurring at third-party level which may impair their ICT security, the contracts for the provision of ICT services supporting critical or important functions should also provide for the following: the specification of the full service level descriptions, with precise quantitative and qualitative performance targets, to enable without undue delay appropriate corrective actions when the agreed service levels are not met; the relevant notice periods and reporting obligations of the ICT third-party service provider in the event of developments with a potential material impact on the ICT third-party service provider’s ability to effectively provide their respective ICT services; a requirement upon the ICT third-party service provider to implement and test business contingency plans and have ICT security measures, tools and policies allowing for the secure provision of services, and to participate and fully cooperate in the TLPT carried out by the financial entity.
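To make the monitoring dimension concrete, the following illustrative sketch compares measured service levels against the quantitative performance targets agreed in the contractual arrangement, so that corrective action can be triggered without undue delay. The metric names and values are invented for the example; real targets are whatever the parties agree in line with this Regulation.

```python
# Illustrative sketch only: checking observed service levels against contractual
# performance targets for an ICT service supporting a critical or important function.
contract_targets = {
    "availability_pct": 99.9,          # minimum monthly availability
    "incident_response_minutes": 30,   # maximum time to acknowledge an incident
}

measured = {
    "availability_pct": 99.4,
    "incident_response_minutes": 20,
}


def service_level_breaches(targets: dict, observed: dict) -> list[str]:
    """Return a list of breaches that should trigger corrective action."""
    breaches = []
    if observed["availability_pct"] < targets["availability_pct"]:
        breaches.append("availability below contractual target")
    if observed["incident_response_minutes"] > targets["incident_response_minutes"]:
        breaches.append("incident response slower than contractual target")
    return breaches


print(service_level_breaches(contract_targets, measured))
# ['availability below contractual target']
```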
Contracts for the provision of ICT services supporting critical or important functions should also contain provisions enabling the rights of access, inspection and audit by the financial entity, or an appointed third party, and the right to take copies as crucial instruments in the financial entities’ ongoing monitoring of the ICT third-party service provider’s performance, coupled with the service provider’s full cooperation during inspections. Similarly, the competent authority of the financial entity should have the right, based on notices, to inspect and audit the ICT third-party service provider, subject to the protection of confidential information.
Such contractual arrangements should also provide for dedicated exit strategies to enable, in particular, mandatory transition periods during which ICT third-party service providers should continue providing the relevant services with a view to reducing the risk of disruptions at the level of the financial entity, or to allow the latter effectively to switch to the use of other ICT third-party service providers or, alternatively, to change to in-house solutions, consistent with the complexity of the provided ICT service. Moreover, financial entities within the scope of Directive 2014/59/EU should ensure that the relevant contracts for ICT services are robust and fully enforceable in the event of resolution of those financial entities. Therefore, in line with the expectations of the resolution authorities, those financial entities should ensure that the relevant contracts for ICT services are resolution resilient. As long as they continue meeting their payment obligations, those financial entities should ensure, among other requirements, that the relevant contracts for ICT services contain clauses for non-termination, non-suspension and non-modification on grounds of restructuring or resolution.
Moreover, the voluntary use of standard contractual clauses developed by public authorities or Union institutions, in particular the use of contractual clauses developed by the Commission for cloud computing services could provide further comfort to the financial entities and ICT third-party service providers, by enhancing their level of legal certainty regarding the use of cloud computing services in the financial sector, in full alignment with the requirements and expectations set out by the Union financial services law. The development of standard contractual clauses builds on measures already envisaged in the 2018 Fintech Action Plan that announced the Commission’s intention to encourage and facilitate the development of standard contractual clauses for the use of cloud computing services outsourcing by financial entities, drawing on cross-sectorial cloud computing services stakeholders’ efforts, which the Commission has facilitated with the help of the financial sector’s involvement.
With a view to promoting convergence and efficiency in relation to supervisory approaches when addressing ICT third-party risk in the financial sector, as well as to strengthening the digital operational resilience of financial entities which rely on critical ICT third-party service providers for the provision of ICT services that support the supply of financial services, and thereby to contributing to the preservation of the Union’s financial system stability and the integrity of the internal market for financial services, critical ICT third-party service providers should be subject to a Union Oversight Framework. While the set-up of the Oversight Framework is justified by the added value of taking action at Union level and by virtue of the inherent role and specificities of the use of ICT services in the provision of financial services, it should be recalled, at the same time, that this solution appears suitable only in the context of this Regulation specifically dealing with digital operational resilience in the financial sector. However, such Oversight Framework should not be regarded as a new model for Union supervision in other areas of financial services and activities.
The Oversight Framework should apply only to critical ICT third-party service providers. There should therefore be a designation mechanism to take into account the dimension and nature of the financial sector’s reliance on such ICT third-party service providers. That mechanism should involve a set of quantitative and qualitative criteria to set the criticality parameters as a basis for inclusion in the Oversight Framework. In order to ensure the accuracy of that assessment, and regardless of the corporate structure of the ICT third-party service provider, such criteria should, in the case of an ICT third-party service provider that is part of a wider group, take into consideration the entire ICT third-party service provider’s group structure. On the one hand, critical ICT third-party service providers which are not automatically designated by virtue of the application of those criteria should have the possibility to opt in to the Oversight Framework on a voluntary basis; on the other hand, ICT third-party service providers that are already subject to oversight mechanism frameworks supporting the fulfilment of the tasks of the European System of Central Banks as referred to in Article 127(2) TFEU should be exempted.
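The designation logic can be illustrated, in heavily simplified form, by the sketch below. The criteria and the threshold used are invented placeholders; the actual criticality criteria are laid down in the Regulation and further specified in a delegated act.

```python
# Illustrative sketch only: a rough screening of an ICT third-party service
# provider against quantitative and qualitative criticality indicators.
# Criteria names and the scale threshold are invented for the example.
def criticality_indicators(entities_served: int,
                           serves_systemic_entities: bool,
                           supports_critical_functions: bool,
                           easily_substitutable: bool) -> list[str]:
    indicators = []
    if entities_served >= 100:                 # hypothetical scale threshold
        indicators.append("large share of financial entities rely on the provider")
    if serves_systemic_entities:
        indicators.append("systemically important clients")
    if supports_critical_functions:
        indicators.append("supports critical or important functions")
    if not easily_substitutable:
        indicators.append("limited substitutability")
    return indicators


flags = criticality_indicators(250, True, True, easily_substitutable=False)
if flags:
    print("Candidate for assessment under the designation mechanism:", flags)
```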
Similarly, financial entities providing ICT services to other financial entities, while belonging to the category of ICT third-party service providers under this Regulation, should also be exempted from the Oversight Framework since they are already subject to supervisory mechanisms established by the relevant Union financial services law. Where applicable, competent authorities should take into account, in the context of their supervisory activities, the ICT risk posed to financial entities by financial entities providing ICT services. Likewise, due to the existing risk monitoring mechanisms at group level, the same exemption should be introduced for ICT third-party service providers delivering services predominantly to the entities of their own group. ICT third-party service providers providing ICT services solely in one Member State to financial entities that are active only in that Member State should also be exempted from the designation mechanism because of their limited activities and lack of cross-border impact.
The digital transformation experienced in financial services has brought about an unprecedented level of use of, and reliance upon, ICT services. Since it has become inconceivable to provide financial services without the use of cloud computing services, software solutions and data-related services, the Union financial ecosystem has become intrinsically co-dependent on certain ICT services provided by ICT service suppliers. Some of those suppliers, innovators in developing and applying ICT-based technologies, play a significant role in the delivery of financial services, or have become integrated into the financial services value chain. They have thus become critical to the stability and integrity of the Union financial system. This widespread reliance on services supplied by critical ICT third-party service providers, combined with the interdependence of the information systems of various market operators, creates a direct, and potentially severe, risk to the Union financial services system and to the continuity of delivery of financial services if critical ICT third-party service providers were to be affected by operational disruptions or major cyber incidents. Cyber incidents have a distinctive ability to multiply and propagate throughout the financial system at a considerably faster pace than other types of risk monitored in the financial sector and can extend across sectors and beyond geographical borders. They have the potential to evolve into a systemic crisis, where trust in the financial system has been eroded due to the disruption of functions supporting the real economy, or to substantial financial losses, reaching a level which the financial system is unable to withstand, or which requires the deployment of heavy shock absorption measures. To prevent those scenarios from taking place and thereby endangering the financial stability and integrity of the Union, it is essential to provide for the convergence of supervisory practices relating to ICT third-party risk in finance, in particular through new rules enabling the Union oversight of critical ICT third-party service providers.
The Oversight Framework largely depends on the degree of collaboration between the Lead Overseer and the critical ICT third-party service provider delivering to financial entities services affecting the supply of financial services. Successful oversight is predicated, inter alia, upon the ability of the Lead Overseer to effectively conduct monitoring missions and inspections to assess the rules, controls and processes used by the critical ICT third-party service providers, as well as to assess the potential cumulative impact of their activities on financial stability and the integrity of the financial system. At the same time, it is crucial that critical ICT third-party service providers follow the Lead Overseer’s recommendations and address its concerns. Since a lack of cooperation by a critical ICT third-party service provider providing services that affect the supply of financial services, such as the refusal to grant access to its premises or to submit information, would ultimately deprive the Lead Overseer of its essential tools in appraising ICT third-party risk, and could adversely impact the financial stability and the integrity of the financial system, it is necessary to also provide for a commensurate sanctioning regime.
Against this background, the need of the Lead Overseer to impose penalty payments to compel critical ICT third-party service providers to comply with the transparency and access-related obligations set out in this Regulation should not be jeopardised by difficulties raised by the enforcement of those penalty payments in relation to critical ICT third-party service providers established in third countries. In order to ensure the enforceability of such penalties, and to allow a swift roll out of procedures upholding the critical ICT third-party service providers’ rights of defence in the context of the designation mechanism and the issuance of recommendations, those critical ICT third-party service providers, providing services to financial entities that affect the supply of financial services, should be required to maintain an adequate business presence in the Union. Due to the nature of the oversight, and the absence of comparable arrangements in other jurisdictions, there are no suitable alternative mechanisms ensuring this objective by way of effective cooperation with financial supervisors in third countries in relation to the monitoring of the impact of digital operational risks posed by systemic ICT third-party service providers, qualifying as critical ICT third-party service providers established in third countries. Therefore, in order to continue its provision of ICT services to financial entities in the Union, an ICT third-party service provider established in a third country which has been designated as critical in accordance with this Regulation should undertake, within 12 months of such designation, all necessary arrangements to ensure its incorporation within the Union, by means of establishing a subsidiary, as defined throughout the Union acquis, namely in Directive 2013/34/EU of the European Parliament and of the Council (21).
The requirement to set up a subsidiary in the Union should not prevent the critical ICT third-party service provider from supplying ICT services and related technical support from facilities and infrastructure located outside the Union. This Regulation does not impose a data localisation obligation as it does not require data storage or processing to be undertaken in the Union.
Critical ICT third-party service providers should be able to provide ICT services from anywhere in the world, not necessarily or not only from premises located in the Union. Oversight activities should be first conducted on premises located in the Union and by interacting with entities located in the Union, including the subsidiaries established by critical ICT third-party service providers pursuant to this Regulation. However, such actions within the Union might be insufficient to allow the Lead Overseer to fully and effectively perform its duties under this Regulation. The Lead Overseer should therefore also be able to exercise its relevant oversight powers in third countries. Exercising those powers in third countries should allow the Lead Overseer to examine the facilities from which the ICT services or the technical support services are actually provided or managed by the critical ICT third-party service provider, and should give the Lead Overseer a comprehensive and operational understanding of the ICT risk management of the critical ICT third-party service provider. The possibility for the Lead Overseer, as a Union agency, to exercise powers outside the territory of the Union should be duly framed by relevant conditions, in particular the consent of the critical ICT third-party service provider concerned. Similarly, the relevant authorities of the third country should be informed of, and not have objected to, the exercise on their own territory of the activities of the Lead Overseer. However, in order to ensure efficient implementation, and without prejudice to the respective competences of the Union institutions and the Member States, such powers also need to be fully anchored in the conclusion of administrative cooperation arrangements with the relevant authorities of the third country concerned. This Regulation should therefore enable the ESAs to conclude administrative cooperation arrangements with the relevant authorities of third countries, which should not otherwise create legal obligations in respect of the Union and its Member States.
To facilitate communication with the Lead Overseer and to ensure adequate representation, critical ICT third-party service providers which are part of a group should designate one legal person as their coordination point.
The Oversight Framework should be without prejudice to Member States’ competence to conduct their own oversight or monitoring missions in respect to ICT third-party service providers which are not designated as critical under this Regulation, but which are regarded as important at national level.
To leverage the multi-layered institutional architecture in the financial services area, the Joint Committee of the ESAs should continue to ensure overall cross-sectoral coordination in relation to all matters pertaining to ICT risk, in accordance with its tasks on cybersecurity. It should be supported by a new Subcommittee (the ‘Oversight Forum’) carrying out preparatory work both for the individual decisions addressed to critical ICT third-party service providers, and for the issuing of collective recommendations, in particular in relation to benchmarking the oversight programmes for critical ICT third-party service providers, and identifying best practices for addressing ICT concentration risk issues.
To ensure that critical ICT third-party service providers are appropriately and effectively overseen on a Union level, this Regulation provides that any of the three ESAs could be designated as a Lead Overseer. The individual assignment of a critical ICT third-party service provider to one of the three ESAs should result from an assessment of the preponderance of financial entities operating in the financial sectors for which that ESA has responsibilities. This approach should lead to a balanced allocation of tasks and responsibilities between the three ESAs, in the context of exercising the oversight functions, and should make the best use of the human resources and technical expertise available in each of the three ESAs.
Lead Overseers should be granted the necessary powers to conduct investigations, to carry out onsite and offsite inspections at the premises and locations of critical ICT third-party service providers and to obtain complete and updated information. Those powers should enable the Lead Overseer to acquire real insight into the type, dimension and impact of the ICT third-party risk posed to financial entities and ultimately to the Union’s financial system. Entrusting the ESAs with the lead oversight role is a prerequisite for understanding and addressing the systemic dimension of ICT risk in finance. The impact of critical ICT third-party service providers on the Union financial sector and the potential issues caused by the ICT concentration risk entailed call for taking a collective approach at Union level. The simultaneous carrying out of multiple audits and access rights, performed separately by numerous competent authorities, with little or no coordination among them, would prevent financial supervisors from obtaining a complete and comprehensive overview of ICT third-party risk in the Union, while also creating redundancy, burden and complexity for critical ICT third-party service providers if they were subject to numerous monitoring and inspection requests.
Due to the significant impact of being designated as critical, this Regulation should ensure that the rights of critical ICT third-party service providers are observed throughout the implementation of the Oversight Framework. Prior to being designated as critical, such providers should, for example, have the right to submit to the Lead Overseer a reasoned statement containing any relevant information for the purposes of the assessment related to their designation. Since the Lead Overseer should be empowered to submit recommendations on ICT risk matters and suitable remedies thereto, which include the power to oppose certain contractual arrangements ultimately affecting the stability of the financial entity or the financial system, critical ICT third-party service providers should also be given the opportunity to provide, prior to the finalisation of those recommendations, explanations regarding the expected impact of the solutions, envisaged in the recommendations, on customers that are entities falling outside the scope of this Regulation and to formulate solutions to mitigate risks. Critical ICT third-party service providers disagreeing with the recommendations should submit a reasoned explanation of their intention not to endorse the recommendation. Where such reasoned explanation is not submitted or where it is considered to be insufficient, the Lead Overseer should issue a public notice summarily describing the matter of non-compliance.
Competent authorities should duly include the task of verifying substantive compliance with recommendations issued by the Lead Overseer in their functions with regard to prudential supervision of financial entities. Competent authorities should be able to require financial entities to take additional measures to address the risks identified in the Lead Overseer’s recommendations, and should, in due course, issue notifications to that effect. Where the Lead Overseer addresses recommendations to critical ICT third-party service providers that are supervised under Directive (EU) 2022/2555, the competent authorities should be able, on a voluntary basis and before adopting additional measures, to consult the competent authorities under that Directive in order to foster a coordinated approach to dealing with the critical ICT third-party service providers in question.
The exercise of the oversight should be guided by three operational principles seeking to ensure: (a) close coordination among the ESAs in their Lead Overseer roles, through a joint oversight network (JON), (b) consistency with the framework established by Directive (EU) 2022/2555 (through a voluntary consultation of bodies under that Directive to avoid duplication of measures directed at critical ICT third-party service providers), and (c) applying diligence to minimise the potential risk of disruption to services provided by the critical ICT third-party service providers to customers that are entities falling outside the scope of this Regulation.
The Oversight Framework should not replace, or in any way or for any part substitute for, the requirement for financial entities to manage themselves the risks entailed by the use of ICT third-party service providers, including their obligation to maintain an ongoing monitoring of contractual arrangements concluded with critical ICT third-party service providers. Similarly, the Oversight Framework should not affect the full responsibility of financial entities for complying with, and discharging, all the legal obligations laid down in this Regulation and in the relevant financial services law.
To avoid duplications and overlaps, competent authorities should refrain from taking individually any measures aiming to monitor the critical ICT third-party service provider’s risks and should, in that respect, rely on the relevant Lead Overseer’s assessment. Any measures should in any case be coordinated and agreed in advance with the Lead Overseer in the context of the exercise of tasks in the Oversight Framework.
To promote convergence at international level as regards the use of best practices in the review and monitoring of ICT third-party service providers’ digital risk-management, the ESAs should be encouraged to conclude cooperation arrangements with relevant supervisory and regulatory third-country authorities.
To leverage the specific competences, technical skills and expertise of staff specialising in operational and ICT risk within the competent authorities, the three ESAs and, on a voluntary basis, the competent authorities under Directive (EU) 2022/2555, the Lead Overseer should draw on national supervisory capabilities and knowledge and set up dedicated examination teams for each critical ICT third-party service provider, pooling multidisciplinary teams in support of the preparation and execution of oversight activities, including general investigations and inspections of critical ICT third-party service providers, as well as for any necessary follow-up thereto.
Whereas costs resulting from oversight tasks would be fully funded from fees levied on critical ICT third-party service providers, the ESAs are, however, likely to incur, before the start of the Oversight Framework, costs for the implementation of dedicated ICT systems supporting the upcoming oversight, since dedicated ICT systems would need to be developed and deployed beforehand. This Regulation therefore provides for a hybrid funding model, whereby the Oversight Framework would, as such, be fully fee-funded, while the development of the ESAs’ ICT systems would be funded from Union and national competent authorities’ contributions.
Competent authorities should have all required supervisory, investigative and sanctioning powers to ensure the proper exercise of their duties under this Regulation. They should, in principle, publish notices of the administrative penalties they impose. Since financial entities and ICT third-party service providers can be established in different Member States and supervised by different competent authorities, the application of this Regulation should be facilitated by, on the one hand, close cooperation among relevant competent authorities, including the ECB with regard to specific tasks conferred on it by Council Regulation (EU) No 1024/2013, and, on the other hand, by consultation with the ESAs through the mutual exchange of information and the provision of assistance in the context of relevant supervisory activities.
In order to further quantify and qualify the criteria for the designation of ICT third-party service providers as critical and to harmonise oversight fees, the power to adopt acts in accordance with Article 290 TFEU should be delegated to the Commission to supplement this Regulation by further specifying the systemic impact that a failure or operational outage of an ICT third-party service provider could have on the financial entities it provides ICT services to, the number of global systemically important institutions (G-SIIs), or other systemically important institutions (O-SIIs), that rely on the ICT third-party service provider in question, the number of ICT third-party service providers active on a given market, the costs of migrating data and ICT workloads to other ICT third-party service providers, as well as the amount of the oversight fees and the way in which they are to be paid. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making (22). In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council should receive all documents at the same time as Member States’ experts, and their experts should systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.
Regulatory technical standards should ensure the consistent harmonisation of the requirements laid down in this Regulation. In their roles as bodies endowed with highly specialised expertise, the ESAs should develop draft regulatory technical standards which do not involve policy choices, for submission to the Commission. Regulatory technical standards should be developed in the areas of ICT risk management, major ICT-related incident reporting, testing, as well as in relation to key requirements for a sound monitoring of ICT third-party risk. The Commission and the ESAs should ensure that those standards and requirements can be applied by all financial entities in a manner that is proportionate to their size and overall risk profile, and the nature, scale and complexity of their services, activities and operations. The Commission should be empowered to adopt those regulatory technical standards by means of delegated acts pursuant to Article 290 TFEU and in accordance with Articles 10 to 14 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
To facilitate the comparability of reports on major ICT-related incidents and major operational or security payment-related incidents, as well as to ensure transparency regarding contractual arrangements for the use of ICT services provided by ICT third-party service providers, the ESAs should develop draft implementing technical standards establishing standardised templates, forms and procedures for financial entities to report a major ICT-related incident and a major operational or security payment-related incident, as well as standardised templates for the register of information. When developing those standards, the ESAs should take into account the size and the overall risk profile of the financial entity, and the nature, scale and complexity of its services, activities and operations. The Commission should be empowered to adopt those implementing technical standards by means of implementing acts pursuant to Article 291 TFEU and in accordance with Article 15 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
Since further requirements have already been specified through delegated and implementing acts based on technical regulatory and implementing technical standards in Regulations (EC) No 1060/2009 (23), (EU) No 648/2012 (24), (EU) No 600/2014 (25) and (EU) No 909/2014 (26) of the European Parliament and of the Council, it is appropriate to mandate the ESAs, either individually or jointly through the Joint Committee, to submit regulatory and implementing technical standards to the Commission for adoption of delegated and implementing acts carrying over and updating existing ICT risk management rules.
Since this Regulation, together with Directive (EU) 2022/2556 of the European Parliament and of the Council (27), entails a consolidation of the ICT risk management provisions across multiple regulations and directives of the Union’s financial services acquis, including Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014 and (EU) No 909/2014, and Regulation (EU) 2016/1011 of the European Parliament and of the Council (28), in order to ensure full consistency, those Regulations should be amended to clarify that the applicable ICT risk-related provisions are laid down in this Regulation.
Consequently, the scope of the relevant articles related to operational risk, upon which empowerments laid down in Regulations (EC) No 1060/2009, (EU) No 648/2012, (EU) No 600/2014, (EU) No 909/2014, and (EU) 2016/1011 had mandated the adoption of delegated and implementing acts, should be narrowed down with a view to carry over into this Regulation all provisions covering the digital operational resilience aspects which today are part of those Regulations.
The potential systemic cyber risk associated with the use of ICT infrastructures that enable the operation of payment systems and the provision of payment processing activities should be duly addressed at Union level through harmonised digital resilience rules. To that effect, the Commission should swiftly assess the need for reviewing the scope of this Regulation while aligning such review with the outcome of the comprehensive review envisaged under Directive (EU) 2015/2366. Numerous large-scale attacks over the past decade demonstrate how payment systems have become exposed to cyber threats. Placed at the core of the payment services chain and showing strong interconnections with the overall financial system, payment systems and payment processing activities acquired a critical significance for the functioning of the Union financial markets. Cyber-attacks on such systems can cause severe operational business disruptions with direct repercussions on key economic functions, such as the facilitation of payments, and indirect effects on related economic processes. Until a harmonised regime and the supervision of operators of payment systems and processing entities are put in place at Union level, Member States may, with a view to applying similar market practices, draw inspiration from the digital operational resilience requirements laid down by this Regulation, when applying rules to operators of payment systems and processing entities supervised under their own jurisdictions.
Since the objective of this Regulation, namely to achieve a high level of digital operational resilience for regulated financial entities, cannot be sufficiently achieved by the Member States because it requires harmonisation of various different rules in Union and national law, but can rather, by reason of its scale and effects, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.
The European Data Protection Supervisor was consulted in accordance with Article 42(1) of Regulation (EU) 2018/1725 of the European Parliament and of the Council (29) and delivered an opinion on 10 May 2021 (30),
HAVE ADOPTED THIS REGULATION:
This Regulation lays down uniform requirements concerning the security of network and information systems supporting the business processes of financial entities, as follows:
(a) requirements applicable to financial entities in relation to:
(i) information and communication technology (ICT) risk management;
(ii) reporting of major ICT-related incidents and notifying, on a voluntary basis, significant cyber threats to the competent authorities;
(iii) reporting of major operational or security payment-related incidents to the competent authorities by financial entities referred to in Article 2(1), points (a) to (d);
(iv) digital operational resilience testing;
(v) information and intelligence sharing in relation to cyber threats and vulnerabilities;
(vi) measures for the sound management of ICT third-party risk;
(b) requirements in relation to the contractual arrangements concluded between ICT third-party service providers and financial entities;
(c) rules for the establishment and conduct of the Oversight Framework for critical ICT third-party service providers when providing services to financial entities;
(d) rules on cooperation among competent authorities, and rules on supervision and enforcement by competent authorities in relation to all matters covered by this Regulation.
In relation to financial entities identified as essential or important entities pursuant to national rules transposing Article 3 of Directive (EU) 2022/2555, this Regulation shall be considered a sector-specific Union legal act for the purposes of Article 4 of that Directive.
This Regulation is without prejudice to the responsibility of Member States regarding essential State functions concerning public security, defence and national security in accordance with Union law.
This Regulation applies to the following entities:
(a) credit institutions;
(b) payment institutions, including payment institutions exempted pursuant to Directive (EU) 2015/2366;
(c) account information service providers;
(d) electronic money institutions, including electronic money institutions exempted pursuant to Directive 2009/110/EC;
(e) investment firms;
(f) crypto-asset service providers as authorised under a Regulation of the European Parliament and of the Council on markets in crypto-assets, and amending Regulations (EU) No 1093/2010 and (EU) No 1095/2010 and Directives 2013/36/EU and (EU) 2019/1937 (‘the Regulation on markets in crypto-assets’) and issuers of asset-referenced tokens;
(g) central securities depositories;
(h) central counterparties;
(i) trading venues;
(j) trade repositories;
(k) managers of alternative investment funds;
(l) management companies;
(m) data reporting service providers;
(n) insurance and reinsurance undertakings;
(o) insurance intermediaries, reinsurance intermediaries and ancillary insurance intermediaries;
(p) institutions for occupational retirement provision;
(q) credit rating agencies;
(r) administrators of critical benchmarks;
(s) crowdfunding service providers;
(t) securitisation repositories;
(u) ICT third-party service providers.
For the purposes of this Regulation, entities referred to in paragraph 1, points (a) to (t), shall collectively be referred to as ‘financial entities’.
This Regulation does not apply to:
(a) managers of alternative investment funds as referred to in Article 3(2) of Directive 2011/61/EU;
(b) insurance and reinsurance undertakings as referred to in Article 4 of Directive 2009/138/EC;
(c) institutions for occupational retirement provision which operate pension schemes which together do not have more than 15 members in total;
(d) natural or legal persons exempted pursuant to Articles 2 and 3 of Directive 2014/65/EU;
(e) insurance intermediaries, reinsurance intermediaries and ancillary insurance intermediaries which are microenterprises or small or medium-sized enterprises;
(f) post office giro institutions as referred to in Article 2(5), point (3), of Directive 2013/36/EU.
For the purposes of this Regulation, the following definitions shall apply:
‘digital operational resilience’ means the ability of a financial entity to build, assure and review its operational integrity and reliability by ensuring, either directly or indirectly through the use of services provided by ICT third-party service providers, the full range of ICT-related capabilities needed to address the security of the network and information systems which a financial entity uses, and which support the continued provision of financial services and their quality, including throughout disruptions;
‘network and information system’ means a network and information system as defined in Article 6, point 1, of Directive (EU) 2022/2555;
‘legacy ICT system’ means an ICT system that has reached the end of its lifecycle (end-of-life), that is not suitable for upgrades or fixes, for technological or commercial reasons, or is no longer supported by its supplier or by an ICT third-party service provider, but that is still in use and supports the functions of the financial entity;
‘security of network and information systems’ means security of network and information systems as defined in Article 6, point 2, of Directive (EU) 2022/2555;
‘ICT risk’ means any reasonably identifiable circumstance in relation to the use of network and information systems which, if materialised, may compromise the security of the network and information systems, of any technology dependent tool or process, of operations and processes, or of the provision of services by producing adverse effects in the digital or physical environment;
‘information asset’ means a collection of information, either tangible or intangible, that is worth protecting;
‘ICT asset’ means a software or hardware asset in the network and information systems used by the financial entity;
‘ICT-related incident’ means a single event or a series of linked events unplanned by the financial entity that compromises the security of the network and information systems, and has an adverse impact on the availability, authenticity, integrity or confidentiality of data, or on the services provided by the financial entity;
‘operational or security payment-related incident’ means a single event or a series of linked events unplanned by the financial entities referred to in Article 2(1), points (a) to (d), whether ICT-related or not, that has an adverse impact on the availability, authenticity, integrity or confidentiality of payment-related data, or on the payment-related services provided by the financial entity;
‘major ICT-related incident’ means an ICT-related incident that has a high adverse impact on the network and information systems that support critical or important functions of the financial entity;
‘major operational or security payment-related incident’ means an operational or security payment-related incident that has a high adverse impact on the payment-related services provided;
‘cyber threat’ means ‘cyber threat’ as defined in Article 2, point (8), of Regulation (EU) 2019/881;
‘significant cyber threat’ means a cyber threat the technical characteristics of which indicate that it could have the potential to result in a major ICT-related incident or a major operational or security payment-related incident;
‘cyber-attack’ means a malicious ICT-related incident caused by means of an attempt perpetrated by any threat actor to destroy, expose, alter, disable, steal or gain unauthorised access to, or make unauthorised use of, an asset;
‘threat intelligence’ means information that has been aggregated, transformed, analysed, interpreted or enriched to provide the necessary context for decision-making and to enable relevant and sufficient understanding in order to mitigate the impact of an ICT-related incident or of a cyber threat, including the technical details of a cyber-attack, those responsible for the attack and their modus operandi and motivations;
‘vulnerability’ means a weakness, susceptibility or flaw of an asset, system, process or control that can be exploited;
‘threat-led penetration testing (TLPT)’ means a framework that mimics the tactics, techniques and procedures of real-life threat actors perceived as posing a genuine cyber threat, that delivers a controlled, bespoke, intelligence-led (red team) test of the financial entity’s critical live production systems;
‘ICT third-party risk’ means an ICT risk that may arise for a financial entity in relation to its use of ICT services provided by ICT third-party service providers or by subcontractors of the latter, including through outsourcing arrangements;
‘ICT third-party service provider’ means an undertaking providing ICT services;
‘ICT intra-group service provider’ means an undertaking that is part of a financial group and that provides predominantly ICT services to financial entities within the same group or to financial entities belonging to the same institutional protection scheme, including to their parent undertakings, subsidiaries, branches or other entities that are under common ownership or control;
‘ICT services’ means digital and data services provided through ICT systems to one or more internal or external users on an ongoing basis, including hardware as a service and hardware services which includes the provision of technical support via software or firmware updates by the hardware provider, excluding traditional analogue telephone services;
‘critical or important function’ means a function, the disruption of which would materially impair the financial performance of a financial entity, or the soundness or continuity of its services and activities, or the discontinued, defective or failed performance of that function would materially impair the continuing compliance of a financial entity with the conditions and obligations of its authorisation, or with its other obligations under applicable financial services law;
‘critical ICT third-party service provider’ means an ICT third-party service provider designated as critical in accordance with Article 31;
‘ICT third-party service provider established in a third country’ means an ICT third-party service provider that is a legal person established in a third country and that has entered into a contractual arrangement with a financial entity for the provision of ICT services;
‘subsidiary’ means a subsidiary undertaking within the meaning of Article 2, point (10), and Article 22 of Directive 2013/34/EU;
‘group’ means a group as defined in Article 2, point (11), of Directive 2013/34/EU;
‘parent undertaking’ means a parent undertaking within the meaning of Article 2, point (9), and Article 22 of Directive 2013/34/EU;
‘ICT subcontractor established in a third country’ means an ICT subcontractor that is a legal person established in a third country and that has entered into a contractual arrangement either with an ICT third-party service provider, or with an ICT third-party service provider established in a third country;
‘ICT concentration risk’ means an exposure to individual or multiple related critical ICT third-party service providers creating a degree of dependency on such providers so that the unavailability, failure or other type of shortfall of such provider may potentially endanger the ability of a financial entity to deliver critical or important functions, or cause it to suffer other types of adverse effects, including large losses, or endanger the financial stability of the Union as a whole;
‘management body’ means a management body as defined in Article 4(1), point (36), of Directive 2014/65/EU, Article 3(1), point (7), of Directive 2013/36/EU, Article 2(1), point (s), of Directive 2009/65/EC of the European Parliament and of the Council (31), Article 2(1), point (45), of Regulation (EU) No 909/2014, Article 3(1), point (20), of Regulation (EU) 2016/1011, and in the relevant provision of the Regulation on markets in crypto-assets, or the equivalent persons who effectively run the entity or have key functions in accordance with relevant Union or national law;
‘credit institution’ means a credit institution as defined in Article 4(1), point (1), of Regulation (EU) No 575/2013 of the European Parliament and of the Council (32);
‘institution exempted pursuant to Directive 2013/36/EU’ means an entity as referred to in Article 2(5), points (4) to (23), of Directive 2013/36/EU;
‘investment firm’ means an investment firm as defined in Article 4(1), point (1), of Directive 2014/65/EU;
‘small and non-interconnected investment firm’ means an investment firm that meets the conditions laid out in Article 12(1) of Regulation (EU) 2019/2033 of the European Parliament and of the Council (33);
‘payment institution’ means a payment institution as defined in Article 4, point (4), of Directive (EU) 2015/2366;
‘payment institution exempted pursuant to Directive (EU) 2015/2366’ means a payment institution exempted pursuant to Article 32(1) of Directive (EU) 2015/2366;
‘account information service provider’ means an account information service provider as referred to in Article 33(1) of Directive (EU) 2015/2366;
‘electronic money institution’ means an electronic money institution as defined in Article 2, point (1), of Directive 2009/110/EC of the European Parliament and of the Council;
‘electronic money institution exempted pursuant to Directive 2009/110/EC’ means an electronic money institution benefitting from a waiver as referred to in Article 9(1) of Directive 2009/110/EC;
‘central counterparty’ means a central counterparty as defined in Article 2, point (1), of Regulation (EU) No 648/2012;
‘trade repository’ means a trade repository as defined in Article 2, point (2), of Regulation (EU) No 648/2012;
‘central securities depository’ means a central securities depository as defined in Article 2(1), point (1), of Regulation (EU) No 909/2014;
‘trading venue’ means a trading venue as defined in Article 4(1), point (24), of Directive 2014/65/EU;
‘manager of alternative investment funds’ means a manager of alternative investment funds as defined in Article 4(1), point (b), of Directive 2011/61/EU;
‘management company’ means a management company as defined in Article 2(1), point (b), of Directive 2009/65/EC;
‘data reporting service provider’ means a data reporting service provider within the meaning of Regulation (EU) No 600/2014, as referred to in Article 2(1), points (34) to (36) thereof;
‘insurance undertaking’ means an insurance undertaking as defined in Article 13, point (1), of Directive 2009/138/EC;
‘reinsurance undertaking’ means a reinsurance undertaking as defined in Article 13, point (4), of Directive 2009/138/EC;
‘insurance intermediary’ means an insurance intermediary as defined in Article 2(1), point (3), of Directive (EU) 2016/97 of the European Parliament and of the Council (34);
‘ancillary insurance intermediary’ means an ancillary insurance intermediary as defined in Article 2(1), point (4), of Directive (EU) 2016/97;
‘reinsurance intermediary’ means a reinsurance intermediary as defined in Article 2(1), point (5), of Directive (EU) 2016/97;
‘institution for occupational retirement provision’ means an institution for occupational retirement provision as defined in Article 6, point (1), of Directive (EU) 2016/2341;
‘small institution for occupational retirement provision’ means an institution for occupational retirement provision which operates pension schemes which together have less than 100 members in total;
‘credit rating agency’ means a credit rating agency as defined in Article 3(1), point (b), of Regulation (EC) No 1060/2009;
‘crypto-asset service provider’ means a crypto-asset service provider as defined in the relevant provision of the Regulation on markets in crypto-assets;
‘issuer of asset-referenced tokens’ means an issuer of asset-referenced tokens as defined in the relevant provision of the Regulation on markets in crypto-assets;
‘administrator of critical benchmarks’ means an administrator of ‘critical benchmarks’ as defined in Article 3(1), point (25), of Regulation (EU) 2016/1011;
‘crowdfunding service provider’ means a crowdfunding service provider as defined in Article 2(1), point (e), of Regulation (EU) 2020/1503 of the European Parliament and of the Council (35);
‘securitisation repository’ means a securitisation repository as defined in Article 2, point (23), of Regulation (EU) 2017/2402 of the European Parliament and of the Council (36);
‘microenterprise’ means a financial entity, other than a trading venue, a central counterparty, a trade repository or a central securities depository, which employs fewer than 10 persons and has an annual turnover and/or annual balance sheet total that does not exceed EUR 2 million;
‘Lead Overseer’ means the European Supervisory Authority appointed in accordance with Article 31(1), point (b) of this Regulation;
‘Joint Committee’ means the committee referred to in Article 54 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010;
‘small enterprise’ means a financial entity that employs 10 or more persons, but fewer than 50 persons, and has an annual turnover and/or annual balance sheet total that exceeds EUR 2 million, but does not exceed EUR 10 million;
‘medium-sized enterprise’ means a financial entity that is not a small enterprise and employs fewer than 250 persons and has an annual turnover that does not exceed EUR 50 million and/or an annual balance sheet that does not exceed EUR 43 million;
‘public authority’ means any government or other public administration entity, including national central banks.
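These size thresholds determine which alleviations (notably for microenterprises and, in places, small and medium-sized enterprises) a financial entity may rely on. A minimal Python sketch of the classification logic follows; the field names, the reading of “and/or” as “at least one of the two figures”, and the example values are our own assumptions rather than terms of the Regulation.

```python
# Illustrative only, not legal advice: maps the headcount / turnover / balance-sheet
# thresholds in the definitions above onto a classification helper.
from dataclasses import dataclass

@dataclass
class EntityFigures:
    headcount: int                         # persons employed
    turnover_eur: float                    # annual turnover
    balance_sheet_eur: float               # annual balance-sheet total
    is_venue_ccp_tr_or_csd: bool = False   # trading venues, CCPs, trade repositories and
                                           # CSDs can never be microenterprises

def size_category(e: EntityFigures) -> str:
    def at_most(limit: float) -> bool:
        # "and/or" is read here as: at least one of the two figures is within the limit
        return e.turnover_eur <= limit or e.balance_sheet_eur <= limit

    if (not e.is_venue_ccp_tr_or_csd) and e.headcount < 10 and at_most(2_000_000):
        return "microenterprise"
    if 10 <= e.headcount < 50 and not at_most(2_000_000) and at_most(10_000_000):
        return "small enterprise"
    if e.headcount < 250 and (e.turnover_eur <= 50_000_000 or e.balance_sheet_eur <= 43_000_000):
        return "medium-sized enterprise"
    return "other (size-based alleviations unlikely to apply)"

print(size_category(EntityFigures(headcount=8, turnover_eur=1_500_000, balance_sheet_eur=900_000)))
```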
Financial entities shall implement the rules laid down in Chapter II in accordance with the principle of proportionality, taking into account their size and overall risk profile, and the nature, scale and complexity of their services, activities and operations.
In addition, the application by financial entities of Chapters III, IV and V, Section I, shall be proportionate to their size and overall risk profile, and to the nature, scale and complexity of their services, activities and operations, as specifically provided for in the relevant rules of those Chapters.
The competent authorities shall consider the application of the proportionality principle by financial entities when reviewing the consistency of the ICT risk management framework on the basis of the reports submitted upon the request of competent authorities pursuant to Article 6(5) and Article 16(2).
Financial entities shall have in place an internal governance and control framework that ensures an effective and prudent management of ICT risk, in accordance with Article 6(4), in order to achieve a high level of digital operational resilience.
The management body of the financial entity shall define, approve, oversee and be responsible for the implementation of all arrangements related to the ICT risk management framework referred to in Article 6(1).
For the purposes of the first subparagraph, the management body shall:
(a) bear the ultimate responsibility for managing the financial entity’s ICT risk;
(b) put in place policies that aim to ensure the maintenance of high standards of availability, authenticity, integrity and confidentiality of data;
(c) set clear roles and responsibilities for all ICT-related functions and establish appropriate governance arrangements to ensure effective and timely communication, cooperation and coordination among those functions;
(d) bear the overall responsibility for setting and approving the digital operational resilience strategy as referred to in Article 6(8), including the determination of the appropriate risk tolerance level of ICT risk of the financial entity, as referred to in Article 6(8), point (b);
(e) approve, oversee and periodically review the implementation of the financial entity’s ICT business continuity policy and ICT response and recovery plans, referred to, respectively, in Article 11(1) and (3), which may be adopted as a dedicated specific policy forming an integral part of the financial entity’s overall business continuity policy and response and recovery plan;
(f) approve and periodically review the financial entity’s ICT internal audit plans, ICT audits and material modifications to them;
(g) allocate and periodically review the appropriate budget to fulfil the financial entity’s digital operational resilience needs in respect of all types of resources, including relevant ICT security awareness programmes and digital operational resilience training referred to in Article 13(6), and ICT skills for all staff;
(h) approve and periodically review the financial entity’s policy on arrangements regarding the use of ICT services provided by ICT third-party service providers;
(i) put in place, at corporate level, reporting channels enabling it to be duly informed of the following:
(i) arrangements concluded with ICT third-party service providers on the use of ICT services,
(ii) any relevant planned material changes regarding the ICT third-party service providers,
(iii) the potential impact of such changes on the critical or important functions subject to those arrangements, including a risk analysis summary to assess the impact of those changes, and at least major ICT-related incidents and their impact, as well as response, recovery and corrective measures.
Financial entities, other than microenterprises, shall establish a role in order to monitor the arrangements concluded with ICT third-party service providers on the use of ICT services, or shall designate a member of senior management as responsible for overseeing the related risk exposure and relevant documentation.
Members of the management body of the financial entity shall actively keep up to date with sufficient knowledge and skills to understand and assess ICT risk and its impact on the operations of the financial entity, including by following specific training on a regular basis, commensurate to the ICT risk being managed.
Financial entities shall have a sound, comprehensive and well-documented ICT risk management framework as part of their overall risk management system, which enables them to address ICT risk quickly, efficiently and comprehensively and to ensure a high level of digital operational resilience.
The ICT risk management framework shall include at least strategies, policies, procedures, ICT protocols and tools that are necessary to duly and adequately protect all information assets and ICT assets, including computer software, hardware, servers, as well as to protect all relevant physical components and infrastructures, such as premises, data centres and sensitive designated areas, to ensure that all information assets and ICT assets are adequately protected from risks including damage and unauthorised access or usage.
In accordance with their ICT risk management framework, financial entities shall minimise the impact of ICT risk by deploying appropriate strategies, policies, procedures, ICT protocols and tools. They shall provide complete and updated information on ICT risk and on their ICT risk management framework to the competent authorities upon their request.
Financial entities, other than microenterprises, shall assign the responsibility for managing and overseeing ICT risk to a control function and ensure an appropriate level of independence of such control function in order to avoid conflicts of interest. Financial entities shall ensure appropriate segregation and independence of ICT risk management functions, control functions, and internal audit functions, according to the three lines of defence model, or an internal risk management and control model.
The ICT risk management framework shall be documented and reviewed at least once a year, or periodically in the case of microenterprises, as well as upon the occurrence of major ICT-related incidents, and following supervisory instructions or conclusions derived from relevant digital operational resilience testing or audit processes. It shall be continuously improved on the basis of lessons derived from implementation and monitoring. A report on the review of the ICT risk management framework shall be submitted to the competent authority upon its request.
The ICT risk management framework of financial entities, other than microenterprises, shall be subject to internal audit by auditors on a regular basis in line with the financial entities’ audit plan. Those auditors shall possess sufficient knowledge, skills and expertise in ICT risk, as well as appropriate independence. The frequency and focus of ICT audits shall be commensurate to the ICT risk of the financial entity.
Based on the conclusions from the internal audit review, financial entities shall establish a formal follow-up process, including rules for the timely verification and remediation of critical ICT audit findings.
The ICT risk management framework shall include a digital operational resilience strategy setting out how the framework shall be implemented. To that end, the digital operational resilience strategy shall include methods to address ICT risk and attain specific ICT objectives, by:
(a) explaining how the ICT risk management framework supports the financial entity’s business strategy and objectives;
(b) establishing the risk tolerance level for ICT risk, in accordance with the risk appetite of the financial entity, and analysing the impact tolerance for ICT disruptions;
(c) setting out clear information security objectives, including key performance indicators and key risk metrics;
(d) explaining the ICT reference architecture and any changes needed to reach specific business objectives;
(e) outlining the different mechanisms put in place to detect ICT-related incidents, prevent their impact and provide protection from it;
(f) evidencing the current digital operational resilience situation on the basis of the number of major ICT-related incidents reported and the effectiveness of preventive measures;
(g) implementing digital operational resilience testing, in accordance with Chapter IV of this Regulation;
(h) outlining a communication strategy in the event of ICT-related incidents the disclosure of which is required in accordance with Article 14.
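The strategy elements in points (a) to (h) lend themselves to a structured, reviewable artefact. The sketch below shows one hypothetical way to capture them in machine-readable form; every key and value is illustrative and nothing here is prescribed by the Regulation.

```python
# Hypothetical capture of the digital operational resilience strategy elements.
resilience_strategy = {
    "business_alignment": "how the ICT risk framework supports the business strategy",   # point (a)
    "ict_risk_tolerance": {"max_tolerable_downtime_hours": 4, "max_data_loss_minutes": 15},  # (b)
    "information_security_objectives": [                                                  # (c)
        {"objective": "patch critical vulnerabilities", "kpi": "95% within 14 days",
         "key_risk_metric": "open critical findings older than 30 days"},
    ],
    "reference_architecture": "link-to-architecture-repository",                          # (d)
    "detection_mechanisms": ["security monitoring and alerting", "network anomaly detection"],  # (e)
    "resilience_evidence": {"major_incidents_reported_last_year": 0,
                            "preventive_measures_tested": True},                          # (f)
    "testing_programme": "annual digital operational resilience testing plan",            # (g)
    "incident_communication_strategy": "crisis communication plan per Article 14",        # (h)
}
```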
Financial entities may, in the context of the digital operational resilience strategy referred to in paragraph 8, define a holistic ICT multi-vendor strategy, at group or entity level, showing key dependencies on ICT third-party service providers and explaining the rationale behind the procurement mix of ICT third-party service providers.
Financial entities may, in accordance with Union and national sectoral law, outsource the tasks of verifying compliance with ICT risk management requirements to intra-group or external undertakings. In case of such outsourcing, the financial entity remains fully responsible for the verification of compliance with the ICT risk management requirements.
In order to address and manage ICT risk, financial entities shall use and maintain updated ICT systems, protocols and tools that are:
(a) appropriate to the magnitude of operations supporting the conduct of their activities, in accordance with the proportionality principle as referred to in Article 4;
(b) reliable;
(c) equipped with sufficient capacity to accurately process the data necessary for the performance of activities and the timely provision of services, and to deal with peak orders, message or transaction volumes, as needed, including where new technology is introduced;
(d) technologically resilient in order to adequately deal with additional information processing needs as required under stressed market conditions or other adverse situations.
As part of the ICT risk management framework referred to in Article 6(1), financial entities shall identify, classify and adequately document all ICT supported business functions, roles and responsibilities, the information assets and ICT assets supporting those functions, and their roles and dependencies in relation to ICT risk. Financial entities shall review as needed, and at least yearly, the adequacy of this classification and of any relevant documentation.
Financial entities shall, on a continuous basis, identify all sources of ICT risk, in particular the risk exposure to and from other financial entities, and assess cyber threats and ICT vulnerabilities relevant to their ICT supported business functions, information assets and ICT assets. Financial entities shall review on a regular basis, and at least yearly, the risk scenarios impacting them.
Financial entities, other than microenterprises, shall perform a risk assessment upon each major change in the network and information system infrastructure, in the processes or procedures affecting their ICT supported business functions, information assets or ICT assets.
Financial entities shall identify all information assets and ICT assets, including those on remote sites, network resources and hardware equipment, and shall map those considered critical. They shall map the configuration of the information assets and ICT assets and the links and interdependencies between the different information assets and ICT assets.
Financial entities shall identify and document all processes that are dependent on ICT third-party service providers, and shall identify interconnections with ICT third-party service providers that provide services that support critical or important functions.
For the purposes of paragraphs 1, 4 and 5, financial entities shall maintain relevant inventories and update them periodically and every time any major change as referred to in paragraph 3 occurs.
Financial entities, other than microenterprises, shall on a regular basis, and at least yearly, conduct a specific ICT risk assessment on all legacy ICT systems and, in any case before and after connecting technologies, applications or systems.
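The identification and mapping duties above amount, in practice, to maintaining an inventory of information assets and ICT assets together with their dependencies, criticality and third-party links, reviewed at least yearly. A minimal sketch under that assumption; all field and function names are our own.

```python
# Illustrative asset inventory supporting the identification and mapping requirements.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Asset:
    asset_id: str
    kind: str                        # "information asset" or "ICT asset"
    supported_functions: list[str]   # business functions relying on the asset
    critical: bool                   # supports a critical or important function
    depends_on: list[str] = field(default_factory=list)   # other asset_ids
    third_party_provider: str | None = None                # ICT third-party service provider, if any
    last_reviewed: date | None = None

def third_party_dependencies(inventory: list[Asset]) -> dict[str, list[str]]:
    """Map each ICT third-party service provider to the critical assets it supports."""
    out: dict[str, list[str]] = {}
    for a in inventory:
        if a.critical and a.third_party_provider:
            out.setdefault(a.third_party_provider, []).append(a.asset_id)
    return out
```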
For the purposes of adequately protecting ICT systems and with a view to organising response measures, financial entities shall continuously monitor and control the security and functioning of ICT systems and tools and shall minimise the impact of ICT risk on ICT systems through the deployment of appropriate ICT security tools, policies and procedures.
Financial entities shall design, procure and implement ICT security policies, procedures, protocols and tools that aim to ensure the resilience, continuity and availability of ICT systems, in particular for those supporting critical or important functions, and to maintain high standards of availability, authenticity, integrity and confidentiality of data, whether at rest, in use or in transit.
In order to achieve the objectives referred to in paragraph 2, financial entities shall use ICT solutions and processes that are appropriate in accordance with Article 4. Those ICT solutions and processes shall:
(a) ensure the security of the means of transfer of data;
(b) minimise the risk of corruption or loss of data, unauthorised access and technical flaws that may hinder business activity;
(c) prevent the lack of availability, the impairment of the authenticity and integrity, the breaches of confidentiality and the loss of data;
(d) ensure that data is protected from risks arising from data management, including poor administration, processing-related risks and human error.
As part of the ICT risk management framework referred to in Article 6(1), financial entities shall:
(a) develop and document an information security policy defining rules to protect the availability, authenticity, integrity and confidentiality of data, information assets and ICT assets, including those of their customers, where applicable;
(b) following a risk-based approach, establish a sound network and infrastructure management structure using appropriate techniques, methods and protocols that may include implementing automated mechanisms to isolate affected information assets in the event of cyber-attacks;
(c) implement policies that limit the physical or logical access to information assets and ICT assets to what is required for legitimate and approved functions and activities only, and establish to that end a set of policies, procedures and controls that address access rights and ensure a sound administration thereof;
(d) implement policies and protocols for strong authentication mechanisms, based on relevant standards and dedicated control systems, and protection measures of cryptographic keys whereby data is encrypted based on results of approved data classification and ICT risk assessment processes;
(e) implement documented policies, procedures and controls for ICT change management, including changes to software, hardware, firmware components, systems or security parameters, that are based on a risk assessment approach and are an integral part of the financial entity’s overall change management process, in order to ensure that all changes to ICT systems are recorded, tested, assessed, approved, implemented and verified in a controlled manner;
(f) have appropriate and comprehensive documented policies for patches and updates.
For the purposes of the first subparagraph, point (b), financial entities shall design the network connection infrastructure in a way that allows it to be instantaneously severed or segmented in order to minimise and prevent contagion, especially for interconnected financial processes.
For the purposes of the first subparagraph, point (e), the ICT change management process shall be approved by appropriate lines of management and shall have specific protocols in place.
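Point (c) of the list above is, in substance, a deny-by-default, least-privilege access model: access is granted only for legitimate and approved functions. A hypothetical sketch, with role names and entitlements invented for illustration:

```python
# Deny-by-default access check: only entitlements explicitly approved for a role are allowed.
APPROVED_ENTITLEMENTS = {
    "payments-operator": {"read:payment-queue", "release:payment"},
    "ict-auditor": {"read:audit-logs"},
}

def is_access_allowed(role: str, entitlement: str) -> bool:
    return entitlement in APPROVED_ENTITLEMENTS.get(role, set())

assert is_access_allowed("ict-auditor", "read:audit-logs")
assert not is_access_allowed("ict-auditor", "release:payment")
```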
Financial entities shall have in place mechanisms to promptly detect anomalous activities, including ICT network performance issues and ICT-related incidents, and to identify potential material single points of failure.
All detection mechanisms referred to in the first subparagraph shall be regularly tested in accordance with Article 25.
The detection mechanisms referred to in paragraph 1 shall enable multiple layers of control, define alert thresholds and criteria to trigger and initiate ICT-related incident response processes, including automatic alert mechanisms for relevant staff in charge of ICT-related incident response.
Financial entities shall devote sufficient resources and capabilities to monitor user activity, the occurrence of ICT anomalies and ICT-related incidents, in particular cyber-attacks.
Data reporting service providers shall, in addition, have in place systems that can effectively check trade reports for completeness, identify omissions and obvious errors, and request re-transmission of those reports.
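The alert thresholds and criteria required above are what turn monitoring data into a trigger for the incident response process, including automatic alerting of responders. The sketch below illustrates one possible shape for such thresholds; the metric names and limit values are assumptions, not regulatory figures.

```python
# Illustrative alert thresholds; any breach should open the ICT-related incident response process.
ALERT_THRESHOLDS = {
    "failed_logins_per_minute": 50,
    "payment_queue_latency_seconds": 30,
    "endpoint_error_rate_percent": 5.0,
}

def evaluate_metrics(metrics: dict[str, float]) -> list[str]:
    """Return the list of breached thresholds."""
    return [name for name, limit in ALERT_THRESHOLDS.items()
            if metrics.get(name, 0.0) > limit]

breaches = evaluate_metrics({"failed_logins_per_minute": 120, "endpoint_error_rate_percent": 1.2})
if breaches:
    # In practice this would automatically page the staff in charge of incident response.
    print("Trigger incident response for:", breaches)
```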
As part of the ICT risk management framework referred to in Article 6(1) and based on the identification requirements set out in Article 8, financial entities shall put in place a comprehensive ICT business continuity policy, which may be adopted as a dedicated specific policy, forming an integral part of the overall business continuity policy of the financial entity.
Financial entities shall implement the ICT business continuity policy through dedicated, appropriate and documented arrangements, plans, procedures and mechanisms aiming to:
(a) ensure the continuity of the financial entity’s critical or important functions;
(b) quickly, appropriately and effectively respond to, and resolve, all ICT-related incidents in a way that limits damage and prioritises the resumption of activities and recovery actions;
(c) activate, without delay, dedicated plans that enable containment measures, processes and technologies suited to each type of ICT-related incident and prevent further damage, as well as tailored response and recovery procedures established in accordance with Article 12;
(d) estimate preliminary impacts, damages and losses;
(e) set out communication and crisis management actions that ensure that updated information is transmitted to all relevant internal staff and external stakeholders in accordance with Article 14, and report to the competent authorities in accordance with Article 19.
As part of the ICT risk management framework referred to in Article 6(1), financial entities shall implement associated ICT response and recovery plans which, in the case of financial entities other than microenterprises, shall be subject to independent internal audit reviews.
Financial entities shall put in place, maintain and periodically test appropriate ICT business continuity plans, notably with regard to critical or important functions outsourced or contracted through arrangements with ICT third-party service providers.
As part of the overall business continuity policy, financial entities shall conduct a business impact analysis (BIA) of their exposures to severe business disruptions. Under the BIA, financial entities shall assess the potential impact of severe business disruptions by means of quantitative and qualitative criteria, using internal and external data and scenario analysis, as appropriate. The BIA shall consider the criticality of identified and mapped business functions, support processes, third-party dependencies and information assets, and their interdependencies. Financial entities shall ensure that ICT assets and ICT services are designed and used in full alignment with the BIA, in particular with regard to adequately ensuring the redundancy of all critical components.
As part of their comprehensive ICT risk management, financial entities shall:
(a) test the ICT business continuity plans and the ICT response and recovery plans in relation to ICT systems supporting all functions at least yearly, as well as in the event of any substantive changes to ICT systems supporting critical or important functions;
(b) test the crisis communication plans established in accordance with Article 14.
For the purposes of the first subparagraph, point (a), financial entities, other than microenterprises, shall include in the testing plans scenarios of cyber-attacks and switchovers between the primary ICT infrastructure and the redundant capacity, backups and redundant facilities necessary to meet the obligations set out in Article 12.
Financial entities shall regularly review their ICT business continuity policy and ICT response and recovery plans, taking into account the results of tests carried out in accordance with the first subparagraph and recommendations stemming from audit checks or supervisory reviews.
Financial entities, other than microenterprises, shall have a crisis management function, which, in the event of activation of their ICT business continuity plans or ICT response and recovery plans, shall, inter alia, set out clear procedures to manage internal and external crisis communications in accordance with Article 14.
Financial entities shall keep readily accessible records of activities before and during disruption events when their ICT business continuity plans and ICT response and recovery plans are activated.
Central securities depositories shall provide the competent authorities with copies of the results of the ICT business continuity tests, or of similar exercises.
Financial entities, other than microenterprises, shall report to the competent authorities, upon their request, an estimation of aggregated annual costs and losses caused by major ICT-related incidents.
In accordance with Article 16 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010, the ESAs, through the Joint Committee, shall by 17 July 2024 develop common guidelines on the estimation of aggregated annual costs and losses referred to in paragraph 10.
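Pending the ESAs’ common guidelines, the estimation of aggregated annual costs and losses can be thought of as a simple sum over the major ICT-related incidents recorded in the year. An illustrative sketch, with invented cost categories and figures:

```python
# Illustrative aggregation of annual costs and losses from major ICT-related incidents.
incidents_2024 = [
    {"id": "INC-001", "remediation_cost_eur": 120_000, "business_loss_eur": 450_000},
    {"id": "INC-002", "remediation_cost_eur": 35_000,  "business_loss_eur": 0},
]

aggregated = sum(i["remediation_cost_eur"] + i["business_loss_eur"] for i in incidents_2024)
print(f"Estimated aggregated annual costs and losses: EUR {aggregated:,}")
```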
For the purpose of ensuring the restoration of ICT systems and data with minimum downtime, limited disruption and loss, as part of their ICT risk management framework, financial entities shall develop and document:
(a) backup policies and procedures specifying the scope of the data that is subject to the backup and the minimum frequency of the backup, based on the criticality of information or the confidentiality level of the data;
(b) restoration and recovery procedures and methods.
Financial entities shall set up backup systems that can be activated in accordance with the backup policies and procedures, as well as restoration and recovery procedures and methods. The activation of backup systems shall not jeopardise the security of the network and information systems or the availability, authenticity, integrity or confidentiality of data. Testing of the backup procedures and restoration and recovery procedures and methods shall be undertaken periodically.
When restoring backup data using own systems, financial entities shall use ICT systems that are physically and logically segregated from the source ICT system. The ICT systems shall be securely protected from any unauthorised access or ICT corruption and allow for the timely restoration of services making use of data and system backups as necessary.
For central counterparties, the recovery plans shall enable the recovery of all transactions at the time of disruption to allow the central counterparty to continue to operate with certainty and to complete settlement on the scheduled date.
Data reporting service providers shall additionally maintain adequate resources and have back-up and restoration facilities in place in order to offer and maintain their services at all times.
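Point (a) of the backup requirements ties the scope and minimum frequency of backups to the criticality or confidentiality of the data. A hypothetical policy table and overdue check; the tiers and frequencies are examples only, not figures taken from the Regulation.

```python
# Illustrative backup policy keyed to data classification.
BACKUP_POLICY = {
    "critical":  {"scope": "full + transaction logs", "frequency_hours": 1},
    "important": {"scope": "full",                    "frequency_hours": 24},
    "standard":  {"scope": "incremental",             "frequency_hours": 72},
}

def backup_overdue(classification: str, hours_since_last_backup: float) -> bool:
    """True if the last backup is older than the minimum frequency for this tier."""
    return hours_since_last_backup > BACKUP_POLICY[classification]["frequency_hours"]
```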
Financial entities, other than microenterprises, shall maintain redundant ICT capacities equipped with resources, capabilities and functions that are adequate to ensure business needs. Microenterprises shall assess the need to maintain such redundant ICT capacities based on their risk profile.
Central securities depositories shall maintain at least one secondary processing site endowed with adequate resources, capabilities, functions and staffing arrangements to ensure business needs.
The secondary processing site shall be:
(a) located at a geographical distance from the primary processing site to ensure that it bears a distinct risk profile and to prevent it from being affected by the event which has affected the primary site;
(b) capable of ensuring the continuity of critical or important functions identically to the primary site, or providing the level of services necessary to ensure that the financial entity performs its critical operations within the recovery objectives;
(c) immediately accessible to the financial entity’s staff to ensure continuity of critical or important functions in the event that the primary processing site has become unavailable.
In determining the recovery time and recovery point objectives for each function, financial entities shall take into account whether it is a critical or important function and the potential overall impact on market efficiency. Such time objectives shall ensure that, in extreme scenarios, the agreed service levels are met.
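A minimal way to make recovery time and recovery point objectives testable is to record them per function and compare them with what actually happened during a disruption. The sketch below uses hypothetical field names and does not reflect any prescribed service levels.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RecoveryObjectives:
    """Recovery objectives for one business function (hypothetical fields)."""
    function: str
    critical_or_important: bool
    rto: timedelta  # maximum tolerated time to restore the function
    rpo: timedelta  # maximum tolerated window of data loss

def assess_recovery(obj: RecoveryObjectives,
                    disruption_start: datetime,
                    service_restored: datetime,
                    last_usable_backup: datetime) -> dict:
    """Compare an actual recovery against the agreed objectives."""
    downtime = service_restored - disruption_start
    data_loss_window = disruption_start - last_usable_backup
    return {
        "function": obj.function,
        "rto_met": downtime <= obj.rto,
        "rpo_met": data_loss_window <= obj.rpo,
        "downtime": downtime,
        "data_loss_window": data_loss_window,
    }

# Example: a critical payments function with a 2-hour RTO and a 15-minute RPO.
payments = RecoveryObjectives("payments", True, timedelta(hours=2), timedelta(minutes=15))
```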
When recovering from an ICT-related incident, financial entities shall perform necessary checks, including any multiple checks and reconciliations, in order to ensure that the highest level of data integrity is maintained. These checks shall also be performed when reconstructing data from external stakeholders, in order to ensure that all data is consistent between systems.
Financial entities shall have in place capabilities and staff to gather information on vulnerabilities and cyber threats, ICT-related incidents, in particular cyber-attacks, and analyse the impact they are likely to have on their digital operational resilience.
Financial entities shall put in place post ICT-related incident reviews after a major ICT-related incident disrupts their core activities, analysing the causes of disruption and identifying required improvements to the ICT operations or within the ICT business continuity policy referred to in Article 11.
Financial entities, other than microenterprises, shall, upon request, communicate to the competent authorities, the changes that were implemented following post ICT-related incident reviews as referred to in the first subparagraph.
The post ICT-related incident reviews referred to in the first subparagraph shall determine whether the established procedures were followed and the actions taken were effective, including in relation to the following:
(a) the promptness in responding to security alerts and determining the impact of ICT-related incidents and their severity;
(b) the quality and speed of performing a forensic analysis, where deemed appropriate;
(c) the effectiveness of incident escalation within the financial entity;
(d) the effectiveness of internal and external communication.
Lessons derived from the digital operational resilience testing carried out in accordance with Articles 26 and 27 and from real life ICT-related incidents, in particular cyber-attacks, along with challenges faced upon the activation of ICT business continuity plans and ICT response and recovery plans, together with relevant information exchanged with counterparts and assessed during supervisory reviews, shall be duly incorporated on a continuous basis into the ICT risk assessment process. Those findings shall form the basis for appropriate reviews of relevant components of the ICT risk management framework referred to in Article 6(1).
Financial entities shall monitor the effectiveness of the implementation of their digital operational resilience strategy set out in Article 6(8). They shall map the evolution of ICT risk over time, analyse the frequency, types, magnitude and evolution of ICT-related incidents, in particular cyber-attacks and their patterns, with a view to understanding the level of ICT risk exposure, in particular in relation to critical or important functions, and enhance the cyber maturity and preparedness of the financial entity.
Senior ICT staff shall report at least yearly to the management body on the findings referred to in paragraph 3 and put forward recommendations.
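The monitoring and yearly reporting duty above lends itself to simple aggregation of incident records. The following sketch assumes a hypothetical incident record and merely counts frequency, type and severity per year; real analyses would also cover magnitude, patterns and exposure of critical or important functions.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentSummary:
    """Minimal incident record kept for trend analysis (hypothetical fields)."""
    occurred_on: date
    incident_type: str  # e.g. "cyber-attack", "hardware failure"
    severity: str       # e.g. "major", "minor"
    affects_critical_function: bool

def yearly_findings(incidents: list[IncidentSummary], year: int) -> dict:
    """Aggregate the frequency, types and severity of incidents in one year
    as raw input for the yearly report to the management body."""
    in_year = [i for i in incidents if i.occurred_on.year == year]
    return {
        "year": year,
        "total_incidents": len(in_year),
        "by_type": Counter(i.incident_type for i in in_year),
        "by_severity": Counter(i.severity for i in in_year),
        "affecting_critical_functions": sum(i.affects_critical_function for i in in_year),
    }
```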
Financial entities shall develop ICT security awareness programmes and digital operational resilience training as compulsory modules in their staff training schemes. Those programmes and training shall be applicable to all employees and to senior management staff, and shall have a level of complexity commensurate to the remit of their functions. Where appropriate, financial entities shall also include ICT third-party service providers in their relevant training schemes in accordance with Article 30(2), point (i).
Financial entities, other than microenterprises, shall monitor relevant technological developments on a continuous basis, also with a view to understanding the possible impact of the deployment of such new technologies on ICT security requirements and digital operational resilience. They shall keep up-to-date with the latest ICT risk management processes, in order to effectively combat current or new forms of cyber-attacks.
As part of the ICT risk management framework referred to in Article 6(1), financial entities shall have in place crisis communication plans enabling a responsible disclosure of, at least, major ICT-related incidents or vulnerabilities to clients and counterparts as well as to the public, as appropriate.
As part of the ICT risk management framework, financial entities shall implement communication policies for internal staff and for external stakeholders. Communication policies for staff shall take into account the need to differentiate between staff involved in ICT risk management, in particular the staff responsible for response and recovery, and staff that needs to be informed.
At least one person in the financial entity shall be tasked with implementing the communication strategy for ICT-related incidents and fulfil the public and media function for that purpose.
The ESAs shall, through the Joint Committee, in consultation with the European Union Agency for Cybersecurity (ENISA), develop common draft regulatory technical standards in order to:
(a) specify further elements to be included in the ICT security policies, procedures, protocols and tools referred to in Article 9(2), with a view to ensuring the security of networks, enable adequate safeguards against intrusions and data misuse, preserve the availability, authenticity, integrity and confidentiality of data, including cryptographic techniques, and guarantee an accurate and prompt data transmission without major disruptions and undue delays;
(b) develop further components of the controls of access management rights referred to in Article 9(4), point (c), and associated human resource policy specifying access rights, procedures for granting and revoking rights, monitoring anomalous behaviour in relation to ICT risk through appropriate indicators, including for network use patterns, hours, IT activity and unknown devices;
(c) develop further the mechanisms specified in Article 10(1) enabling a prompt detection of anomalous activities and the criteria set out in Article 10(2) triggering ICT-related incident detection and response processes;
(d) specify further the components of the ICT business continuity policy referred to in Article 11(1);
(e) specify further the testing of ICT business continuity plans referred to in Article 11(6) to ensure that such testing duly takes into account scenarios in which the quality of the provision of a critical or important function deteriorates to an unacceptable level or fails, and duly considers the potential impact of the insolvency, or other failures, of any relevant ICT third-party service provider and, where relevant, the political risks in the respective providers’ jurisdictions;
(f) specify further the components of the ICT response and recovery plans referred to in Article 11(3);
(g) specify further the content and format of the report on the review of the ICT risk management framework referred to in Article 6(5).
When developing those draft regulatory technical standards, the ESAs shall take into account the size and the overall risk profile of the financial entity, and the nature, scale and complexity of its services, activities and operations, while duly taking into consideration any specific feature arising from the distinct nature of activities across different financial services sectors.
The ESAs shall submit those draft regulatory technical standards to the Commission by 17 January 2024.
Power is delegated to the Commission to supplement this Regulation by adopting the regulatory technical standards referred to in the first paragraph in accordance with Articles 10 to 14 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
Without prejudice to the first subparagraph, the entities listed in the first subparagraph shall:
(a) put in place and maintain a sound and documented ICT risk management framework that details the mechanisms and measures aimed at a quick, efficient and comprehensive management of ICT risk, including for the protection of relevant physical components and infrastructures;
(b) continuously monitor the security and functioning of all ICT systems;
(c) minimise the impact of ICT risk through the use of sound, resilient and updated ICT systems, protocols and tools which are appropriate to support the performance of their activities and the provision of services and adequately protect availability, authenticity, integrity and confidentiality of data in the network and information systems;
(d) allow sources of ICT risk and anomalies in the network and information systems to be promptly identified and detected and ICT-related incidents to be swiftly handled;
(e) identify key dependencies on ICT third-party service providers;
(f) ensure the continuity of critical or important functions, through business continuity plans and response and recovery measures, which include, at least, back-up and restoration measures;
(g) test, on a regular basis, the plans and measures referred to in point (f), as well as the effectiveness of the controls implemented in accordance with points (a) and (c);
(h) implement, as appropriate, relevant operational conclusions resulting from the tests referred to in point (g) and from post-incident analysis into the ICT risk assessment process and develop, according to needs and ICT risk profile, ICT security awareness programmes and digital operational resilience training for staff and management.
The ICT risk management framework referred to in paragraph 1, second subparagraph, point (a), shall be documented and reviewed periodically and upon the occurrence of major ICT-related incidents in compliance with supervisory instructions. It shall be continuously improved on the basis of lessons derived from implementation and monitoring. A report on the review of the ICT risk management framework shall be submitted to the competent authority upon its request.
The ESAs shall, through the Joint Committee, in consultation with ENISA, develop common draft regulatory technical standards in order to:
(a) specify further the elements to be included in the ICT risk management framework referred to in paragraph 1, second subparagraph, point (a);
(b) specify further the elements in relation to systems, protocols and tools to minimise the impact of ICT risk referred to in paragraph 1, second subparagraph, point (c), with a view to ensuring the security of networks, enabling adequate safeguards against intrusions and data misuse and preserving the availability, authenticity, integrity and confidentiality of data;
(c) specify further the components of the ICT business continuity plans referred to in paragraph 1, second subparagraph, point (f);
(d) specify further the rules on the testing of business continuity plans and ensure the effectiveness of the controls referred to in paragraph 1, second subparagraph, point (g) and ensure that such testing duly takes into account scenarios in which the quality of the provision of a critical or important function deteriorates to an unacceptable level or fails;
(e) specify further the content and format of the report on the review of the ICT risk management framework referred to in paragraph 2.
When developing those draft regulatory technical standards, the ESAs shall take into account the size and the overall risk profile of the financial entity, and the nature, scale and complexity of its services, activities and operations.
The ESAs shall submit those draft regulatory technical standards to the Commission by 17 January 2024.
Power is delegated to the Commission to supplement this Regulation by adopting the regulatory technical standards referred to in the first subparagraph in accordance with Articles 10 to 14 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
Financial entities shall define, establish and implement an ICT-related incident management process to detect, manage and notify ICT-related incidents.
Financial entities shall record all ICT-related incidents and significant cyber threats. Financial entities shall establish appropriate procedures and processes to ensure a consistent and integrated monitoring, handling and follow-up of ICT-related incidents, to ensure that root causes are identified, documented and addressed in order to prevent the occurrence of such incidents.
The ICT-related incident management process referred to in paragraph 1 shall:
(a) put in place early warning indicators;
(b) establish procedures to identify, track, log, categorise and classify ICT-related incidents according to their priority and severity and according to the criticality of the services impacted, in accordance with the criteria set out in Article 18(1);
(c) assign roles and responsibilities that need to be activated for different ICT-related incident types and scenarios;
(d) set out plans for communication to staff, external stakeholders and media in accordance with Article 14 and for notification to clients, for internal escalation procedures, including ICT-related customer complaints, as well as for the provision of information to financial entities that act as counterparts, as appropriate;
(e) ensure that at least major ICT-related incidents are reported to relevant senior management and inform the management body of at least major ICT-related incidents, explaining the impact, response and additional controls to be established as a result of such ICT-related incidents;
(f) establish ICT-related incident response procedures to mitigate impacts and ensure that services become operational and secure in a timely manner.
Financial entities shall classify ICT-related incidents and shall determine their impact based on the following criteria:
(a) the number and/or relevance of clients or financial counterparts affected and, where applicable, the amount or number of transactions affected by the ICT-related incident, and whether the ICT-related incident has caused reputational impact;
(b) the duration of the ICT-related incident, including the service downtime;
(c) the geographical spread with regard to the areas affected by the ICT-related incident, particularly if it affects more than two Member States;
(d) the data losses that the ICT-related incident entails, in relation to availability, authenticity, integrity or confidentiality of data;
(e) the criticality of the services affected, including the financial entity’s transactions and operations;
(f) the economic impact, in particular direct and indirect costs and losses, of the ICT-related incident in both absolute and relative terms.
Financial entities shall classify cyber threats as significant based on the criticality of the services at risk, including the financial entity’s transactions and operations, number and/or relevance of clients or financial counterparts targeted and the geographical spread of the areas at risk.
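To illustrate how the classification criteria above might be operationalised, the sketch below evaluates an incident against each criterion. The numeric thresholds and the decision rule are placeholders for illustration only; the binding materiality thresholds are set in the regulatory technical standards referred to in the next paragraph.

```python
from dataclasses import dataclass

@dataclass
class IncidentImpact:
    """Observed impact of an ICT-related incident, mirroring the criteria above."""
    clients_affected: int
    reputational_impact: bool
    downtime_hours: float
    member_states_affected: int
    data_losses: bool
    critical_services_affected: bool
    economic_impact_eur: float

def looks_major(impact: IncidentImpact) -> bool:
    """Flag an incident as potentially major for internal escalation.

    The thresholds and the decision rule below are assumptions; they do not
    reproduce the official materiality thresholds.
    """
    triggers = [
        impact.clients_affected > 10_000,
        impact.reputational_impact,
        impact.downtime_hours > 2,
        impact.member_states_affected > 2,
        impact.data_losses,
        impact.critical_services_affected,
        impact.economic_impact_eur > 100_000,
    ]
    return impact.critical_services_affected or sum(triggers) >= 2
```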
The ESAs shall, through the Joint Committee and in consultation with the ECB and ENISA, develop common draft regulatory technical standards further specifying the following:
(a) the criteria set out in paragraph 1, including materiality thresholds for determining major ICT-related incidents or, as applicable, major operational or security payment-related incidents, that are subject to the reporting obligation laid down in Article 19(1);
(b) the criteria to be applied by competent authorities for the purpose of assessing the relevance of major ICT-related incidents or, as applicable, major operational or security payment-related incidents, to relevant competent authorities in other Member States, and the details of reports of major ICT-related incidents or, as applicable, major operational or security payment-related incidents, to be shared with other competent authorities pursuant to Article 19(6) and (7);
(c) the criteria set out in paragraph 2 of this Article, including high materiality thresholds for determining significant cyber threats.
The ESAs shall submit those common draft regulatory technical standards to the Commission by 17 January 2024.
Power is delegated to the Commission to supplement this Regulation by adopting the regulatory technical standards referred to in paragraph 3 in accordance with Articles 10 to 14 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
Where a financial entity is subject to supervision by more than one national competent authority referred to in Article 46, Member States shall designate a single competent authority as the relevant competent authority responsible for carrying out the functions and duties provided for in this Article.
Credit institutions classified as significant, in accordance with Article 6(4) of Regulation (EU) No 1024/2013, shall report major ICT-related incidents to the relevant national competent authority designated in accordance with Article 4 of Directive 2013/36/EU, which shall immediately transmit that report to the ECB.
For the purpose of the first subparagraph, financial entities shall produce, after collecting and analysing all relevant information, the initial notification and reports referred to in paragraph 4 of this Article using the templates referred to in Article 20 and submit them to the competent authority. In the event that a technical impossibility prevents the submission of the initial notification using the template, financial entities shall notify the competent authority about it via alternative means.
The initial notification and reports referred to in paragraph 4 shall include all information necessary for the competent authority to determine the significance of the major ICT-related incident and assess possible cross-border impacts.
Without prejudice to the reporting pursuant to the first subparagraph by the financial entity to the relevant competent authority, Member States may additionally determine that some or all financial entities shall also provide the initial notification and each report referred to in paragraph 4 of this Article using the templates referred to in Article 20 to the competent authorities or the computer security incident response teams (CSIRTs) designated or established in accordance with Directive (EU) 2022/2555.
Credit institutions classified as significant, in accordance with Article 6(4) of Regulation (EU) No 1024/2013, may, on a voluntary basis, notify significant cyber threats to the relevant national competent authority, designated in accordance with Article 4 of Directive 2013/36/EU, which shall immediately transmit the notification to the ECB.
Member States may determine that those financial entities that on a voluntary basis notify in accordance with the first subparagraph may also transmit that notification to the CSIRTs designated or established in accordance with Directive (EU) 2022/2555.
In the case of a significant cyber threat, financial entities shall, where applicable, inform their clients that are potentially affected of any appropriate protection measures which the latter may consider taking.
(a) an initial notification;
(b) an intermediate report after the initial notification referred to in point (a), as soon as the status of the original incident has changed significantly or the handling of the major ICT-related incident has changed based on new information available, followed, as appropriate, by updated notifications every time a relevant status update is available, as well as upon a specific request of the competent authority;
(c) a final report, when the root cause analysis has been completed, regardless of whether mitigation measures have already been implemented, and when the actual impact figures are available to replace estimates.
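The three-stage reporting flow can be modelled as a small state machine. The sketch below only captures the ordering of the initial notification, intermediate report and final report; the concrete time limits are laid down in the technical standards referred to in Article 20 and are not modelled here, and the parameter names are assumptions.

```python
from enum import Enum
from typing import Optional

class ReportStage(Enum):
    INITIAL_NOTIFICATION = 1
    INTERMEDIATE_REPORT = 2
    FINAL_REPORT = 3

def next_report(current: Optional[ReportStage],
                status_changed_significantly: bool,
                root_cause_analysis_done: bool,
                actual_impact_figures_available: bool) -> Optional[ReportStage]:
    """Return the next report due for a major incident, if any."""
    if current is None:
        # Nothing submitted yet: the initial notification comes first.
        return ReportStage.INITIAL_NOTIFICATION
    if root_cause_analysis_done and actual_impact_figures_available:
        # Final report once the root cause analysis is complete and actual
        # impact figures can replace estimates.
        return ReportStage.FINAL_REPORT
    if current in (ReportStage.INITIAL_NOTIFICATION, ReportStage.INTERMEDIATE_REPORT) \
            and status_changed_significantly:
        # Intermediate or updated reports follow relevant status changes.
        return ReportStage.INTERMEDIATE_REPORT
    return None
```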
Financial entities may outsource, in accordance with Union and national sectoral law, the reporting obligations under this Article to a third-party service provider. In case of such outsourcing, the financial entity remains fully responsible for the fulfilment of the incident reporting requirements.
Upon receipt of the initial notification and of each report referred to in paragraph 4, the competent authority shall, in a timely manner, provide details of the major ICT-related incident to the following recipients based, as applicable, on their respective competences:
(a) EBA, ESMA or EIOPA;
(b) the ECB, in the case of financial entities referred to in Article 2(1), points (a), (b) and (d);
(c) the competent authorities, single points of contact or CSIRTs designated or established in accordance with Directive (EU) 2022/2555;
(d) the resolution authorities, as referred to in Article 3 of Directive 2014/59/EU, and the Single Resolution Board (SRB) with respect to entities referred to in Article 7(2) of Regulation (EU) No 806/2014 of the European Parliament and of the Council (37), and with respect to entities and groups referred to in Article 7(4)(b) and (5) of Regulation (EU) No 806/2014 if such details concern incidents that pose a risk to ensuring critical functions within the meaning of Article 2(1), point (35), of Directive 2014/59/EU; and
(e) other relevant public authorities under national law.
Following receipt of information in accordance with paragraph 6, EBA, ESMA or EIOPA and the ECB, in consultation with ENISA and in cooperation with the relevant competent authority, shall assess whether the major ICT-related incident is relevant for competent authorities in other Member States. Following that assessment, EBA, ESMA or EIOPA shall, as soon as possible, notify relevant competent authorities in other Member States accordingly. The ECB shall notify the members of the European System of Central Banks on issues relevant to the payment system. Based on that notification, the competent authorities shall, where appropriate, take all of the necessary measures to protect the immediate stability of the financial system.
The notification to be done by ESMA pursuant to paragraph 7 of this Article shall be without prejudice to the responsibility of the competent authority to urgently transmit the details of the major ICT-related incident to the relevant authority in the host Member State, where a central securities depository has significant cross-border activity in the host Member State, the major ICT-related incident is likely to have severe consequences for the financial markets of the host Member State and where there are cooperation arrangements among competent authorities related to the supervision of financial entities.
The ESAs, through the Joint Committee, and in consultation with ENISA and the ECB, shall develop:
(a) common draft regulatory technical standards in order to:
(i) establish the content of the reports for major ICT-related incidents in order to reflect the criteria laid down in Article 18(1) and incorporate further elements, such as details for establishing the relevance of the reporting for other Member States and whether it constitutes a major operational or security payment-related incident or not;
(ii) determine the time limits for the initial notification and for each report referred to in Article 19(4);
(iii) establish the content of the notification for significant cyber threats.
When developing those draft regulatory technical standards, the ESAs shall take into account the size and the overall risk profile of the financial entity, and the nature, scale and complexity of its services, activities and operations, and in particular, with a view to ensuring that, for the purposes of this paragraph, point (a), point (ii), different time limits may reflect, as appropriate, specificities of financial sectors, without prejudice to maintaining a consistent approach to ICT-related incident reporting pursuant to this Regulation and to Directive (EU) 2022/2555. The ESAs shall, as applicable, provide justification when deviating from the approaches taken in the context of that Directive;
The ESAs shall submit the common draft regulatory technical standards referred to in the first paragraph, point (a), and the common draft implementing technical standards referred to in the first paragraph, point (b), to the Commission by 17 July 2024.
Power is delegated to the Commission to supplement this Regulation by adopting the common regulatory technical standards referred to in the first paragraph, point (a), in accordance with Articles 10 to 14 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
Power is conferred on the Commission to adopt the common implementing technical standards referred to in the first paragraph, point (b), in accordance with Article 15 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
The ESAs, through the Joint Committee, and in consultation with the ECB and ENISA, shall prepare a joint report assessing the feasibility of further centralisation of incident reporting through the establishment of a single EU Hub for major ICT-related incident reporting by financial entities. The joint report shall explore ways to facilitate the flow of ICT-related incident reporting, reduce associated costs and underpin thematic analyses with a view to enhancing supervisory convergence.
The joint report referred to in paragraph 1 shall comprise at least the following elements:
(a) prerequisites for the establishment of a single EU Hub;
(b) benefits, limitations and risks, including risks associated with the high concentration of sensitive information;
(c) the necessary capability to ensure interoperability with regard to other relevant reporting schemes;
(d) elements of operational management;
(e) conditions of membership;
(f) technical arrangements for financial entities and national competent authorities to access the single EU Hub;
(g) a preliminary assessment of financial costs incurred by setting-up the operational platform supporting the single EU Hub, including the requisite expertise.
Without prejudice to the technical input, advice or remedies and subsequent follow-up which may be provided, where applicable, in accordance with national law, by the CSIRTs under Directive (EU) 2022/2555, the competent authority shall, upon receipt of the initial notification and of each report as referred to in Article 19(4), acknowledge receipt and may, where feasible, provide in a timely manner relevant and proportionate feedback or high-level guidance to the financial entity, in particular by making available any relevant anonymised information and intelligence on similar threats, and may discuss remedies applied at the level of the financial entity and ways to minimise and mitigate adverse impact across the financial sector. Without prejudice to the supervisory feedback received, financial entities shall remain fully responsible for the handling and for consequences of the ICT-related incidents reported pursuant to Article 19(1).
The ESAs shall, through the Joint Committee, on an anonymised and aggregated basis, report yearly on major ICT-related incidents, the details of which shall be provided by competent authorities in accordance with Article 19(6), setting out at least the number of major ICT-related incidents, their nature and their impact on the operations of financial entities or clients, remedial actions taken and costs incurred.
The ESAs shall issue warnings and produce high-level statistics to support ICT threat and vulnerability assessments.
The requirements laid down in this Chapter shall also apply to operational or security payment-related incidents and to major operational or security payment-related incidents, where they concern credit institutions, payment institutions, account information service providers, and electronic money institutions.
For the purpose of assessing preparedness for handling ICT-related incidents, of identifying weaknesses, deficiencies and gaps in digital operational resilience, and of promptly implementing corrective measures, financial entities, other than microenterprises, shall, taking into account the criteria set out in Article 4(2), establish, maintain and review a sound and comprehensive digital operational resilience testing programme as an integral part of the ICT risk-management framework referred to in Article 6.
The digital operational resilience testing programme shall include a range of assessments, tests, methodologies, practices and tools to be applied in accordance with Articles 25 and 26.
When conducting the digital operational resilience testing programme referred to in paragraph 1 of this Article, financial entities, other than microenterprises, shall follow a risk-based approach taking into account the criteria set out in Article 4(2) duly considering the evolving landscape of ICT risk, any specific risks to which the financial entity concerned is or might be exposed, the criticality of information assets and of services provided, as well as any other factor the financial entity deems appropriate.
Financial entities, other than microenterprises, shall ensure that tests are undertaken by independent parties, whether internal or external. Where tests are undertaken by an internal tester, financial entities shall dedicate sufficient resources and ensure that conflicts of interest are avoided throughout the design and execution phases of the test.
Financial entities, other than microenterprises, shall establish procedures and policies to prioritise, classify and remedy all issues revealed throughout the performance of the tests and shall establish internal validation methodologies to ascertain that all identified weaknesses, deficiencies or gaps are fully addressed.
Financial entities, other than microenterprises, shall ensure, at least yearly, that appropriate tests are conducted on all ICT systems and applications supporting critical or important functions.
The digital operational resilience testing programme referred to in Article 24 shall provide, in accordance with the criteria set out in Article 4(2), for the execution of appropriate tests, such as vulnerability assessments and scans, open source analyses, network security assessments, gap analyses, physical security reviews, questionnaires and scanning software solutions, source code reviews where feasible, scenario-based tests, compatibility testing, performance testing, end-to-end testing and penetration testing.
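A risk-based testing programme can be sketched as a simple planner that keeps systems supporting critical or important functions on an at-least-yearly cycle. The mapping of systems to test types below is an assumption chosen from the list above; the actual mix of tests is for the financial entity to determine under the criteria of Article 4(2).

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class IctSystem:
    """An ICT system or application in scope of the testing programme."""
    name: str
    supports_critical_function: bool
    last_tested: Optional[date] = None

# Assumed mapping of system categories to test types drawn from the list above.
BASELINE_TESTS = ["vulnerability assessment", "gap analysis"]
CRITICAL_TESTS = BASELINE_TESTS + ["scenario-based test", "penetration test"]

def tests_due(systems: list[IctSystem], today: date) -> dict[str, list[str]]:
    """Return the tests due per system, keeping systems that support critical
    or important functions on an at-least-yearly testing cycle."""
    plan: dict[str, list[str]] = {}
    for system in systems:
        overdue = (system.last_tested is None
                   or today - system.last_tested > timedelta(days=365))
        if overdue:
            plan[system.name] = (CRITICAL_TESTS if system.supports_critical_function
                                 else BASELINE_TESTS)
    return plan
```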
Central securities depositories and central counterparties shall perform vulnerability assessments before any deployment or redeployment of new or existing applications and infrastructure components, and ICT services supporting critical or important functions of the financial entity.
Microenterprises shall perform the tests referred to in paragraph 1 by combining a risk-based approach with a strategic planning of ICT testing, by duly considering the need to maintain a balanced approach between the scale of resources and the time to be allocated to the ICT testing provided for in this Article, on the one hand, and the urgency, type of risk, criticality of information assets and of services provided, as well as any other relevant factor, including the financial entity’s ability to take calculated risks, on the other hand.
Financial entities, other than entities referred to in Article 16(1), first subparagraph, and other than microenterprises, which are identified in accordance with paragraph 8, third subparagraph, of this Article, shall carry out at least every 3 years advanced testing by means of TLPT. Based on the risk profile of the financial entity and taking into account operational circumstances, the competent authority may, where necessary, request the financial entity to reduce or increase this frequency.
Each threat-led penetration test shall cover several or all critical or important functions of a financial entity, and shall be performed on live production systems supporting such functions.
Financial entities shall identify all relevant underlying ICT systems, processes and technologies supporting critical or important functions and ICT services, including those supporting the critical or important functions which have been outsourced or contracted to ICT third-party service providers.
Financial entities shall assess which critical or important functions need to be covered by the TLPT. The result of this assessment shall determine the precise scope of TLPT and shall be validated by the competent authorities.
Where ICT third-party service providers are included in the scope of TLPT, the financial entity shall take the necessary measures and safeguards to ensure the participation of such ICT third-party service providers in the TLPT and shall retain at all times full responsibility for ensuring compliance with this Regulation.
Without prejudice to paragraph 2, first and second subparagraphs, where the participation of an ICT third-party service provider in the TLPT, referred to in paragraph 3, is reasonably expected to have an adverse impact on the quality or security of services delivered by the ICT third-party service provider to customers that are entities falling outside the scope of this Regulation, or on the confidentiality of the data related to such services, the financial entity and the ICT third-party service provider may agree in writing that the ICT third-party service provider directly enters into contractual arrangements with an external tester, for the purpose of conducting, under the direction of one designated financial entity, a pooled TLPT involving several financial entities (pooled testing) to which the ICT third-party service provider provides ICT services.
That pooled testing shall cover the relevant range of ICT services supporting critical or important functions contracted to the respective ICT third-party service provider by the financial entities. The pooled testing shall be considered TLPT carried out by the financial entities participating in the pooled testing.
The number of financial entities participating in the pooled testing shall be duly calibrated taking into account the complexity and types of services involved.
Financial entities shall, with the cooperation of ICT third-party service providers and other parties involved, including the testers but excluding the competent authorities, apply effective risk management controls to mitigate the risks of any potential impact on data, damage to assets, and disruption to critical or important functions, services or operations at the financial entity itself, its counterparts or to the financial sector.
At the end of the testing, after reports and remediation plans have been agreed, the financial entity and, where applicable, the external testers shall provide to the authority, designated in accordance with paragraph 9 or 10, a summary of the relevant findings, the remediation plans and the documentation demonstrating that the TLPT has been conducted in accordance with the requirements.
Authorities shall provide financial entities with an attestation confirming that the test was performed in accordance with the requirements as evidenced in the documentation in order to allow for mutual recognition of threat-led penetration tests between competent authorities. The financial entity shall notify the relevant competent authority of the attestation, the summary of the relevant findings and the remediation plans.
Without prejudice to such attestation, financial entities shall remain at all times fully responsible for the impact of the tests referred to in paragraph 4.
Credit institutions that are classified as significant in accordance with Article 6(4) of Regulation (EU) No 1024/2013, shall only use external testers in accordance with Article 27(1), points (a) to (e).
Competent authorities shall identify financial entities that are required to perform TLPT taking into account the criteria set out in Article 4(2), based on an assessment of the following:
(a) impact-related factors, in particular the extent to which the services provided and activities undertaken by the financial entity impact the financial sector;
(b) possible financial stability concerns, including the systemic character of the financial entity at Union or national level, as applicable;
(c) specific ICT risk profile, level of ICT maturity of the financial entity or technology features involved.
Member States may designate a single public authority in the financial sector to be responsible for TLPT-related matters in the financial sector at national level and shall entrust it with all competences and tasks to that effect.
In the absence of a designation in accordance with paragraph 9 of this Article, and without prejudice to the power to identify the financial entities that are required to perform TLPT, a competent authority may delegate the exercise of some or all of the tasks referred to in this Article and Article 27 to another national authority in the financial sector.
The ESAs shall, in agreement with the ECB, develop joint draft regulatory technical standards in accordance with the TIBER-EU framework in order to specify further:
(a) the criteria used for the purpose of the application of paragraph 8, second subparagraph;
(b) the requirements and standards governing the use of internal testers;
(c) the requirements in relation to:
(i) the scope of TLPT referred to in paragraph 2;
(ii) the testing methodology and approach to be followed for each specific phase of the testing process;
(iii) the results, closure and remediation stages of the testing;
(d) the type of supervisory and other relevant cooperation which are needed for the implementation of TLPT, and for the facilitation of mutual recognition of that testing, in the context of financial entities that operate in more than one Member State, to allow an appropriate level of supervisory involvement and a flexible implementation to cater for specificities of financial sub-sectors or local financial markets.
When developing those draft regulatory technical standards, the ESAs shall give due consideration to any specific feature arising from the distinct nature of activities across different financial services sectors.
The ESAs shall submit those draft regulatory technical standards to the Commission by 17 July 2024.
Power is delegated to the Commission to supplement this Regulation by adopting the regulatory technical standards referred to in the first subparagraph in accordance with Articles 10 to 14 of Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010.
Financial entities shall only use testers for the carrying-out of TLPT, that:
(a) are of the highest suitability and reputability;
(b) possess technical and organisational capabilities and demonstrate specific expertise in threat intelligence, penetration testing and red team testing;
(c) are certified by an accreditation body in a Member State or adhere to formal codes of conduct or ethical frameworks;
(d) provide an independent assurance, or an audit report, in relation to the sound management of risks associated with the carrying out of TLPT, including the due protection of the financial entity’s confidential information and redress for the business risks of the financial entity;
(e) are duly and fully covered by relevant professional indemnity insurances, including against risks of misconduct and negligence.
(a) such use has been approved by the relevant competent authority or by the single public authority designated in accordance with Article 26(9) and (10);
(b) the relevant competent authority has verified that the financial entity has sufficient dedicated resources and ensured that conflicts of interest are avoided throughout the design and execution phases of the test; and
(c) the threat intelligence provider is external to the financial entity.
(a) financial entities that have in place contractual arrangements for the use of ICT services to run their business operations shall, at all times, remain fully responsible for compliance with, and the discharge of, all obligations under this Regulation and applicable financial services law;
(b) financial entities’ management of ICT third-party risk shall be implemented in light of the principle of proportionality, taking into account:
(i) the nature, scale, complexity and importance of ICT-related dependencies,
(ii) the risks arising from contractual arrangements on the use of ICT services concluded with ICT third-party service providers, taking into account the criticality or importance of the respective service, process or function, and the potential impact on the continuity and availability of financial services and activities, at individual and at group level.
As part of their ICT risk management framework, financial entities, other than entities referred to in Article 16(1), first subparagraph, and other than microenterprises, shall adopt, and regularly review, a strategy on ICT third-party risk, taking into account the multi-vendor strategy referred to in Article 6(9), where applicable. The strategy on ICT third-party risk shall include a policy on the use of ICT services supporting critical or important functions provided by ICT third-party service providers and shall apply on an individual basis and, where relevant, on a sub-consolidated and consolidated basis. The management body shall, on the basis of an assessment of the overall risk profile of the financial entity and the scale and complexity of the business services, regularly review the risks identified in respect to contractual arrangements on the use of ICT services supporting critical or important functions.
As part of their ICT risk management framework, financial entities shall maintain and update at entity level, and at sub-consolidated and consolidated levels, a register of information in relation to all contractual arrangements on the use of ICT services provided by ICT third-party service providers.
The contractual arrangements referred to in the first subparagraph shall be appropriately documented, distinguishing between those that cover ICT services supporting critical or important functions and those that do not.
Financial entities shall report at least yearly to the competent authorities on the number of new arrangements on the use of ICT services, the categories of ICT third-party service providers, the type of contractual arrangements and the ICT services and functions which are being provided.
Financial entities shall make available to the competent authority, upon its request, the full register of information or, as requested, specified sections thereof, along with any information deemed necessary to enable the effective supervision of the financial entity.
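The register of information can start as a very small data structure maintained at entity level (and, where relevant, at sub-consolidated and consolidated level). The fields and the yearly report figures in the sketch below are assumptions chosen to mirror the reporting items listed above, not a prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IctContract:
    """One contractual arrangement on the use of ICT services (hypothetical fields)."""
    provider: str
    ict_service: str
    supports_critical_or_important_function: bool
    contract_type: str  # e.g. "cloud", "outsourcing", "software licence"
    signed_on: date

@dataclass
class RegisterOfInformation:
    """Entity-level register of all ICT contractual arrangements."""
    entries: list[IctContract] = field(default_factory=list)

    def add(self, contract: IctContract) -> None:
        self.entries.append(contract)

    def yearly_report(self, year: int) -> dict:
        """Figures for the at-least-yearly report to the competent authority."""
        new = [c for c in self.entries if c.signed_on.year == year]
        return {
            "new_arrangements": len(new),
            "providers": sorted({c.provider for c in new}),
            "contract_types": sorted({c.contract_type for c in new}),
            "covering_critical_or_important_functions": sum(
                c.supports_critical_or_important_function for c in new
            ),
        }
```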
Financial entities shall inform the competent authority in a timely manner about any planned contractual arrangement on the use of ICT services supporting critical or important functions as well as when a function has become critical or important.
Before entering into a contractual arrangement on the use of ICT services, financial entities shall:
(a) assess whether the contractual arrangement covers the use of ICT services supporting a critical or important function;
(b) assess if supervisory conditions for contracting are met;
(c) identify and assess all relevant risks in relation to the contractual arrangement, including the possibility that such contractual arrangement may contribute to reinforcing ICT concentration risk as referred to in Article 29;
(d) undertake all due diligence on prospective ICT third-party service providers and ensure throughout the selection and assessment processes that the ICT third-party service provider is suitable;
(e) identify and assess conflicts of interest that the contractual arrangement may cause.
Financial entities may only enter into contractual arrangements with ICT third-party service providers that comply with appropriate information security standards. When those contractual arrangements concern critical or important functions, financial entities shall, prior to concluding the arrangements, take due consideration of the use, by ICT third