We've designed an AI chatbot that provides financial advice based on users' spending habits, earnings, and long-term goals. The bot also offers insights on market trends and investment opportunities. What considerations are critical when offering financial advice through an AI system?

Gist 1

The following artificial intelligence practices shall be prohibited: (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques… (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons… (Article 5)

When providing financial advice through your AI chatbot, it is crucial to ensure that the system neither employs manipulative or subliminal techniques that influence an individual's decision-making nor exploits an individual's vulnerabilities. The AI system should focus on delivering clear, unbiased guidance.

Prior to putting a high-risk AI system… into use, deployers shall conduct an assessment of the systems’ impact in the specific context of use. This assessment shall include, at a minimum, the following elements: (a) a clear outline of the intended purpose for which the system will be used; … (d) verification that the use of the system is compliant with relevant Union and national law on fundamental rights; … (f) specific risks of harm likely to impact marginalized persons or vulnerable groups; … (j) the governance system the deployer will put in place, including human oversight, complaint-handling, and redress. (Article 29a)

According to Article 29a, deploying a high-risk AI system requires a detailed pre-use assessment. This assessment must outline the system's intended purpose, verify compliance with relevant law on fundamental rights, identify specific risks to marginalised or vulnerable groups, and describe the governance system the deployer will put in place, including human oversight, complaint handling, and redress.
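In engineering terms, these assessment elements can be tracked as a structured, sign-off-gated record. Below is a minimal Python sketch; the class, field names, and completeness rule are illustrative assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class PreUseAssessment:
    """Illustrative record of the Article 29a assessment elements."""
    intended_purpose: str                      # (a) what the system will be used for
    fundamental_rights_check: bool = False     # (d) compliance with Union/national law verified
    risks_to_vulnerable_groups: list[str] = field(default_factory=list)  # (f)
    human_oversight_plan: str = ""             # (j) governance: oversight, complaints, redress
    complaint_handling_contact: str = ""

    def is_complete(self) -> bool:
        """A naive completeness gate before deployment sign-off."""
        return bool(self.intended_purpose and self.fundamental_rights_check
                    and self.human_oversight_plan)

assessment = PreUseAssessment(
    intended_purpose="Personalised financial guidance via chatbot",
    fundamental_rights_check=True,
    risks_to_vulnerable_groups=["users in financial distress", "elderly users"],
    human_oversight_plan="Licensed adviser reviews flagged conversations",
    complaint_handling_contact="compliance@example.com",
)
assert assessment.is_complete()
```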

Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself, or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear, and intelligible manner unless this is obvious from the circumstances and the context of use. (Article 52)

Transparency is a legal obligation: Article 52 requires that users be informed, in a timely, clear, and intelligible manner, that they are interacting with an AI system. Your users should therefore know from the outset that their financial advice comes from an AI system.
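A simple way to satisfy this in practice is to surface the disclosure before any advice is exchanged. A minimal Python sketch follows; the wording and session handling are illustrative assumptions, not language prescribed by the Act.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human adviser. "
    "Its suggestions are informational and do not constitute regulated financial advice."
)

def start_session(session_state: dict) -> str:
    """Return the disclosure once, before any advice is given (Article 52)."""
    if not session_state.get("disclosure_shown"):
        session_state["disclosure_shown"] = True
        return AI_DISCLOSURE
    return ""

session = {}
print(start_session(session))   # disclosure on first contact
print(start_session(session))   # empty string: already shown
```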

Without prejudice to any other administrative or non-judicial remedy, each natural or legal person shall have the right to an effective judicial remedy against a legally binding decision of a national supervisory authority concerning them. (Article 68b)

Under Article 68b, users of your AI system have a right to a judicial remedy against decisions of the national supervisory authority that concern them. This ensures the protection of user rights.

AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud. (Annex III: 5.b)

As per Annex III, if your AI chatbot makes or materially affects credit decisions, it may be classified as a high-risk system. This classification carries additional obligations.

Union law on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. (Recital 80)

As per Recital 80, when providing financial advice, it is imperative to ensure compliance with the internal governance and risk management requirements specified in the Union law on financial services.

In conclusion, when providing financial advice through an AI system, the critical considerations are the prohibition of manipulative techniques, the pre-use assessment for high-risk systems, transparency, the right to a judicial remedy, possible classification as a high-risk system, and compliance with existing financial-services requirements. This analysis is not a substitute for legal advice, and it may be beneficial to consult a legal expert on these topics.

Gist 2

The following artificial intelligence practices shall be prohibited: (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour … (b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a person or a specific group of persons, … (Article 5)

This Article prohibits AI systems that deploy subliminal techniques, manipulate users, or exploit persons' vulnerabilities. It is therefore paramount that your AI chatbot provides financial advice that is clear, honest, and unambiguous, and that no manipulative techniques or exploitation of vulnerabilities occur.

High-risk AI systems shall be developed on the basis of training, validation and testing data sets that meet the quality criteria … (Article 10)

If your AI chatbot is classified as a high-risk AI system, it needs to use high-quality training, validation, and testing data sets. This means applying good data governance and ensuring that the datasets used are relevant, representative, comprehensive, and of high quality.
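In practice, that governance can begin with automated checks over the training data. Here is a minimal sketch using pandas; the column names (e.g. `age_band`) and thresholds are assumptions chosen for illustration.

```python
import pandas as pd

def check_dataset_quality(df: pd.DataFrame) -> list[str]:
    """Flag basic quality problems in a training set, in the spirit of Article 10."""
    issues = []
    # Completeness: no column should be mostly missing.
    missing = df.isna().mean()
    issues += [f"{col}: {pct:.0%} missing" for col, pct in missing.items() if pct > 0.05]
    # Representativeness: a crude check that an assumed demographic column is not one-sided.
    if "age_band" in df.columns:
        shares = df["age_band"].value_counts(normalize=True)
        if shares.max() > 0.8:
            issues.append("age_band heavily skewed; sample may not be representative")
    # Duplicates inflate apparent dataset size.
    if df.duplicated().any():
        issues.append(f"{df.duplicated().sum()} duplicate rows")
    return issues
```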

High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning. (Article 13)

In terms of transparency, the workings of your AI chatbot must be clear enough for both providers and users to reasonably understand how it functions. This requirement is designed to build trust with users and ensure they can make informed decisions based on the advice the system provides.
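One common way to make a system's functioning legible is to return the drivers of each recommendation alongside the advice itself. A minimal sketch, with an illustrative response structure of our own devising:

```python
from dataclasses import dataclass

@dataclass
class AdviceResponse:
    recommendation: str
    rationale: list[str]      # human-readable factors that drove the recommendation
    data_used: list[str]      # which user inputs were considered
    confidence: float         # the model's own estimate, shown to the user

def explain(resp: AdviceResponse) -> str:
    """Render the advice together with why it was given."""
    reasons = "; ".join(resp.rationale)
    return (f"{resp.recommendation}\n"
            f"Why: {reasons}\n"
            f"Based on: {', '.join(resp.data_used)} (confidence {resp.confidence:.0%})")

print(explain(AdviceResponse(
    recommendation="Consider building a 3-month emergency fund before investing.",
    rationale=["monthly spending exceeds 80% of income", "no liquid savings on record"],
    data_used=["spending history", "stated income"],
    confidence=0.72,
)))
```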

AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud. (Annex III point 5b)

Given its intended use, your chatbot may be classified as a high-risk AI system if it is used to assess individuals’ creditworthiness or determine their credit scores. This would mean upholding additional regulatory measures stipulated by the AI Act. If your system’s primary function is detecting financial fraud, it may be exempt from this high-risk classification.

AI systems intended to be used for making decisions or materially influencing decisions on the eligibility of natural persons for health and life insurance. (Annex III point 5ba)

This provision doesn’t directly apply to your AI chatbot, but it does signal the AI Act’s approach to decision-making in sensitive areas like financial matters. If your chatbot exerts significant influence on decisions regarding an individual’s eligibility for financial products, it could be considered a high-risk system.

A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. (Article 9)

Should your AI system be classified as high-risk, you will need to establish a comprehensive risk management system. This management system should be thoroughly documented and maintained throughout the AI system’s lifecycle to mitigate known and foreseeable risks and maintain system performance.
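On the engineering side, the documentation component of such a system often takes the form of a living risk register with review dates. A minimal sketch; the fields and example risks are illustrative assumptions, not terminology from the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Risk:
    description: str
    severity: str          # e.g. "low" / "medium" / "high"
    mitigation: str
    owner: str
    last_reviewed: date

risk_register = [
    Risk("Model recommends unsuitable high-risk products to risk-averse users",
         "high", "Suitability filter plus human review of investment advice",
         "compliance-team", date(2024, 1, 15)),
    Risk("Training data drifts from current market conditions",
         "medium", "Quarterly retraining and backtesting", "ml-team", date(2024, 1, 15)),
]

def overdue(register, today: date, max_age_days: int = 90):
    """Risks whose review is stale; part of maintaining the system over its lifecycle."""
    return [r for r in register if (today - r.last_reviewed).days > max_age_days]
```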

In conclusion, a detailed risk assessment is advised to ascertain whether your AI chatbot fully complies with the AI Act and whether additional safeguards or refinements to its functionality are needed to fulfil the specific obligations set for high-risk AI systems.

Gist 3

The AI Act has several provisions that would apply to your AI chatbot that provides financial advice. I have identified key aspects from the Act that you should consider.

AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; (Annex III - 5(b))

If your chatbot creates or suggests credit scores, or evaluates creditworthiness based on the financial data provided, it might fall into the category of high-risk AI systems under Annex III.

High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 as far as this is technically feasible according to the specific market segment or scope of application. (Article 10)

As per Article 10, if your AI chatbot qualifies as a high-risk system, it must be developed using training, validation, and testing datasets that meet the Act's quality criteria.

High-risk AI systems shall be designed and developed following the principle of security by design and by default. In the light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their lifecycle. (Article 15)

Under Article 15, your chatbot must follow the principle of security by design and by default. It is also expected to achieve an appropriate level of accuracy, robustness, safety, and cybersecurity throughout its lifecycle.
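At the implementation level, security by design starts with treating every user message as untrusted input. Below is a minimal sketch of pre-processing guardrails for a chat endpoint; the length limit and redaction patterns are deliberately crude illustrations.

```python
import re

MAX_MESSAGE_LEN = 2000
# Crude patterns for data that should never be echoed back or logged in plain text.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def sanitize_user_message(message: str) -> str:
    """Validate and redact a chat message before it reaches the model or the logs."""
    if len(message) > MAX_MESSAGE_LEN:
        raise ValueError("message too long")
    message = IBAN_RE.sub("[REDACTED-IBAN]", message)
    message = CARD_RE.sub("[REDACTED-CARD]", message)
    return message

print(sanitize_user_message("My IBAN is DE89370400440532013000, can I afford a car?"))
```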

Regarding prohibitions, Article 5.1(a) bars the deployment of manipulative or deceptive techniques that could materially distort a user's behaviour, and Article 5.1(b) bars the exploitation of the vulnerabilities of a person or a specific group. In particular:

the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm; (Article 5.1a)

Finally, Article 5.1c prohibits the use of AI for social scoring or classification of individuals that could lead to unfair treatment.

the placing on the market, putting into service or use of AI systems for the social scoring evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts that are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity; (Article 5.1c)

In conclusion, you need to consider technical aspects such as system robustness, data quality, and security, but equally important are the ethical aspects such as avoiding manipulation, deception, or exploitation of vulnerabilities. It’s recommended to consult with a legal professional to understand this complex regulatory landscape.

Gist 4

High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable providers and users to reasonably understand the system’s functioning. Appropriate transparency shall be ensured in accordance with the intended purpose of the AI system, with a view to achieving compliance with the relevant obligations of the provider and user set out in Chapter 3 of this Title. (Article 13)

This article emphasizes the importance of designing AI systems in a way that permits both providers and users to understand their operation. When applied to a chatbot dispensing financial advice, it means users need to comprehend how the advice they receive is generated. This transparency, according to the purpose of the AI system, helps to uphold key compliance obligations set out in the Act.

High-risk AI systems shall be accompanied by intelligible instructions for use in an appropriate digital format or made otherwise available in a durable medium that includes concise, correct, clear and to the extent possible complete information that helps operating and maintaining the AI system as well as supporting informed decision-making by users and is reasonably relevant, accessible and comprehensible to users. (Article 13)

Article 13 further requires high-risk AI systems to come with clear, intelligible instructions for use. The information provided should help users operate and maintain the AI system and support informed decision-making. In the case of a financial advice chatbot, ensuring users fully understand the AI's recommendations is crucial for meaningful engagement with it.

High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons as proportionate to the risks associated with those systems. Natural persons in charge of ensuring human oversight shall have a sufficient level of AI literacy in accordance with Article 4b and the necessary support and authority to exercise that function during the period in which the AI system is in use and to allow for thorough investigation after an incident. (Article 14)

Providers of AI chatbots offering financial advice must design their systems to allow effective oversight by humans. The individuals tasked with this oversight should have a sufficient level of AI literacy to monitor the AI system effectively and to conduct a thorough investigation after any incident.
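In a chatbot, effective oversight typically means routing uncertain or high-stakes drafts to a human before the user sees them. A minimal sketch; the `HumanReviewQueue` class, the topic list, and the confidence threshold are hypothetical.

```python
HIGH_STAKES_TOPICS = {"pension transfer", "mortgage", "leveraged investment"}

class HumanReviewQueue:
    """Stand-in for a ticketing/review system staffed by qualified reviewers."""
    def __init__(self):
        self.items = []
    def submit(self, draft: str, reason: str) -> None:
        self.items.append((draft, reason))

def deliver_advice(draft: str, topic: str, confidence: float,
                   queue: HumanReviewQueue, threshold: float = 0.8) -> str:
    """Send low-confidence or high-stakes drafts to a human instead of the user."""
    if confidence < threshold or topic in HIGH_STAKES_TOPICS:
        queue.submit(draft, reason=f"topic={topic}, confidence={confidence:.2f}")
        return "A qualified adviser will review this question and get back to you."
    return draft

queue = HumanReviewQueue()
print(deliver_advice("Consider overpaying your mortgage.", "mortgage", 0.93, queue))
```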

High-risk AI systems shall be designed and developed following the principle of security by design and by default. In the light of their intended purpose, they should achieve an appropriate level of accuracy, robustness, safety, and cybersecurity, and perform consistently in those respects throughout their lifecycle. Compliance with these requirements shall include implementation of state-of-the-art measures, according to the specific market segment or scope of application. (Article 15)

The AI Act mandates that high-risk AI systems, which a financial-advice chatbot may well be, adopt a security-by-design approach. Such systems should demonstrate appropriate levels of accuracy, robustness, safety, and cybersecurity, and maintain these qualities throughout their lifecycle. This further implies implementing state-of-the-art protective measures fitting the system's market segment or scope of application.
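Performing consistently throughout the lifecycle implies monitoring after deployment, not only testing before it. A minimal sketch of rolling-accuracy tracking with an alert floor; the window size and threshold are illustrative.

```python
from collections import deque

class AccuracyMonitor:
    """Track a rolling window of outcome labels and flag degradation."""
    def __init__(self, window: int = 500, floor: float = 0.85):
        self.outcomes = deque(maxlen=window)   # True = advice judged appropriate
        self.floor = floor

    def record(self, was_appropriate: bool) -> None:
        self.outcomes.append(was_appropriate)

    def degraded(self) -> bool:
        if len(self.outcomes) < 50:            # not enough data to judge yet
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.floor

monitor = AccuracyMonitor()
for ok in [True] * 40 + [False] * 20:
    monitor.record(ok)
print("alert!" if monitor.degraded() else "within tolerance")
```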

Access to and enjoyment of essential private services and public services and benefits: AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud; (Annex III, 5(b))

AI systems used to assess an individual's creditworthiness or establish credit scores come under the 'high-risk' category, with the exception of systems meant for detecting financial fraud. If your financial-advice chatbot performs or materially influences such assessments, it would need to adhere to the stringent requirements set out for 'high-risk' AI systems.

The placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective to or the effect of materially distorting a person’s or a group of persons’ behaviour by appreciably impairing the person’s ability to make an informed decision, thereby causing the person to take a decision that that person would not have otherwise taken in a manner that causes or is likely to cause that person, another person or group of persons significant harm. (Article 5 (a))

The Act prohibits AI systems from deploying subliminal, manipulative or deceptive techniques that could influence a person’s financial decisions. Consequently, your AI chatbot should avoid approaches that may impair a person’s ability to make informed decisions, leading to potential harm.
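As one concrete safeguard, a deployer might screen draft replies for pressure tactics before sending them. The sketch below is a crude illustrative filter; the phrase list is an assumption, and keyword matching is no substitute for substantive compliance review.

```python
import re

# Illustrative urgency/pressure phrasing; a real review process goes far beyond keywords.
PRESSURE_PATTERNS = [
    re.compile(p, re.IGNORECASE) for p in (
        r"\bact now\b", r"\blimited[- ]time\b", r"\bguaranteed returns?\b",
        r"\bdon'?t miss out\b", r"\beveryone is buying\b",
    )
]

def flag_manipulative_language(advice: str) -> list[str]:
    """Return the pressure phrases found in a draft reply, for human review."""
    return [p.pattern for p in PRESSURE_PATTERNS if p.search(advice)]

hits = flag_manipulative_language("Guaranteed returns if you act now!")
print(hits)  # both patterns match; the draft should be blocked or escalated
```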

This Article shall not affect the prohibitions that apply where an artificial intelligence practice infringes another Union law, including the Union law on data protection, non-discrimination, consumer protection, or competition. (Article 5 (1a))

Finally, while adhering to the AI Act's provisions on prohibited practices, your system must also respect other Union law, including rules on data protection, non-discrimination, consumer protection, and competition.