Banking in the age of artificial intelligence: insights on the AI Act in the financial sector

Eximee Team
Published 09 Jun, 2025

In recent years, artificial intelligence (AI) has rapidly grown in importance, transforming everyday activities and business operations alike. While the use of AI systems can be beneficial in a number of areas, it has also raised concerns and provoked discussions around its regulation.

In the face of those challenges, the European Parliament has adopted a regulation on artificial intelligence, known as the AI Act, which is the first comprehensive piece of legislation seeking to regulate the use of AI systems across different markets, including the already highly regulated banking sector, where financial and personal information must be responsibly handled and effectively protected.

What is the European AI Act and what is its significance in banking? These are the questions that this article will attempt to answer.

The areas of AI application in banking

There are a number of banking processes that currently utilize AI and will likely continue to employ those technologies in the future. Among those, the following can be identified:

  • Customer due diligence (CDD): identity and risk profile verification for the purpose of evaluating relevant facts about an individual or business to mitigate risks associated with financial crimes.
  • Credit scoring: creditworthiness assessment based on algorithms.
  • Fraud detection: transaction pattern analysis for the purpose of recognizing inconsistencies and identifying fraudulent operations.
  • Process automation: workload optimization through the automation of repetitive manual tasks.
  • AI customer assistance: customer support automation with the use of chatbots, robo-advisors, and personal AI assistants.
  • Service personalization: tailored communications and offers based on specific customer characteristics and previous activities.
  • Customer rating systems: systems used to collect and process customer feedback in order to measure customer satisfaction and subsequently enhance customer experiences.

In those processes, AI models, including language processing models, are utilized to analyze and systematize large volumes of data in ways that are generally quicker and more precise than manual processing. Therefore, all of the above fall under the purview of the AI Act.
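
As a concrete illustration of transaction pattern analysis, the sketch below flags an outlying transaction with an anomaly detector, assuming Python with scikit-learn; the feature names, data, and model choice are illustrative assumptions, not a description of any production banking system.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical transaction features: [amount_eur, seconds_since_last_tx, distance_km]
normal_tx = rng.normal(loc=[50.0, 3600.0, 5.0],
                       scale=[30.0, 1800.0, 3.0],
                       size=(1000, 3))
suspicious_tx = np.array([[9500.0, 12.0, 4200.0]])  # large, rapid, far from usual area

# Train on historical (assumed legitimate) transactions only
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_tx)

# predict() returns 1 for inliers and -1 for anomalies
print(model.predict(suspicious_tx))  # expected: [-1], i.e. flagged for review
```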

Why does the use of AI need to be regulated?

AI can be used for a number of purposes, from entertainment to highly specialized business applications. Because it has the potential to imitate, simulate, and augment reality, its unethical use carries a number of risks and can result in deceptive media (deepfakes), algorithmic bias, and privacy violations. To mitigate those threats, clear guidelines on responsible AI distribution and consumption need to be established.

AI systems and models according to the AI Act

The AI Act defines an AI system as a machine-based system designed to operate at different levels of autonomy. Its other characteristics include adaptability after deployment and the ability to infer how to generate results based on inputs. These results can take the form of predictions, content, recommendations, or decisions affecting the physical or virtual environment.

At the same time, the act does not define an AI model. It only mentions “general-purpose AI models,” which are trained on large data sets and can perform a wide range of different tasks.

The key elements defining AI systems

The act notes that determining whether a piece of software is an AI system should be based on an assessment of its architecture and functionality, taking into account the seven elements of the definition:

  1. Required condition: a machine-based system. An AI system must be built and run on machines both in the hardware sense and in the software sense.
  2. Required condition: a system designed with varying levels of autonomy. Autonomy is defined here in the context of human-machine interaction: an AI system is understood as one that can operate regardless of whether a human is involved in its functioning. The manual provision of input data for the system to generate a result does not take away its autonomy; therefore, only tools that require full human involvement in their operation fail to meet this condition.
  3. Optional condition: the system can exhibit adaptability. The system may also display self-learning capabilities after deployment. As a result, it may generate different outputs from the same input data.
  4. Required condition: the purpose of the system is explicit or implicit. Implicit goals are those that have not been explicitly defined by the developer, but may result from factors such as the system’s interaction with the environment. This condition is always met in practice.
  5. Required condition: the system generates outputs. The system is able to infer how to generate outputs, for example by deriving AI models or algorithms from input data. The fulfillment of this condition is enabled by certain modeling techniques.
  6. Required condition: types of results generated. AI tools use sophisticated modeling systems and complex patterns to generate results that are categorized as predictions, content, recommendations, or decisions. This condition is always met in practice.
  7. Optional condition: the results can affect the physical or virtual environment. This means that results generated by AI systems are applied in real processes, an aspect that is particularly important in the process of assessing risks associated with the use of artificial intelligence. This condition is always met in practice.

The above conditions may be met selectively, at the stage of building the system or at the stage of using it.
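
One way to picture this assessment is as a checklist over the seven elements. The sketch below is an illustrative simplification in Python, not a legal test; the field names and the rule that all required elements must hold are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    machine_based: bool          # 1. required
    some_autonomy: bool          # 2. required
    adaptive: bool               # 3. optional
    has_objective: bool          # 4. required (explicit or implicit)
    infers_outputs: bool         # 5. required
    typed_outputs: bool          # 6. required (predictions, content, ...)
    affects_environment: bool    # 7. optional

def may_be_ai_system(p: SystemProfile) -> bool:
    # Optional elements (3 and 7) need not hold; all required ones must
    required = (p.machine_based, p.some_autonomy, p.has_objective,
                p.infers_outputs, p.typed_outputs)
    return all(required)

rule_based_calculator = SystemProfile(True, False, False, True, False, True, True)
print(may_be_ai_system(rule_based_calculator))  # False: no autonomy, no inference
```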

Classification of modeling methods

Modeling methods are mathematical and algorithmic techniques for representing real processes in the form of models that enable their analysis, prediction, or optimization. Depending on the approach, they can be based on rules and formulas or on data and self-learning.

The following simplified classification is based on the April 2025 summary of the activities of the AI working group at the Banking Technologies Forum.

Classical modeling follows fixed rules and formulas created by humans, while artificial intelligence is based on algorithms with a learning component for creating systems capable of performing tasks that typically require human intelligence.


Machine learning (ML) methods partially fit into both of the above categories, but their classification will not always be straightforward. While traditional ML models, such as linear regression or decision trees, are based on fixed mathematical rules and specific logic, there are more sophisticated methods that learn patterns based on feedback.
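
To make the distinction concrete, here is a minimal sketch contrasting a fixed-rule model with a learned one, assuming Python with scikit-learn; the rule, feature names, and training data are hypothetical.

```python
from sklearn.tree import DecisionTreeClassifier

def classical_score(income: float, debt: float) -> int:
    """Fixed formula written by a human analyst: no learning component."""
    return 1 if income - 2 * debt > 0 else 0  # 1 = approve, 0 = decline

# A learned model infers its own decision boundary from labelled examples
X = [[3000, 500], [1200, 900], [5000, 100], [800, 700]]  # [income, debt]
y = [1, 0, 1, 0]                                         # past outcomes
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(classical_score(2500, 800))      # rule-based result: 1
print(tree.predict([[2500, 800]])[0])  # learned result, depends on the data
```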

Deep learning (DL) methods are a subset of machine learning. They use multilayer neural networks to analyze complex patterns and large data sets. They are used in audio analysis, image recognition, and natural language processing (NLP).

Generative artificial intelligence (GenAI) is a subset of deep learning that includes methods that learn from patterns drawn from large data sets to generate similar content, such as images, videos, or sounds.

Each of the above modeling methods has different capabilities, applications, and limitations. The appropriate classification of AI systems allows them to be used efficiently and responsibly, helping reduce potential security risks.

What are the provisions of the AI Act?

The AI Act seeks to ensure the safe use of artificial intelligence while preserving fundamental human rights. At the same time, another objective of the regulation is to stimulate innovation and encourage investment in AI. The act is therefore not designed to prohibit or limit the use of AI, but to encourage its responsible application.

This regulation defines the different parties involved in AI application, from providers through deployers, importers, and distributors to manufacturers, and its provisions apply to all entities that meet at least one of the criteria below:

  • entities established or located in the EU that use AI systems for business purposes,
  • entities established or located either in or outside the EU supplying AI tools to the EU market,
  • entities established or located either in or outside the EU that use AI systems for business purposes, where the output of those systems is used within the European Union.

It is important to note that the EU AI Act is “horizontal” in its scope, which means that instead of addressing individual industries such as the banking sector or specific types of AI systems, the regulation has a broad application focusing on the use of artificial intelligence in different areas and territories. In this way, it resembles the GDPR and DORA, which are also general in nature.

The tiered system of regulatory requirements

AI systems are now used for a wide variety of purposes, so, to regulate their utilization, the European legislator has organized the act's requirements into tiers expressing the level of risk associated with a tool or its usage.

  • Unacceptable risk. This category includes systems based on social scoring as well as those using real-time remote biometric identification. The use of such systems is strictly prohibited under the AI Act.
  • High risk. Before entering the market, high-risk systems must undergo a conformity assessment, and their use is permitted if the operating entity ensures appropriate risk management processes, data governance procedures, and cybersecurity measures. In banking, systems within this category include AI credit scoring models and fraud detection systems.
  • Limited risk. Solutions improving customer experience such as chatbots as well as generative AI systems are subject to transparency requirements, which means that the user must be advised that the content that they are viewing or interacting with is AI-generated.

  • Minimal or no risk. Although no mandatory requirements are prescribed for this category, the AI Act encourages all parties involved in the production, distribution, and deployment of minimal-risk AI systems to adhere to relevant codes of conduct.
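
As a rough illustration of this tiering, the sketch below maps the banking use cases mentioned in this article onto the act's risk categories in Python; the tier assignments follow the examples above and are a simplification for illustration, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, risk management, data governance"
    LIMITED = "transparency obligations (disclose AI involvement)"
    MINIMAL = "voluntary codes of conduct"

# Illustrative assignments based on the examples discussed in this article
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "fraud detection": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```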


In this way, the EU AI Act covers a variety of different systems and solutions utilized across a multitude of industries, providing guidance on their safe and responsible use.

Criteria for classifying practices prohibited by the AI Act in the context of financial entities

When classifying AI-based systems in terms of the risks involved, it is necessary to look carefully at the context in which a given solution is to be used. This means analyzing what impact it may have on individuals and their rights, taking into account the risk of potential harm.

Practices prohibited under the AI Act regulations include profiling individuals to predict or assess their risk of committing a crime, categorizing individuals based on their biometric data for sensitive characteristics, and drawing inferences about an individual’s emotions based on biometric data.

The prohibited practices also include the use of manipulation or subliminal techniques to take advantage of the vulnerability of individuals or groups of individuals based on their age, disability, or social or economic situation. A relevant example in banking would be a system for personalizing financial offers using biased data: a system basing its processes on information about customer age groups could, for instance, offer financial products on unfavorable terms to the elderly, causing them financial harm. It is therefore important to ensure that the data and tools utilized by banks in AI-assisted processes are used responsibly and appropriately for the intended purposes.

Among the mechanisms banned by the AI Act, those less likely to find application in banking include social scoring and the untargeted scraping of facial images.

Criteria for classifying high-risk systems in the context of financial entities

The vast majority of obligations under the AI Act apply to high-risk systems, a category into which most artificial intelligence applications in banking also fall.

The criteria for classifying AI systems as high-risk include the use of remote biometric identification systems, among them AI tools used for biometric categorization based on sensitive characteristics. In the area of employment and employee management, systems used in recruitment processes or for assigning tasks to bank employees will also fall into this category. Another application in this category concerns ensuring the security of critical digital infrastructure.

In terms of serving the bank’s customers, among high-risk systems we will find tools used to assess creditworthiness and, less frequently, tools used to assess insurance risk. An example is a real-time credit risk monitoring system, which, through self-adaptation based on new data, updates assessment parameters and makes decisions based on the results generated.
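
A minimal sketch of such a self-adapting model, assuming Python with scikit-learn's incremental learning API, is shown below; the features, data, and model choice are hypothetical illustrations.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# log_loss gives a probabilistic classifier that supports incremental updates
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial batch: [debt_ratio, missed_payments] -> defaulted (1) or not (0)
X0 = np.array([[0.2, 0.0], [0.8, 3.0], [0.4, 1.0], [0.9, 4.0]])
y0 = np.array([0, 1, 0, 1])
model.partial_fit(X0, y0, classes=np.array([0, 1]))

# Each new repayment record updates the assessment parameters in place,
# so the same applicant may be scored differently over time
model.partial_fit(np.array([[0.5, 2.0]]), np.array([1]))
print(model.predict_proba(np.array([[0.45, 1.0]])))  # updated default probability
```

Because the decision boundary shifts with every update, such a system calls for the ongoing risk management and monitoring that the AI Act prescribes for high-risk applications.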

Establishment of the European Artificial Intelligence Office

The AI Act has also provided for the establishment of the European AI Office, a body within the European Commission that oversees the implementation of the act's provisions. The Office is tasked with fostering the development and responsible use of AI technology while ensuring protection against associated risks. Its responsibilities include enforcing the rules of conduct for general-purpose AI models. The institution also fulfills an advisory role, assisting the governing bodies of member states in carrying out their tasks under the AI Act.

To a certain extent, the supervision and enforcement of the AI Act is to be entrusted to bodies responsible for a given industry. In the case of banking and financial services, these will be the relevant financial services authorities in the member states, together with the European Supervisory Authorities (ESAs): the European Banking Authority (EBA), the European Securities and Markets Authority (ESMA), and the European Insurance and Occupational Pensions Authority (EIOPA).

The AI Act timeline

Given its prescriptive character, most organizations would not be able to implement measures ensuring adherence to the AI Act immediately. For this reason, its requirements are being phased in over a three-year period.

  • The AI Act entered into force on August 1, 2024, giving organizations time to gradually adjust to its provisions.
  • General provisions and the ban on unacceptable-risk AI systems became applicable on February 2, 2025. This is the point at which banks should have already established procedures for qualifying AI systems and preventing practices prohibited by the act.
  • For general-purpose AI models, codes of practice were to be ready by May 2, 2025.
  • By August 2, 2025, the EU Member States are required to appoint their own authorities tasked with overseeing compliance with the AI Act. Administrative penalties also begin to apply from that date.
  • August 2, 2026 will be an important date for the banking sector, as it is when provisions related to the application of high-risk AI systems come into force, and most AI systems used in financial services fall into this category. Disclosure obligations associated with the use of AI will become applicable as well.

Financial institutions in the face of the development of AI technologies

There are many opportunities and process optimization possibilities associated with the efficient use of artificial intelligence in banking. At the same time, its irresponsible and incompetent use can carry serious consequences. It is therefore important for financial institutions to be able to identify risks associated with the implementation of modern AI-based solutions and prevent them.

The skillful interpretation and application of the AI Act in banking is a fundamental step toward ensuring regulatory compliance, enhancing the security and transparency of AI systems, minimizing legal risks, and building customer trust in technologies used in financial services.

