Artificial intelligence is playing an increasingly important role in the banking sector, accelerating processes, supporting data analysis, and enabling advanced personalization options. With its rapid growth comes the need for regulations to ensure its safe and responsible use.
The AI Act, adopted by the European Parliament, is a crucial piece of legislation: it defines rules for the responsible and ethical use of artificial intelligence, imposes obligations on institutions that use it, and sets compliance standards. It is not, however, the only regulation relevant to financial institutions in this regard.
In this article, we will look at different AI legislation, the consequences of non-compliance, and the challenges of implementing new regulations.
While the EU Artificial Intelligence Act is the first comprehensive regulation governing the use of AI technology, it is not the only legislation applicable in this area. Other general EU regulations, such as the Digital Operational Resilience Act (DORA) and the General Data Protection Regulation (GDPR), also apply, both to entities inside the European Union and to those outside it whose activities affect the EU market.
This section focuses on the provisions of the aforementioned regulations that are relevant to this article and how they complement the provisions of the EU AI Act.
Regardless of the risk level assigned to a system supplied or used by a given entity, the AI Act imposes a disclosure obligation on that entity related to appropriately informing persons who may be affected by the system. In this case, the entity is required to, among other things:

- inform individuals that they are interacting with an AI system, unless this is obvious from the context;
- clearly label AI-generated or AI-manipulated content, such as deep fakes;
- inform individuals exposed to emotion recognition or biometric categorization systems about the operation of those systems.
While the General Data Protection Regulation (GDPR) focuses on securing personal data, the EU Artificial Intelligence Act regulates the responsible and secure way in which AI systems should use the data provided to them.
High-risk AI systems include solutions based on AI models trained on data, and that data may include personal data. In such cases, the requirements of the GDPR apply in addition to those of the AI Act. Moreover, the regulations include guidelines for the processing of biometric data and disclosure obligations concerning the use of AI and the processing of personal data.
Based on this data, artificial intelligence systems can also be used for automated decision-making in individual cases, such as decisions on granting loans. In this situation, under the AI Act, individuals are entitled to receive an explanation of the decisions made, while the GDPR gives them the right not to be subject to decisions based solely on automated data processing.
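To make the explanation right concrete, below is a minimal, purely illustrative sketch of how a rule-based credit decision could carry its own plain-language justification. The thresholds, field names, and `LoanDecision` structure are hypothetical assumptions for the example, not anything prescribed by the AI Act or the GDPR.

```python
from dataclasses import dataclass, field

@dataclass
class LoanDecision:
    """Hypothetical record pairing an automated decision with its explanation."""
    approved: bool
    reasons: list[str] = field(default_factory=list)

def decide_loan(income: float, debt_ratio: float, missed_payments: int) -> LoanDecision:
    """Rule-based sketch: each failed criterion yields a plain-language reason
    that can be passed on to the affected individual."""
    reasons = []
    if income < 30_000:
        reasons.append("Declared annual income is below the 30,000 EUR threshold.")
    if debt_ratio > 0.4:
        reasons.append("Debt-to-income ratio exceeds the 40% limit.")
    if missed_payments > 2:
        reasons.append("More than two missed payments in the reference period.")
    return LoanDecision(approved=not reasons, reasons=reasons)

decision = decide_loan(income=28_000, debt_ratio=0.35, missed_payments=0)
print(decision.approved, decision.reasons)
```

In a real deployment, the reasons would typically be derived from the production model (for example, via feature attributions), and contested or borderline cases would be routed to a human reviewer in line with the GDPR's safeguard against solely automated decisions.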
Both regulations also contain provisions on risk analysis. Artificial intelligence systems are assigned to different risk categories, and where personal data is processed, risk assessment provides the basis for selecting measures to protect that data. Both the AI Act and the GDPR likewise impose an obligation to log activities or events in certain situations, and to report breaches and security-relevant incidents, such as system malfunctions. It should be noted that a data breach can occur as a result of an activity utilizing artificial intelligence.
In this way, both regulations emphasize the importance of transparency in the activities of the entities involved.
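As a hedged illustration of the logging duty described above, the sketch below writes timestamped, machine-readable records of AI system events and escalates those that may be reportable. The event schema and the `report_incident` stub are assumptions made for the example, not terms defined by either regulation.

```python
import json
import logging
from datetime import datetime, timezone

# Standard-library structured logging; the event schema below is illustrative.
logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def report_incident(record: dict) -> None:
    """Stub: escalate serious incidents to the compliance workflow
    (e.g., notifying the supervisory authority within the applicable deadline)."""
    print("ESCALATED:", record["event_type"])

def log_ai_event(system_id: str, event_type: str, details: dict) -> None:
    """Append a timestamped, machine-readable record of an AI system event."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "decision", "malfunction", "data_breach"
        "details": details,
    }
    logger.info(json.dumps(record))
    if event_type in ("malfunction", "data_breach"):
        report_incident(record)

log_ai_event("credit-scoring-v2", "malfunction", {"error": "model timeout"})
```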
Both the Digital Operational Resilience Act (DORA) and the EU AI Act address overlapping areas such as risk management, data management, and cybersecurity, with a strong focus on data integrity and confidentiality. DORA seeks to ensure the security of information and communications technology (ICT), which helps guarantee the continuity of services provided by financial institutions, and AI systems are themselves among the ICT elements it covers.
Both regulations refer repeatedly to the converging concepts of “providers of” and “entities using” ICT services or AI technologies. “AI systems,” as defined by the AI Act, fit into DORA’s definition of “ICT services,” particularly in the case of high-risk AI systems. Another issue raised by both regulations is the requirement to develop the competences and knowledge of entities providing and using ICT and AI technologies. Human oversight of systems and processes is also required, and both DORA and the AI Act call for the introduction of risk management procedures.
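The human-oversight requirement can be read in engineering terms as a human-in-the-loop gate. The following sketch, with an assumed confidence threshold and review queue, shows one possible shape of such a gate; it is an illustration, not a mandated design.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    outcome: str       # e.g. "approve" / "reject"
    confidence: float  # model confidence in [0, 1]

# Illustrative threshold; in practice it would be set by the risk-management
# procedures that both DORA and the AI Act require.
REVIEW_THRESHOLD = 0.85
human_review_queue: list[Assessment] = []

def apply_oversight(assessment: Assessment) -> str:
    """Return the automated outcome only when confidence is high enough;
    otherwise defer the case to a human reviewer."""
    if assessment.confidence >= REVIEW_THRESHOLD:
        return assessment.outcome
    human_review_queue.append(assessment)
    return "pending_human_review"

print(apply_oversight(Assessment("approve", 0.97)))  # -> approve
print(apply_oversight(Assessment("reject", 0.60)))   # -> pending_human_review
```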
Implementing the provisions of both acts in financial services will ensure that AI-based banking systems meet appropriate operational and ethical standards.
The EU Member States are required to appoint their own authorities responsible for overseeing compliance with the AI Act by August 2, 2025.
Those authorities will be tasked with creating a coherent legal framework for the development, deployment, and oversight of AI technologies, with the aim of fostering innovation while ensuring the safety of individuals and institutions. They will cooperate with the European Commission, the European Artificial Intelligence Board, and other national authorities with relevant remits, such as data protection or financial supervision.
Another obligation the AI Act imposes on every EU Member State is the establishment of at least one regulatory sandbox by August 2, 2026. A regulatory sandbox is an environment established by competent authorities that allows providers to safely test AI technologies and solutions in real-world conditions for a specified period of time.
For an overview of intersections between the AI Act and Polish laws applicable in banking, see this article.
As improper or inadequately supervised use of artificial intelligence technology can result in security breaches, compliance with the regulations adopted by the European Parliament is crucial. Penalties for non-compliance with the EU Artificial Intelligence Act depend on the severity of the violation:

- engaging in prohibited AI practices carries fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher;
- non-compliance with most other obligations under the Act carries fines of up to EUR 15 million or 3% of turnover;
- supplying incorrect, incomplete, or misleading information to the authorities carries fines of up to EUR 7.5 million or 1% of turnover.
The implementation of EU AI Act compliance in banking requires that organizations find ways to ensure adherence to stringent regulations while maintaining operational efficiency.
The related regulations listed in the sections above must also be taken into account. Each comes with its own distinct requirements, and ensuring alignment without conflict can pose a challenge; misinterpretation can lead to inefficiencies or even outright non-compliance.
In many cases, legacy systems may need to be updated, which can be technically challenging. Additionally, employees must be trained to use, manage, and supervise AI systems in a way that keeps operations appropriate and secure. All of this generally requires a major investment of resources.
Addressing all of the requirements simultaneously can strain organizations’ resources. Therefore, it is important to address potential difficulties in a proactive manner, through cross-departmental collaboration and the utilization of compliance-supporting technologies, such as data governance platforms or automated auditing tools.
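As an example of what a simple automated auditing tool might do, the sketch below scans a hypothetical AI system registry for missing compliance metadata. The required fields are illustrative stand-ins, not a checklist taken from the regulations.

```python
# Hypothetical registry entries; the required fields are illustrative stand-ins
# for the documentation the AI Act and related acts expect.
REQUIRED_FIELDS = {"risk_category", "data_sources", "human_oversight", "logging_enabled"}

systems = [
    {"name": "credit-scoring-v2", "risk_category": "high",
     "data_sources": ["bureau", "internal"], "human_oversight": True},
    {"name": "chatbot-faq", "risk_category": "limited",
     "data_sources": ["faq-corpus"], "human_oversight": False,
     "logging_enabled": True},
]

def audit(entries: list[dict]) -> list[str]:
    """Flag registry entries that are missing required compliance metadata."""
    findings = []
    for entry in entries:
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            findings.append(f"{entry['name']}: missing {sorted(missing)}")
    return findings

for finding in audit(systems):
    print(finding)  # -> credit-scoring-v2: missing ['logging_enabled']
```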
AI is here to stay, and its utilization is only expected to increase, including its application in the financial services sector. Given the risks and challenges associated with this technology, it is clear that it is of the utmost importance that organizations develop and apply an AI strategy based on relevant provisions introduced by the AI Act and related regulations.
Adherence to the provisions of the AI Act allows banks to optimize their operations with minimal risk of inaccuracies or infringements, while at the same time building trust in their competence among the consumers of financial services.
The AI Act regulates the use of artificial intelligence in the banking sector, introducing new obligations and rules to ensure the security, transparency, and compliance of AI technology with legal standards. While the Act imposes stringent requirements, it also creates opportunities for banks to build customer trust and foster innovation.
The rapid growth of technology brings both new possibilities and largely unpredictable risks, which means that the evolving regulatory landscape must be monitored continuously.