The EU AI Act: Shaping the Future of Financial Technology
Date: 30 September 2025
Author: Michael Borrelli, Director, AI & Partners
The EU AI Act, which entered into force on 1st August 2024, represents a significant step toward regulating artificial intelligence within the European Union. With the rapid rise of AI technologies across various sectors, the Act aims to ensure that AI systems used within the EU are safe, transparent, and trustworthy. In this post, we explore the implications of the EU AI Act for the FinTech sector.
What is the EU AI Act?
The EU Artificial Intelligence Act is the first comprehensive legislative framework in the world designed to regulate AI technologies. It establishes clear guidelines to ensure AI systems are ethical, accountable, and beneficial to society, balancing innovation with protection. The law applies to both public and private sectors, including emerging industries such as FinTech, where AI plays a growing role in automation, decision-making, and risk management.
The Act categorizes AI systems into four risk levels: minimal risk, limited risk, high risk, and unacceptable risk. The greater the risk an AI system poses to safety and fundamental rights, the more stringent the obligations imposed on it.
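To make the tiered structure concrete, the mapping below sketches how example FinTech use cases might sit within the four tiers. The tier assignments and obligation summaries are simplified illustrations, not classifications under the Act itself; actual classification requires a legal assessment against Annex III.

```python
# Illustrative mapping of the AI Act's four risk tiers to example
# FinTech use cases. Tier assignments here are assumptions for
# illustration only, not legal determinations.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring of individuals by public authorities"],
        "obligations": "prohibited",
    },
    "high": {
        "examples": ["credit scoring", "loan underwriting"],
        "obligations": "conformity assessment, documentation, human oversight",
    },
    "limited": {
        "examples": ["customer service chatbots"],
        "obligations": "transparency (disclose that users interact with AI)",
    },
    "minimal": {
        "examples": ["spam filtering"],
        "obligations": "no additional obligations (voluntary codes)",
    },
}

def obligations_for(use_case: str) -> str:
    """Return the obligation summary for a known example use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligations']}"
    return "unknown: requires case-by-case classification"
```

For instance, `obligations_for("credit scoring")` returns the high-risk obligation summary, while an unlisted use case falls through to a case-by-case answer.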
The EU AI Act in a Nutshell
The core objective of the EU AI Act is to ensure that AI systems comply with ethical standards, provide transparency, and do not harm individuals. Specifically, the Act includes the following key components:
- Risk Classification (Article 6): AI systems are categorized based on their risk to safety, privacy, and human rights. High-risk AI systems, like those used in finance, require rigorous oversight.
- Transparency & Accountability (Article 13): AI systems must be explainable and auditable, with clear documentation on how decisions are made, particularly in high-risk areas like credit scoring or fraud detection.
- Data Privacy & Protection (Article 10): Compliance with GDPR is paramount. AI systems must safeguard personal data and respect individuals’ privacy rights.
- Human Oversight (Article 14): AI decision-making processes must involve human oversight, ensuring that the technology doesn’t replace human judgment entirely, particularly in sensitive financial decisions.
- Conformity Assessment & Certification (Article 43): For high-risk AI systems, there are strict requirements for certification, regular audits, and updates to ensure compliance.
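The human-oversight requirement above can be illustrated with a minimal human-in-the-loop gate: automated decisions below a confidence threshold are escalated to a human reviewer rather than actioned automatically. The names and the 0.90 threshold are illustrative assumptions, not values prescribed by the Act.

```python
from dataclasses import dataclass

# Decisions below this confidence are escalated to a human reviewer.
# The threshold is an illustrative assumption.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Decision:
    applicant_id: str
    outcome: str               # e.g. "approve" / "decline"
    confidence: float          # model's confidence in the outcome
    needs_human_review: bool = False

def route(applicant_id: str, outcome: str, confidence: float) -> Decision:
    """Flag low-confidence automated decisions for human review."""
    return Decision(
        applicant_id=applicant_id,
        outcome=outcome,
        confidence=confidence,
        needs_human_review=confidence < CONFIDENCE_THRESHOLD,
    )
```

A decision such as `route("A-123", "decline", 0.72)` would be flagged for review, so the model's output informs rather than replaces human judgment.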
How the EU AI Act Affects FinTech
FinTech has rapidly adopted AI for a variety of use cases, including algorithmic trading, credit scoring, fraud detection, and customer service automation. As such, the EU AI Act will have profound effects on how AI is developed and deployed within the sector. Here are some of the key implications for FinTech:
- Stricter Compliance and Audits for High-Risk AI Systems
FinTech companies using AI for high-risk functions, such as credit scoring and loan underwriting, will need to comply with rigorous audits and regular assessments. These systems must meet strict documentation standards, ensuring transparency in decision-making. For instance, AI-driven credit scoring models will need to be explainable: financial institutions must provide clear reasons why a loan was approved or denied based on the algorithm’s output.
- Innovation vs. Regulation
While the EU AI Act aims to promote trust in AI by mitigating risks, it may also introduce compliance hurdles for FinTech innovators. AI-driven startups or scale-ups may face increased operational costs to meet regulatory standards, which could slow down their ability to bring new products to market. For example, integrating explainability into complex machine learning models may add complexity and delay the development of certain financial products.
- Increased Focus on Ethical AI
The EU AI Act requires that AI systems deployed in FinTech be ethical and non-discriminatory. This has important implications for how data is used in financial services. If AI algorithms perpetuate bias or discriminatory practices—whether in lending, insurance pricing, or fraud detection—companies could face significant legal risks. As a result, FinTech companies will need to adopt best practices to ensure their AI systems are fair, inclusive, and transparent in their operation.
- Data Governance and Privacy
FinTech companies that use AI systems to process sensitive financial data will be held to high standards of data privacy and security under the EU AI Act. They will need to ensure that AI-driven processes comply with the General Data Protection Regulation (GDPR). For example, AI-based systems that analyze personal financial data for fraud detection or personalized financial services will need to demonstrate data minimization and ensure that individuals’ personal information is processed responsibly.
- Liability and Accountability for AI Decisions
With increased regulation comes the issue of accountability. In the event of an AI system making a faulty decision, such as an incorrect credit scoring model or erroneous fraud detection, FinTech firms will need to ensure clear accountability mechanisms are in place. Under the EU AI Act, the liability for harm caused by AI systems could fall on the developers or the operators of these systems, making it imperative for FinTech companies to maintain robust risk management and compliance frameworks.
- Access to AI Innovation and Global Competitiveness
While the EU AI Act brings clarity and trust, it may also impact the ability of FinTech firms to innovate as freely as those in other regions, such as the U.S. or Asia. Companies outside the EU might face challenges when trying to enter the European market or provide services involving AI in the EU due to the stringent regulatory environment. Conversely, this regulation could serve as a model for other regions, potentially driving global harmonization of AI standards, and increasing European competitiveness in developing trustworthy and ethical AI solutions.
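The explainability expectation for credit scoring discussed above can be illustrated with "reason codes" from a toy linear scoring model: alongside its decision, the model reports which features pushed the score down. The features, weights, and approval cutoff below are made-up assumptions, not a real scoring methodology.

```python
# Toy linear credit-scoring model that reports the features that
# contributed negatively to the score, in the spirit of the reason
# codes that make an automated credit decision explainable.
# Weights and the approval cutoff are illustrative assumptions.
WEIGHTS = {
    "income_band": 2.0,         # higher income band raises the score
    "missed_payments": -3.0,    # recent missed payments lower it
    "credit_utilisation": -1.5  # high utilisation lowers it
}
APPROVAL_CUTOFF = 0.0

def score_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Return (approved, reason codes for the negative contributions)."""
    contributions = {
        feature: WEIGHTS[feature] * applicant[feature]
        for feature in WEIGHTS
    }
    approved = sum(contributions.values()) >= APPROVAL_CUTOFF
    # Reason codes: negative contributions, most damaging first.
    reasons = [
        f"{feature} reduced the score by {-value:.1f}"
        for feature, value in sorted(contributions.items(),
                                     key=lambda kv: kv[1])
        if value < 0
    ]
    return approved, reasons
```

For an applicant with two recent missed payments, the model declines and names `missed_payments` as the leading reason, giving the institution a concrete basis for the adverse-decision explanation the Act's transparency provisions anticipate.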
Opportunities for FinTech Under the EU AI Act
Despite the challenges, the EU AI Act also opens up opportunities for FinTech firms:
- Regulatory Certainty
For established firms, the EU AI Act provides clear rules that promote trust in AI-based financial services. This certainty allows firms to confidently scale their AI solutions within the EU market without the risk of sudden regulatory shifts.
- Consumer Trust
As consumers become more aware of AI’s role in their financial services, they are likely to demand more transparency and ethical considerations. The EU AI Act encourages transparency and accountability, which could foster greater consumer trust in AI-driven financial products and services.
- Partnerships with RegTech Firms
FinTech companies can also explore partnerships with regulatory technology (RegTech) firms that specialize in AI compliance, helping them streamline the audit, documentation, and transparency requirements imposed by the EU AI Act.
- Innovative AI Solutions for Low-Risk Sectors
FinTech companies focusing on lower-risk AI applications, such as customer support chatbots, could experience fewer regulatory burdens while still driving significant innovation. These businesses can scale more quickly, targeting underserved populations and markets.
Conclusion
The EU AI Act will undoubtedly impact the FinTech sector, but it also provides the foundation for the ethical, transparent, and secure use of AI in finance. While there will be challenges in compliance and innovation, the Act’s rigorous standards are likely to lead to greater consumer trust, improved risk management, and increased institutional adoption of AI-driven financial products. As FinTech companies navigate these regulations, they will play a key role in shaping the future of AI in finance, balancing innovation with the societal need for safety and fairness.
The EU AI Act represents not just a regulatory hurdle but an opportunity to lead in responsible AI adoption, ensuring that the future of FinTech is both innovative and trustworthy.
Michael Charles Borrelli, AI & Partners
I’m Michael Charles Borrelli (LinkedIn: https://www.linkedin.com/in/michael-charles-borrelli-6a557253/), a specialist in AI governance, regulatory strategy, and digital finance infrastructure. I bring a unique blend of regulatory insight and operational experience, having built the compliance and operational foundations for a cryptoasset exchange provider in 2020.
Over the last four years, I’ve worked closely with AI, Web3, and DLT communities to deliver high-impact advisory on regulatory compliance, AI risk mitigation, and alignment with frameworks like the EU AI Act.
My work focuses on shaping the next generation of responsible, future-proof AI-driven organisations—where innovation meets integrity.
As a leading expert in AI regulation and risk, I provide strategic guidance to businesses operating at the intersection of AI, Web3, and financial services. I specialise in helping organisations align with fast-evolving laws like the EU AI Act and build responsible AI governance frameworks that drive innovation while managing compliance and risk.
With over a decade of experience advising FCA-regulated firms and institutional financial services providers, I support organisations in understanding complex regulatory environments, operationalising compliance, and embedding ethical AI practices into their core strategy.
Together with Sean Musch, CEO/Founder (LinkedIn), I co-lead AI & Partners, a leading provider of AI governance solutions (software, training, and consulting), helping businesses comply with the EU AI Act and adopt AI responsibly (Website). With deep expertise in AI regulation, AI & Partners provides tailored solutions that align with your business goals, mitigate risks, and unlock AI’s full potential.
As part of our commitment to public good, we also design and promote AI Literacy Practices, which have been recognised by the European Commission as a model for increasing public understanding and responsible use of AI.
