Key points of the EU AI Act
- Risk-based framework: AI systems are categorised into four tiers: unacceptable risk, high risk, limited risk and minimal risk (illustrated in the sketch after this list).
- Unacceptable AI practices banned: Systems that manipulate behaviour, exploit vulnerabilities, or enable social scoring by governments are prohibited.
- High-risk AI obligations: Stricter rules for systems used in critical areas such as creditworthiness assessment, fraud detection, recruitment, and biometric identification.
- Transparency requirements: Users must be informed when interacting with AI systems (e.g. chatbots) or when content is AI-generated.
- Data quality and governance: High-risk AI systems must be trained on high-quality, representative datasets that have been examined for possible biases.
- Human oversight: Human operators must be able to monitor and intervene in high-risk AI processes.
- Documentation and traceability: Providers must maintain technical documentation and logs to demonstrate compliance.
- Penalties: Non-compliance can result in fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
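For firms building an internal register of their AI systems, the four-tier structure maps naturally onto a small data model. The sketch below is illustrative only: the class and field names are invented for this article, not taken from the Act or any library, and a production register would capture far more detail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's framework."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (e.g. social scoring)
    HIGH = "high"                  # strict obligations (e.g. credit scoring)
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"            # largely outside the Act's obligations

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI register."""
    name: str
    business_function: str
    risk_tier: RiskTier
    audit_log: list[str] = field(default_factory=list)

    def log(self, event: str) -> None:
        # Timestamped entries support the documentation and
        # traceability expectations summarised above.
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")

record = AISystemRecord("credit-scoring-v2", "retail lending", RiskTier.HIGH)
record.log("quarterly model validation completed")
print(record.risk_tier.value, record.audit_log[-1])
```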
The European Union has adopted the EU Artificial Intelligence Act, the world’s first comprehensive regulation designed to govern the development, deployment, and use of artificial intelligence technologies.
The Act entered into force in August 2024, with its obligations phasing in between 2025 and 2027. It adopts a risk-based approach, imposing strict obligations on providers and deployers of “high-risk” AI systems while prohibiting certain unacceptable uses.
For the financial services sector – encompassing banks, insurers, investment firms and fintech companies – the EU AI Act will have significant implications. Algorithms are already embedded in fraud detection, credit scoring, customer onboarding, trading, and compliance monitoring.
With the Act, regulators are signalling that robust governance, transparency, and accountability are no longer optional; they are legal requirements.
To prepare effectively, financial institutions should consider the following steps:
1. Map AI use cases
   - Catalogue all AI systems currently in use across lending, trading, fraud detection, compliance, customer service and back-office functions; a minimal inventory sketch follows this step.
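A first pass at this mapping can be a structured catalogue that obliges each business line to declare its systems. In this sketch, all system names are invented:

```python
# A hypothetical first-pass inventory, keyed by business function.
ai_inventory: dict[str, list[str]] = {
    "lending": ["credit-scoring-v2", "affordability-model"],
    "trading": ["execution-optimiser"],
    "fraud detection": ["transaction-anomaly-detector"],
    "compliance": ["aml-alert-triage"],
    "customer service": ["support-chatbot"],
    "back office": ["invoice-ocr"],
}

# Completeness check: every function must declare its systems,
# with an empty list meaning an explicit "none in use".
for function, systems in ai_inventory.items():
    print(f"{function}: {len(systems)} system(s) declared")
```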
2. Assess risk classification
   - Determine which systems are likely to fall under the Act’s “high-risk” category, such as credit scoring or automated decision-making affecting customers’ access to financial services; a screening sketch follows.
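Classification is ultimately a legal judgement, but a screening heuristic can flag candidates for review. The category list below is indicative only, echoing the high-risk areas named earlier rather than reproducing the Act’s Annex III:

```python
# Indicative high-risk use-case categories for triage only; the
# authoritative list is Annex III of the Act, and any classification
# should be confirmed by legal review.
HIGH_RISK_CATEGORIES = {
    "creditworthiness assessment",
    "recruitment and worker management",
    "biometric identification",
}

def screen_use_case(category: str) -> str:
    """Return a provisional flag, not a legal determination."""
    if category in HIGH_RISK_CATEGORIES:
        return "high-risk candidate: route to legal review"
    return "not flagged: document the rationale and re-screen periodically"

print(screen_use_case("creditworthiness assessment"))
print(screen_use_case("internal document search"))
```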
3. Review data quality
   - Audit training data for accuracy, completeness, representativeness and potential bias; parts of this audit can be automated, as sketched below.
   - Implement data governance frameworks aligned with EU standards.
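The sketch below uses pandas to automate two basic checks on a toy dataset: missingness per column and the distribution of a sensitive attribute. Column names and figures are invented, and a real audit would go much further (label quality, proxy variables, drift):

```python
import pandas as pd

# Toy training data with invented columns and values.
df = pd.DataFrame({
    "income": [42_000, 55_000, None, 61_000, 38_000, 47_000],
    "age_band": ["25-34", "35-44", "25-34", "45-54", "25-34", "35-44"],
    "defaulted": [0, 0, 1, 0, 1, 0],
})

# Completeness: share of missing values per column.
print(df.isna().mean())

# Representativeness: distribution of a sensitive attribute. A heavy
# skew is a prompt for investigation, not proof of bias in itself.
print(df["age_band"].value_counts(normalize=True))
```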
4. Strengthen governance structures
   - Appoint accountable officers for AI oversight.
   - Establish internal policies and risk management systems for AI lifecycle management.
5. Enhance transparency and explainability
   - Ensure customers are informed when interacting with AI (e.g. chatbots, robo-advisers).
   - Develop clear explanations of how automated decisions are made and create processes for human review; a reason-code sketch follows.
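For linear or scorecard-style models, “reason codes” are a common way to explain an adverse decision: rank each feature’s contribution to the score and report the ones that pulled it down. A minimal sketch with invented weights:

```python
# Invented scorecard weights: positive contributions raise the
# approval score, negative ones lower it.
WEIGHTS = {"income_band": 1.2, "missed_payments": -2.5, "account_age": 0.8}

def reason_codes(applicant: dict[str, float], top_n: int = 2) -> list[str]:
    """List the features whose negative contributions hurt the score most."""
    contributions = {f: WEIGHTS[f] * v for f, v in applicant.items()}
    negatives = [f for f in sorted(contributions, key=contributions.get)
                 if contributions[f] < 0]
    return [f"{f} (contribution {contributions[f]:+.2f})" for f in negatives[:top_n]]

applicant = {"income_band": 2.0, "missed_payments": 3.0, "account_age": 1.0}
print("Key factors in this decision:", reason_codes(applicant))
```

Pairing explanations like these with an escalation route to a human reviewer addresses both points above.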
6. Update contracts and vendor management
   - Require third-party technology providers to comply with the EU AI Act.
   - Incorporate compliance obligations into service level agreements.
7. Implement monitoring and auditing processes
   - Set up continuous monitoring of AI performance, including drift detection (see the sketch after this step).
   - Establish internal audit mechanisms to verify compliance with both the AI Act and existing financial regulations.
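One widely used drift measure is the population stability index (PSI), which compares the score distribution in a reference window against a live window. The thresholds below (0.1 and 0.25) are conventional rules of thumb, not regulatory values:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population stability index over pre-binned distribution shares.

    Both inputs are proportions per bin, each summing to 1, with no
    empty bins (which would make the log undefined).
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Share of scores per bin at training time vs. in production (toy numbers).
reference = [0.25, 0.35, 0.25, 0.15]
live = [0.18, 0.30, 0.28, 0.24]

value = psi(reference, live)
status = ("stable" if value < 0.1
          else "investigate" if value < 0.25
          else "significant drift")
print(f"PSI = {value:.3f} ({status})")
```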
8. Train staff and raise awareness
   - Deliver targeted training on AI ethics, data protection, and regulatory requirements.
   - Promote a culture of responsible AI use across business functions.
Looking ahead
The EU AI Act is reshaping the regulatory landscape for financial services, reinforcing the link between technological innovation and ethical responsibility. By proactively addressing compliance now, firms can reduce risk, build customer trust, and position themselves competitively in a market that increasingly rewards transparency and accountability.