Preparing for the EU AI Act: compliance checklist for financial services
Three key takeaways

  • Start early: Map your AI systems and identify high-risk use cases well before the Act comes into force.
  • Build governance: Put in place clear accountability, policies and oversight structures to ensure compliance.
  • Turn compliance into advantage: Use the Act as an opportunity to strengthen trust, enhance resilience and differentiate your services in a competitive market.

Key points of the EU AI Act

  • Risk-based framework: AI systems are categorised into four tiers: unacceptable, high-risk, limited-risk and minimal-risk.
  • Unacceptable AI practices banned: Systems that manipulate behaviour, exploit vulnerabilities, or enable social scoring are prohibited.
  • High-risk AI obligations: Stricter rules for systems used in critical areas such as creditworthiness assessment, fraud detection, recruitment, and biometric identification.
  • Transparency requirements: Users must be informed when interacting with AI systems (e.g. chatbots) or when content is AI-generated.
  • Data quality and governance: High-risk AI systems must be trained on high-quality, relevant and representative datasets that are examined for possible biases.
  • Human oversight: Human operators must be able to monitor and intervene in high-risk AI processes.
  • Documentation and traceability: Providers must maintain technical documentation and logs to demonstrate compliance.
  • Penalties: Non-compliance can result in fines of up to €35 million or 7% of global turnover, whichever is higher.
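The penalty ceiling in the final bullet is "whichever is higher", which matters for large institutions: above roughly €500 million in global turnover, the percentage-based figure dominates. A minimal sketch of the arithmetic (the function name is illustrative, not from the Act):

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited-practice violations:
    €35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# For a bank with €1bn global turnover, 7% (€70m) exceeds the €35m floor.
print(f"€{max_fine_eur(1_000_000_000):,.0f}")  # €70,000,000
```

Lower ceilings apply to other categories of infringement, so this is the worst case, not the typical one.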

The EU Artificial Intelligence Act, the world’s first comprehensive regulation governing the development, deployment, and use of artificial intelligence technologies, entered into force in August 2024.

Its obligations apply in stages: the prohibitions on unacceptable practices took effect in February 2025, and most high-risk requirements follow through 2026 and 2027. The legislation adopts a risk-based approach, imposing strict obligations on providers and deployers of “high-risk” AI systems while prohibiting certain unacceptable uses.

For the financial services sector – encompassing banks, insurers, investment firms and fintech companies – the EU AI Act will have significant implications. Algorithms are already embedded in fraud detection, credit scoring, customer onboarding, trading, and compliance monitoring. 

With the Act, regulators are signalling that robust governance, transparency, and accountability are no longer optional; they are legal requirements.

To prepare effectively, financial institutions should consider the following steps:

  1. Map AI use cases
    • Catalogue all AI systems currently in use across lending, trading, fraud detection, compliance, customer service and back-office functions.
  2. Assess risk classification
    • Determine which systems are likely to fall under the Act’s “high-risk” category, such as credit scoring or automated decision-making affecting customers’ financial access.
  3. Review data quality
    • Audit training data for accuracy, completeness, representativeness and potential bias.
    • Implement data governance frameworks aligned with EU standards.
  4. Strengthen governance structures
    • Appoint accountable officers for AI oversight.
    • Establish internal policies and risk management systems for AI lifecycle management.
  5. Enhance transparency and explainability
    • Ensure customers are informed when interacting with AI (e.g. chatbots, robo-advisers).
    • Develop clear explanations of how automated decisions are made and create processes for human review.
  6. Update contracts and vendor management
    • Require third-party technology providers to comply with the EU AI Act.
    • Incorporate compliance obligations into service level agreements.
  7. Implement monitoring and auditing processes
    • Set up continuous monitoring of AI performance.
    • Establish internal audit mechanisms to verify compliance with both the AI Act and existing financial regulations.
  8. Train staff and raise awareness
    • Deliver targeted training on AI ethics, data protection, and regulatory requirements.
    • Promote a culture of responsible AI use across business functions.
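As a first pass at steps 1 and 2, some institutions keep a machine-readable inventory of AI systems and run a rough risk triage over it. The sketch below assumes hypothetical names (`AISystem`, `classify`, the use-case sets) and is illustrative only; actual classification against the Act's high-risk categories requires legal review.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Use cases commonly treated as high- or limited-risk in financial services
# (illustrative shortlist, not a legal determination)
HIGH_RISK_USE_CASES = {"credit_scoring", "creditworthiness_assessment",
                       "recruitment", "biometric_identification"}
LIMITED_RISK_USE_CASES = {"chatbot", "robo_adviser"}

@dataclass
class AISystem:
    name: str
    use_case: str
    vendor: Optional[str] = None  # third-party provider, if any (step 6)

def classify(system: AISystem) -> RiskTier:
    """Rough triage by use case; anything unlisted defaults to minimal risk
    pending proper assessment."""
    if system.use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    if system.use_case in LIMITED_RISK_USE_CASES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

inventory = [
    AISystem("LoanDecider", "credit_scoring", vendor="Acme AI"),
    AISystem("HelpBot", "chatbot"),
    AISystem("BatchOCR", "document_digitisation"),
]

for s in inventory:
    print(f"{s.name}: {classify(s).value}")
```

Keeping vendor details in the same record makes step 6 (contract and SLA reviews) a simple filter over the inventory rather than a separate exercise.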

Looking ahead

The EU AI Act is reshaping the regulatory landscape for financial services, reinforcing the link between technological innovation and ethical responsibility. By proactively addressing compliance now, firms can reduce risk, build customer trust, and position themselves competitively in a market that increasingly rewards transparency and accountability.

Hossein Fezzazi

Chief Operating Officer at Penta

Hossein Fezzazi is COO at Penta SA, where he works closely with financial institutions across Europe and the Middle East to strengthen IT governance, cybersecurity, and compliance frameworks.

He has prepared IT infrastructure audit reports for businesses in renowned financial centres, with a particular focus on emerging regulatory regimes such as the EU AI Act and the Gulf region’s cybersecurity frameworks. His expertise lies in helping organisations turn regulatory obligations into opportunities to build trust, resilience, and innovation.

