Feb 20, 2025 · 2 min read

How to achieve ‘right-sized’ AI in financial services to comply with the EU AI Act

Quick summary

As the EU AI Act's first major enforcement deadline arrives this February, much of the discussion has centred on prohibited AI systems. However, there's much more to the story than the constraints. Find out how and why Europe is setting the global standard for how AI should work in financial services.



This article first appeared as a press release.

By Diyan Bogdanov, Director of Engineering Intelligence & Growth at Payhawk.

As the EU AI Act's first major enforcement deadline arrives this February, much of the discussion has centred on prohibited AI systems. However, as a spend management platform with practical experience implementing AI solutions, we're seeing this regulatory milestone reveal a more compelling story: Europe is setting the global standard for how AI should work in financial services—and it's exactly what the industry needs.

The reality of AI in finance: a focused approach vs generalist AI

The EU AI Act isn't just another compliance burden — it's a framework for building better AI systems, particularly in financial services. By classifying finance applications like credit scoring and insurance pricing as "high-risk," the Act acknowledges what we've long believed: When it comes to financial services, AI systems must be purposeful, precise, and transparent.

We're already seeing this play out in the market. While some chase the allure of general-purpose AI, leading financial companies are embracing what we call "right-sized" AI: targeted automation through AI agents and/or smaller-scale models, all within robust governance frameworks.


Why right-sized AI matters now

The Act's requirements around explainability, human oversight, and risk management shouldn’t be viewed as obstacles — they're essential features for any AI system handling financial operations.

In financial services, where many critical AI applications are classified as high-risk due to their potential impact on business decisions and customer outcomes, robust human oversight becomes particularly crucial. Our experience shows that effective AI in finance isn't about creating all-powerful systems. It's about building intelligent agents that perform specific roles with clearly defined jobs, each operating within appropriate boundaries and permissions. This means implementing clear escalation paths for edge cases, maintaining comprehensive audit trails of AI-assisted decisions, and ensuring human experts can meaningfully review and override system recommendations when necessary.
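
To make that routing concrete, here is a minimal sketch in Python. The names (Recommendation, AuditTrail, route_recommendation) and the confidence threshold are illustrative assumptions, not Payhawk's actual implementation: high-confidence, low-risk recommendations are applied automatically, everything else is escalated to a human reviewer, and every step lands in an audit trail.

```python
# Hypothetical sketch of human-in-the-loop routing with an audit trail.
# All names are illustrative, not Payhawk's actual implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Recommendation:
    expense_id: str
    action: str        # e.g. "approve" or "flag"
    confidence: float  # model's self-reported confidence, 0..1
    rationale: str     # human-readable explanation of the recommendation


@dataclass
class AuditTrail:
    entries: list = field(default_factory=list)

    def record(self, event: str, **detail) -> None:
        # Every AI-assisted step is timestamped so humans can review it later.
        self.entries.append(
            {"at": datetime.now(timezone.utc).isoformat(), "event": event, **detail}
        )


def route_recommendation(rec: Recommendation, audit: AuditTrail,
                         confidence_floor: float = 0.9) -> str:
    """Auto-apply only high-confidence approvals; escalate everything else."""
    audit.record("ai_recommendation", expense=rec.expense_id, action=rec.action,
                 confidence=rec.confidence, rationale=rec.rationale)
    if rec.action == "approve" and rec.confidence >= confidence_floor:
        audit.record("auto_applied", expense=rec.expense_id)
        return "applied"
    # Edge cases and low-confidence calls go to a human who can override the agent.
    audit.record("escalated_to_human", expense=rec.expense_id)
    return "pending_human_review"
```

A reviewer then works through the pending queue, and the trail preserves both the agent's rationale and the final human decision.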

When analysing company spending patterns, monitoring expense policy compliance, or detecting fraud, we need AI systems that are self-explanatory, work effectively out of the box (i.e. without requiring extensive training), and maintain strict data protection standards. There's simply no room for black-box decisions or unpredictable outcomes in financial operations.

The three pillars of right-sized AI

We've identified three core principles that align with both the EU AI Act and the practical requirements of modern financial services:

1. Purposeful design
The era of deploying AI for AI's sake is over. Each AI implementation must serve a clear role with defined responsibilities. We've found that well-designed AI agents, each focused on specific financial workflows like spend analysis or compliance monitoring, consistently outperform general-purpose solutions that try to do everything.

2. Human-centric architecture
The Act's emphasis on human oversight reflects practical experience: AI agents should operate as an additional channel alongside existing tools and processes, supporting rather than replacing human decision-making. This means designing systems that provide robust insights and automate routine tasks while ensuring humans remain firmly in control of strategic decisions.

3. Built-in governance
As with all financial platforms, security and governance can't be afterthoughts. They must be fundamental to how AI systems operate. This means building systems where AI agents can only perform actions based on proper user permissions, maintaining strict data protection standards, and ensuring comprehensive audit trails. Every interaction should be traceable and every decision explainable.
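
For illustration only (the role names, actions, and permission map below are assumptions, not a description of Payhawk's permission model), a short sketch of the same idea: an agent can only attempt actions on behalf of a user, the user's own permissions decide whether the action proceeds, and every attempt is logged whether it is allowed or refused.

```python
# Hypothetical sketch: an AI agent acts only within the permissions of the user
# it works for, and every attempt is logged whether or not it is allowed.
ROLE_PERMISSIONS = {
    "finance_admin": {"read_expenses", "approve_expense", "export_report"},
    "employee": {"read_expenses"},
}


def agent_perform(action: str, user_role: str, audit_log: list) -> bool:
    allowed = action in ROLE_PERMISSIONS.get(user_role, set())
    # Traceability: the attempt is recorded even when it is refused.
    audit_log.append({"action": action, "role": user_role, "allowed": allowed})
    if not allowed:
        return False  # the agent never exceeds the user's own permissions
    # ...perform the action via the platform's normal, permission-checked APIs...
    return True


log: list = []
agent_perform("approve_expense", "employee", log)       # denied, but still logged
agent_perform("approve_expense", "finance_admin", log)  # allowed and logged
```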

Europe's leadership opportunity?

While the US and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones. The EU AI Act's requirements around bias detection, regular risk assessments, and human oversight aren't limiting innovation — they're defining what good looks like in financial services AI.

The new regulatory framework gives European companies a significant advantage. As global markets increasingly demand transparent, accountable AI systems, Europe's approach will likely become the de facto standard for financial services worldwide.

Looking ahead

The February deadline sets clear expectations for AI in financial services. Organisations will succeed by deploying intelligent agents in well-defined roles, with clear boundaries and strict governance standards.

What makes this approach powerful isn't its limitations but its precision. AI systems must be robust enough to handle complex financial operations while being trustworthy enough for our most sensitive tasks. This means deploying systems that work effectively and transparently from day one, learning from user actions, and maintaining rigorous security standards.

The path forward in financial services is clear: success will come not from ambitious AI claims but from focused, practical implementation that puts security and reliability first.

Diyan Bogdanov 
Director of Engineering for Expense Automation

Diyan Bogdanov is the Director of Engineering for Expense Automation at Payhawk. With a background in Mathematics and Informatics, he’s behind Payhawk’s AI system and chat solutions, making workflows faster and smarter. When he’s not innovating all things automation, he’s exploring the latest in AI and tech innovation.

