The CFO’s AI readiness report: Part 1.


AI adoption in finance is no longer “early.” It is uneven. Some organisations are already scaling AI into core workflows, while many others are stuck in the middle, with pilots and tools that fail to translate into durable impact. This report shows that the difference is not ambition or skill; it’s whether your AI use cases can operate within real finance workflows without breaking approvals, audit trails, accountability, or control.
AI maturity in finance: Why some teams scale, and others stall
The role of the CFO is entering yet another inflection point. AI is no longer a future capability to explore at the edges of your organisation. It’s increasingly embedded in day-to-day financial decision-making, from forecasting and approvals to controls, compliance, and reporting. So far, so straightforward, except that at the same time, economic pressure, regulatory scrutiny, and operational complexity continue to grow at breakneck speed.
For finance leaders, this creates a familiar tension. AI is moving deeper into finance workflows just as scrutiny, risk exposure, and accountability are increasing.
This report examines AI maturity in finance, why AI adoption looks uneven across organisations, and what actually determines whether AI can scale safely inside CFO workflows.
Like “digital transformation” before it, AI promises speed, efficiency, and better insight. But finance teams can’t adopt technology on enthusiasm and “efficiency promises” alone. Any system that touches spend, approvals, or data must be governable, auditable, and defensible under scrutiny, no different from any other financial process. And this is where many AI initiatives slow down or stall, even in organisations that consider themselves AI-forward or tech-advanced.
Executive summary: Understanding uneven AI maturity and why “orchestration” comes first
To gain insights into how CFOs are adapting to the changing landscape, Payhawk partnered with IResearch to interview 1,520 senior professionals globally to create a series of four CFO reports. See full methodology at the end of the report.
For analysis, we group company size into 50–250 and 251+ (“at scale”).
The research quickly laid bare a framing that is increasingly misleading: the narrative that positions AI implementation as a simple “pilot, rollout, scale” progression. In this report, orchestration means the controls that let AI do work without breaking approvals, audit trails, or accountability.
CFOs are not asking whether AI works in theory or how easy it is to roll out; they’re asking whether it can be deployed in real workflows without increasing risk, breaking controls, or creating audit exposure.
The purpose of this study is to move the conversation away from generic AI maturity thinking around “early vs late adoption” and toward a more useful question for finance leaders: What actually determines whether AI can scale safely inside a finance organisation? And how can you apply it?
How we measure maturity in this study: Respondents rated their organisation’s AI maturity on a 1–10 scale. This is a self-reported measure, so it should be read as “perceived maturity” rather than an audited capability score. But in a market where decision-makers allocate budgets based on perception, perceived maturity is still decision-relevant.
Konstantin Dzhengozov
"Most AI conversations still focus on what the technology can do. In finance, the harder question is what you’re prepared to delegate, and under which rules. If you can’t explain, trace, and defend an AI-driven decision, it won’t scale, no matter how advanced the tooling looks."

These new findings give you a practical way to see where that friction comes from, before it shows up as stalled rollouts, rising exceptions, or audit pushback. Specifically, this report will show you:
- Where AI maturity actually concentrates, rather than where it is assumed to sit
- Why self-identified “AI leaders” are not a uniform group
- Why scaling AI in finance increasingly becomes an operating model and orchestration problem, not a technology one
By the end of this report, CFOs will be able to see where their organisation truly sits, understand what’s structurally holding AI back from delivering ROI, and see why advice to simply “move faster” or “run more pilots” will never work in finance.
What AI maturity in finance really looks like in 2025/26
The market is not “early.” It’s uneven.
In 2025, AI commentary in finance tended to describe AI as “early,” signalling adoption as the first big hurdle and implying a simple timeline: pilots first, rollout next, automation at scale last, with plenty of trade-offs between speed and visibility in between.
Our data reveals something different: AI maturity in finance is no longer progressing as a single wave; it is splitting into pockets. Treating finance AI as “early” now misdiagnoses the problem and leads teams toward generic next steps that no longer fit their actual constraints.
Nearly one-third of respondents don’t see AI as ‘early’ in their organisation at all; they rate their organisations as highly mature and position themselves as leaders. This definition of “leaders” is used consistently throughout the report, but at this stage it is descriptive rather than a claim about readiness: later, the same ‘leaders’ split into very different operating realities when challenged with what they can safely delegate without expanding risk faster than controls.
For reporting, we group maturity scores into three bands:
- Low maturity: 1–3
- Mid maturity: 4–6
- High maturity: 7–10
These bands are not a maturity ladder; they describe where organisations cluster today, not the order in which they will progress.
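To make the banding concrete for anyone replicating the analysis on their own survey data, here is a minimal sketch of the grouping logic; the function name and example scores are illustrative assumptions, not part of the study’s actual tooling.

```python
def maturity_band(score: int) -> str:
    """Map a self-reported 1-10 AI maturity score to the report's bands."""
    if not 1 <= score <= 10:
        raise ValueError("self-reported maturity must be between 1 and 10")
    if score <= 3:
        return "Low (1-3)"
    if score <= 6:
        return "Mid (4-6)"
    return "High (7-10)"


# Illustrative scores only, not survey data
print([maturity_band(s) for s in [2, 5, 6, 7, 9]])
# ['Low (1-3)', 'Mid (4-6)', 'Mid (4-6)', 'High (7-10)', 'High (7-10)']
```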

If you’re in the 4–6 “middle,” your next step is not more pilots. It’s minimum rules + usable data for the workflows you want to delegate.
Figure 1 shows the distribution of AI maturity scores across the full sample (n=1,520). Three structural patterns stand out.
The centre of gravity sits at mid-maturity. Most organisations cluster in the 4–6 range. These teams are no longer experimenting from scratch, but they are also not operating AI as a core, dependable capability. For CFOs, this middle represents the highest execution risk: enough activity to create expectations, but not enough structure to ensure scale.
Self-identified “leaders” are numerous, not exceptional. Nearly one-third of respondents rate their organisation 7–10 on AI maturity. This makes the “leader” label common enough to warrant closer examination and broad enough that it can't be treated as a single operating reality.
The market is moving unevenly, not sequentially. The distribution is not a smooth upward shift. It shows simultaneous states: a small group scaling, a large middle struggling to convert activity into operations, and a tail that remains early. This spread, rather than any single average, defines the current market.
What Figure 1 does not show is whether that maturity is governable, which is why scale in finance depends less on ambition and more on orchestration across rules, data, and accountability (i.e., AI adoption only holds if it can survive policy, controls, and audit).
The uneven distribution of maturity shown in Figure 1 is therefore not incidental. It predicts where AI initiatives will translate into durable workflow change and where they will stall, regardless of early enthusiasm or scope.
The risk for finance teams is misdiagnosing the problem. When AI initiatives stall, the failure point is rarely the use case itself. It’s the moment when automated decisions meet policy, audit, or exception-handling requirements that weren’t designed for delegation.
When maturity is treated as a single label, these differences are obscured. That is why generic advice aimed at “leaders” often fails to explain why AI scales in some finance teams and stalls in others.
Konstantin explains:
"In finance, scaling AI is less an integration problem and more an orchestration problem. Integration connects systems and data, whereas orchestration governs how work moves through approvals, exceptions, logs, and ownership. CFOs need policy-driven control, traceability, and exception handling across workflows."
This leads to the next question. If AI maturity is uneven, where does it concentrate structurally, and what conditions shape those outcomes?

How industry, size, and complexity shape AI adoption in finance
The data also shows that uneven maturity is not random; it follows structural lines.
A second trend emerging from the research is that AI readiness in finance clusters strongly by context. Two organisations can be equally motivated and still show very different maturity because they operate under different constraints, including data environments, process standardisation, compliance exposure, scale effects, and/or the cost of governing change.
But before analysing how AI leaders behave, we map where maturity is concentrated across the market.
Context dimensions used in the map:
- Industry group:
  - Tech: Software & Internet; IT & Electronics
  - Services: Business Services; Media & Entertainment
  - Regulated: Financial Services; Healthcare & Pharma; Energy & Utilities
  - Core economy: All other industries, including retail, manufacturing, wholesale, logistics, travel, education, and consumer services
- Company size:
  - 50–250 employees
  - 251+ employees
Throughout the report, “at scale” refers to the 251+ group. It signals the point at which coordination, controls, systems, and change management become first-order constraints.
From here, we define six context segments, referenced consistently using both name and definition:
- Tech at scale (Tech, 251+)
- Services at scale (Services, 251+)
- Fast adopters, thin controls (Tech/Services, 50–250)
- Regulated at scale (Regulated, 251+)
- Core economy at scale (Other industries, 251+)
- SMB core operators (Regulated/Core economy, 50–250)
Core operators are organisations where finance is evaluated primarily on operational continuity and audit defensibility, with low tolerance for experimentation inside core workflows.
Core economy refers to industries where value is created primarily through operating and coordinating real-world systems at scale (e.g. manufacturing, retail, logistics, energy, healthcare delivery), rather than through software products or professional services.
“At scale” captures one simple thing: the same AI intent behaves differently once the organisation is big enough that coordination, controls, and systems become the constraint.
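As a rough sketch of how these segments could be reproduced on raw respondent data, the mapping below encodes the industry groups and the 251+ size threshold described above; the industry labels and function names are assumptions for illustration and may not match the survey’s internal coding.

```python
TECH = {"Software & Internet", "IT & Electronics"}
SERVICES = {"Business Services", "Media & Entertainment"}
REGULATED = {"Financial Services", "Healthcare & Pharma", "Energy & Utilities"}


def industry_group(industry: str) -> str:
    """Collapse a raw industry label into the four groups used in the report."""
    if industry in TECH:
        return "Tech"
    if industry in SERVICES:
        return "Services"
    if industry in REGULATED:
        return "Regulated"
    return "Core economy"  # retail, manufacturing, logistics, and so on


def context_segment(industry: str, employees: int) -> str:
    """Assign one of the six context segments from industry and company size."""
    group = industry_group(industry)
    at_scale = employees >= 251  # "at scale" = 251+ employees
    if group == "Tech":
        return "Tech at scale" if at_scale else "Fast adopters, thin controls"
    if group == "Services":
        return "Services at scale" if at_scale else "Fast adopters, thin controls"
    if group == "Regulated":
        return "Regulated at scale" if at_scale else "SMB core operators"
    return "Core economy at scale" if at_scale else "SMB core operators"


print(context_segment("Software & Internet", 420))  # Tech at scale
print(context_segment("Healthcare & Pharma", 120))  # SMB core operators
```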
Figure 2. Context segments show where maturity concentrates (n=1,520). The chart plots each context segment by its share of the sample and the proportion of organisations that self-report high AI maturity (7–10).

High-maturity concentration ranges from 73.3% in Tech at scale (6.6% of the sample) to 13.5% in SMB core operators (29.8% of the sample), showing how sharply readiness diverges by “context”.
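To clarify what these two percentages measure, the following self-contained sketch computes, for each segment, its share of the total sample and the proportion of its respondents who rate themselves 7–10; the response rows are invented purely for illustration, not survey data.

```python
from collections import Counter

# Invented example rows: (context segment, self-reported maturity score)
responses = [
    ("Tech at scale", 9), ("Tech at scale", 8), ("Tech at scale", 5),
    ("SMB core operators", 3), ("SMB core operators", 7),
    ("SMB core operators", 4), ("SMB core operators", 2),
]

totals = Counter(segment for segment, _ in responses)
high = Counter(segment for segment, score in responses if score >= 7)

for segment, n in totals.items():
    print(f"{segment}: {n / len(responses):.1%} of sample, "
          f"{high[segment] / n:.1%} self-report high maturity (7-10)")
# Tech at scale: 42.9% of sample, 66.7% self-report high maturity (7-10)
# SMB core operators: 57.1% of sample, 25.0% self-report high maturity (7-10)
```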
So what should you do with the segments? Use them to pick the right first move. Each context tends to fail in a different place, so a single “AI maturity” playbook is misleading.
(See segmentation table below).

This is intentionally high-level. The goal is not perfect governance. It’s to pick one workflow and remove the constraint that will stop scaling in your context. If scaling feels slow, start with the bottleneck in your row above. Your job is to build orchestration so delegation stays defensible. This is why progress with AI in finance depends on orchestration across controls, data, and accountability, not just integration of new tools: orchestration is what lets work move through approvals, exceptions, logging, and ownership without creating audit exposure.
For most CFOs, progress depends less on how fast teams experiment and more on whether controls, data, and processes can support delegated work at scale.
Your context often determines your first bottleneck. Scaling success depends on solving for orchestration first, before expanding experimentation. As before, enthusiasm can only go so far, and three major implications stand out:
First, tech at scale sets the readiness ceiling, but it’s not the norm. This segment shows the highest concentration of high-maturity organisations, but it still accounts for only a small share of the market. What works here reflects favourable conditions, not a blueprint most finance teams can copy.
Second, the core economy is the market’s centre of gravity. Core economy organisations, both at scale and in the mid-market, make up the majority of finance teams, yet show much lower maturity. This gap is structural, not motivational. These businesses typically deal with more varied workflows, fragmented systems, and a lower tolerance for uncontrolled change, making AI harder to scale, even with strong intent.
Third, regulation reshapes adoption rather than blocking it. Surprisingly, larger, more regulated organisations can reach similar AI maturity levels to fast-moving adopters, but through tighter constraints.
A separate structural signal emerges around organisational complexity.
Across the context segments, high-maturity segments also have higher shares of complex multi-entity structures (11+ entities):
- Tech at scale: 48.5% complex
- Services at scale: 45.6%
- Regulated at scale: 44.7%
- Core economy at scale: 43.9%
At first glance, that might sound counterintuitive: why would more complexity correlate with higher maturity?
Because complexity itself forces investment: think shared service models, standardised processes, centralised controls, and stronger incentives to automate.
However, complexity alone does not guarantee readiness. The two core economy segments are complex too, but still have lower maturity, suggesting that standardisation and governance capacity matter as much as complexity.
Konstantin adds:
"Complexity forces investment, but does not guarantee readiness. Larger, multi-entity organisations invest earlier in standardisation and controls, yet many still lag because data consistency and process alignment remain unresolved. Scale creates pressure to modernise, but governance capacity determines whether AI can actually hold."
What this means for CFOs, now
From here, the implications become practical for finance leaders. AI maturity in finance isn’t lagging because of intent or timing. It’s uneven because finance teams operate under different structural constraints.
That’s why generic advice to “move faster” or “run more pilots” so often fails. The real constraint isn’t ambition or tooling. It’s whether AI can be expanded in a way that stays defensible under audit, policy, and accountability within your organisation’s operating context.
Before asking how to scale AI, finance leaders should pause and answer a simpler set of questions to help them check their own constraints.
Ask yourself:
- Where is AI already active in our finance workflows today, even informally?
- Which of those uses would survive audit scrutiny if expanded beyond a small group?
- Where do approvals, exceptions, or data gaps still force manual workarounds?
- Which decisions could we safely delegate further tomorrow without increasing risk?
- Where would scaling AI expose us first: rules, data, or accountability?
How to move forward with confidence
This research points to a clearer way forward for finance leaders. Treat AI maturity as context-specific, not a single ladder. Recognise that “AI leadership” can take different forms, with different strengths and constraints. And focus on understanding where governability holds before expanding automation into broader workflows.
In an uneven market, progress comes from orchestration. When AI is deployed with clear rules, accountability, and a solid data foundation, speed creates value rather than risk.
At a practical level, CFOs do this by shifting the question from “how fast can we adopt AI?” to “where can we safely delegate work today?”
1) Diagnose before you scale
Map your current AI use cases against finance realities. Where is AI already influencing decisions, approvals, or classifications? Which of those would you be comfortable defending under audit tomorrow?
2) Identify the first constraint
Be explicit about what’s limiting scale right now. Is it missing rules, unclear accountability, fragmented data, or inconsistent processes? Only one of these is usually the true blocker.
3) Define minimum guardrails, not perfect governance
Set clear boundaries for where AI can operate: approved tasks, escalation thresholds, logging requirements, and ownership for outcomes (a minimal sketch of what this could look like follows after step 5). This is about making delegation defensible, not building a governance empire.
4) Tie AI expansion to data you trust
Prioritise use cases where master data, transaction history, and integrations are reliable enough to support consistent outcomes. AI scales where data is already usable.
5) Sequence expansion deliberately
Expand AI only as fast as controls and data allow. Each new workflow should add delegated work faster than it increases risk.
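As promised in step 3, here is a hypothetical sketch of what minimum guardrails could look like once written down for a single workflow; the workflow, field names, threshold, and owner are invented for illustration and are not a Payhawk configuration or a prescription from the research.

```python
from dataclasses import dataclass


@dataclass
class DelegationGuardrail:
    """Minimum rules for letting AI act inside one finance workflow."""
    workflow: str                    # the workflow being delegated
    approved_tasks: list[str]        # what the AI may do on its own
    escalation_threshold_eur: float  # at or above this amount, a human approves
    log_every_action: bool           # audit-trail requirement
    accountable_owner: str           # named owner for outcomes and exceptions


# Hypothetical example: delegating invoice coding within clear boundaries
invoice_coding = DelegationGuardrail(
    workflow="invoice coding",
    approved_tasks=["suggest GL code", "auto-code invoices under threshold"],
    escalation_threshold_eur=1000.0,
    log_every_action=True,
    accountable_owner="AP team lead",
)


def requires_human_approval(guardrail: DelegationGuardrail, amount_eur: float) -> bool:
    """Escalate anything at or above the threshold to a human approver."""
    return amount_eur >= guardrail.escalation_threshold_eur


print(requires_human_approval(invoice_coding, 250.0))   # False: AI proceeds, action logged
print(requires_human_approval(invoice_coding, 4500.0))  # True: route to a human approver
```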
This approach turns AI adoption from a race into an operating decision. It also sets up the next question, which our next report addresses directly: Even among organisations that call themselves leaders, which parts of the operating stack are actually missing when AI fails to scale?
In the next report in this series, we move inside the “AI leader” group and examine what actually determines whether AI can scale inside finance workflows. We look beyond labels and into the operating stack itself: execution, minimum rules, skills, budget, and data readiness.
If you want to see what governable AI looks like inside real finance workflows, explore how Payhawk applies AI with built-in controls, audit trails, and accountability by design. You’ll see concrete examples of how AI supports delegation without weakening approvals, policy enforcement, or audit readiness, drawn from spend, accounts payable, and financial control workflows.
Scale AI with finance-grade controls

Methodology:
Using affirmative statements developed in close collaboration with finance and business leaders, IResearch conducted interviews across eight countries to reflect genuine operational realities and challenges.
Coverage included:
- Regions: DACH, EU, Spain, France, Benelux, UK & Ireland, United States
- Seniority: C-suite, VPs, Directors, and senior individual contributors
- Functions: Finance, Accounting, Sales, HR, Procurement
- Industries: Services, Digital, Manufacturing, Healthcare, Education & Non-profit, B2C
- Company size: 50–100 FTE, 101–250 FTE, 251–500 FTE, 501–1,000 FTE, and 1,000+ FTE