The CFO’s AI readiness report: Part 1


AI adoption in finance is no longer “early.” It is uneven. Some organizations are already scaling AI into core workflows, while many others remain stuck in the middle — running pilots and testing tools that fail to translate into durable impact. This report shows that the difference is not ambition or talent. It’s whether your AI use cases can operate inside real finance workflows without breaking approvals, audit trails, accountability, or control.
AI maturity in finance: Why some teams scale, and others stall
The role of the CFO is reaching yet another inflection point. AI is no longer a future capability to explore at the edges of your organization. It’s increasingly embedded in day-to-day financial decision-making — from forecasting and approvals to controls, compliance, and reporting. So far, so straightforward. Except that, at the same time, economic pressure, regulatory scrutiny, and operational complexity continue to accelerate.
For finance leaders, this creates a familiar tension: AI is moving deeper into finance workflows just as scrutiny, risk exposure, and accountability increase.
This report examines AI maturity in finance, why adoption appears uneven across organizations, and what actually determines whether AI can scale safely inside CFO workflows.
Like “digital transformation” before it, AI promises speed, efficiency, and better insight. But finance teams can’t adopt technology based on enthusiasm or efficiency claims alone. Any system that touches spend, approvals, or financial data must be governable, auditable, and defensible under scrutiny — no different from any other financial process. This is where many AI initiatives slow down or stall, even in organizations that consider themselves AI-forward or technologically advanced.
Executive summary: Understanding uneven AI maturity — and why orchestration comes first
To understand how CFOs are adapting to this shifting landscape, Payhawk partnered with IResearch to interview 1,520 senior professionals globally, creating a series of four CFO reports. See the full methodology at the end of the report.
For analysis, we group company size into 50–250 employees and 251+ (“at scale”).
The research quickly exposed a framing that is increasingly misleading: the idea that AI implementation follows a simple path of “pilot, rollout, scale.” In practice, that narrative is incomplete. In this report, orchestration refers to the controls that allow AI to perform real work without breaking approvals, audit trails, or accountability.
CFOs are not asking whether AI works in theory or how quickly it can be rolled out. They are asking whether it can be deployed inside real workflows without increasing risk, weakening controls, or creating audit exposure.
The purpose of this study is to move the conversation away from generic AI maturity discussions framed as “early vs. late adoption” and toward a more practical question for finance leaders: What actually determines whether AI can scale safely inside a finance organization — and how can you apply it?
How we measure maturity in this study: Respondents rated their organization’s AI maturity on a 1–10 scale. This is a self-reported measure and should be interpreted as perceived maturity rather than an independently audited capability score. However, in a market where decision-makers allocate budgets based on perception, perceived maturity remains decision-relevant.
Konstantin Dzhengozov, CFO at Payhawk:
“Most AI conversations still focus on what the technology can do. In finance, the harder question is what you’re prepared to delegate — and under which rules. If you can’t explain, trace, and defend an AI-driven decision, it won’t scale, no matter how advanced the tooling looks.”

These new findings give you a practical way to see where that friction originates — before it shows up as stalled rollouts, rising exception rates, or audit pushback. Specifically, this report will show you:
- Where AI maturity actually concentrates, rather than where it’s assumed to sit
- Why self-identified “AI leaders” are not a uniform group
- Why scaling AI in finance increasingly becomes an operating model and orchestration challenge — not just a technology one
By the end of this report, CFOs will be able to clearly assess where their organization truly stands, understand what is structurally holding AI back from delivering real ROI, and why advice to simply “move faster” or “run more pilots” rarely works in finance.
What AI maturity in finance really looks like in 2025/26
The market is not “early.” It’s uneven.
In 2025, much of the AI conversation in finance framed adoption as “early,” positioning implementation as the primary hurdle — with a simple progression: pilots first, rollout next, automation at scale last. Along the way, teams were expected to manage trade-offs between speed and visibility.
Our data shows something different. AI maturity in finance is no longer advancing as a single wave — it’s fragmenting into distinct pockets. Treating finance AI as “early” now misdiagnoses the problem and pushes teams toward generic next steps that don’t reflect their actual constraints.
Nearly one-third of respondents no longer view AI as “early” in their organizations. They rate themselves as highly mature and describe their companies as leaders. However, while we use the term “leaders” consistently throughout this report, it is descriptive rather than an endorsement of readiness. Later analysis shows that these same “leaders” split into very different operating realities when tested against what they can safely delegate without expanding risk faster than controls.
For reporting purposes, we group maturity scores into three bands (a minimal classification sketch follows below):
- Low maturity: 1–3
- Mid maturity: 4–6
- High maturity: 7–10
These bands are not a step-by-step maturity ladder. They describe where organizations cluster today — not the order in which they will necessarily progress.
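To make the banding concrete, here is a minimal sketch of the grouping logic in Python. It is illustrative only; the function name and the example scores are ours, not part of the study instrument.

```python
def maturity_band(score: int) -> str:
    """Map a self-reported 1-10 AI maturity score to a reporting band."""
    if not 1 <= score <= 10:
        raise ValueError("maturity is self-reported on a 1-10 scale")
    if score <= 3:
        return "Low"   # 1-3: little structured AI activity yet
    if score <= 6:
        return "Mid"   # 4-6: active pilots, not yet a dependable capability
    return "High"      # 7-10: the self-identified "leaders"

# Hypothetical scores for illustration - not survey data
print([maturity_band(s) for s in (2, 5, 8)])  # ['Low', 'Mid', 'High']
```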

If you’re in the 4–6 “middle,” your next step isn’t more pilots. It’s minimum viable rules plus usable data for the workflows you want to delegate.
Figure 1 shows the distribution of AI maturity scores across the full sample (n=1,520). Three structural patterns stand out.
The center of gravity sits at mid-maturity. Most organizations cluster in the 4–6 range. These teams are no longer experimenting from scratch, but they’re not yet operating AI as a core, dependable capability.
For CFOs, this middle represents the highest execution risk: enough activity to raise expectations, but not enough structure to support scale.
Self-identified “leaders” are numerous, not exceptional. Nearly one-third of respondents rate their organization 7–10 on AI maturity. That makes the “leader” label common enough to require closer examination — and broad enough that it can’t be treated as a single operating reality.
The market is moving unevenly, not sequentially. The distribution doesn’t show a smooth upward progression. Instead, it reflects simultaneous states: a small group scaling, a large middle struggling to convert activity into operational impact, and a trailing group that remains early. This spread — more than any single average — defines the market today.
What Figure 1 does not show is whether that maturity is governable. That’s why scale in finance depends less on ambition and more on orchestration across rules, data, and accountability. AI adoption only holds if it can withstand policy requirements, internal controls, audit scrutiny, and clear ownership.
The uneven distribution shown in Figure 1 is therefore not incidental. It predicts where AI initiatives will translate into durable workflow change — and where they will stall, regardless of early enthusiasm or scope.
For finance teams, the risk lies in misdiagnosing the problem. When AI initiatives stall, the failure point is rarely the use case itself. It’s the moment when automated decisions intersect with policy, audit, or exception-handling requirements that weren’t designed for delegation.
When maturity is reduced to a single label, these structural differences are obscured. That’s why generic advice aimed at “leaders” often fails to explain why AI scales in some finance teams and stalls in others.
Konstantin explains:
“In finance, scaling AI is less an integration problem and more an orchestration problem. Integration connects systems and data, whereas orchestration governs how work moves through approvals, exceptions, logs, and ownership. CFOs need policy-driven control, traceability, and structured exception handling across workflows.”
This leads to the next question: If AI maturity is uneven, where does it concentrate structurally — and what conditions shape those outcomes?

How industry, size, and complexity shape AI adoption in finance
The data also shows that uneven maturity is not random — it follows structural lines.
A second pattern emerging from the research is that AI readiness in finance clusters strongly by context. Two organizations can be equally motivated yet show very different maturity levels because they operate under different constraints: data environments, process standardization, compliance exposure, scale effects, and the cost of governing change.
Before analyzing how AI “leaders” behave, we first map where maturity concentrates across the market.
Context dimensions used in the map:
Industry group:
- Tech: Software & Internet; IT & Electronics
- Services: Business Services; Media & Entertainment
- Regulated: Financial Services; Healthcare & Pharma; Energy & Utilities
- Core economy: All other industries, including retail, manufacturing, wholesale, logistics, travel, education, and consumer services
Company size:
- 50–250 employees
- 251+ employees
Throughout the report, “at scale” refers to the 251+ group. It marks the point at which coordination, controls, systems, and change management become primary constraints.
From there, we define six context segments, referenced consistently by name and definition (a code sketch of this mapping follows after the definitions below):
- Tech at scale (Tech, 251+)
- Services at scale (Services, 251+)
- Fast adopters, thin controls (Tech/Services, 50–250)
- Regulated at scale (Regulated, 251+)
- Core economy at scale (Other industries, 251+)
- SMB core operators (Regulated/Core economy, 50–250)
Core operators are organizations where finance is measured primarily on operational continuity and audit defensibility, with low tolerance for experimentation inside core workflows.
Core economy refers to industries where value is created primarily through operating and coordinating real-world systems at scale — such as manufacturing, retail, logistics, energy, and healthcare delivery — rather than through software products or professional services.
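To show how the two context dimensions combine, here is a minimal sketch of the segmentation in Python. The industry groups and segment names come from the report; the function names and inputs are hypothetical conveniences, not the study’s actual coding scheme.

```python
# Industry groups as defined in the report
TECH = {"Software & Internet", "IT & Electronics"}
SERVICES = {"Business Services", "Media & Entertainment"}
REGULATED = {"Financial Services", "Healthcare & Pharma", "Energy & Utilities"}

def industry_group(industry: str) -> str:
    if industry in TECH:
        return "Tech"
    if industry in SERVICES:
        return "Services"
    if industry in REGULATED:
        return "Regulated"
    return "Core economy"  # retail, manufacturing, logistics, and so on

def context_segment(industry: str, employees: int) -> str:
    group = industry_group(industry)
    if employees >= 251:               # "at scale" = 251+ employees
        return f"{group} at scale"
    if group in ("Tech", "Services"):  # 50-250 employees
        return "Fast adopters, thin controls"
    return "SMB core operators"

print(context_segment("Software & Internet", 400))  # Tech at scale
print(context_segment("Financial Services", 120))   # SMB core operators
```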
Figure 2. Context segments show where maturity concentrates (n=1,520).
Figure 2 plots each context segment by its share of the sample and the proportion of organizations that self-report high AI maturity (7–10).

High-maturity concentration ranges from 73.3% in Tech at scale (6.6% of the sample) to 13.5% in SMB core operators (29.8% of the sample), illustrating how sharply readiness diverges by context.
So what should you do with these segments? Use them to determine the right first move. Each context tends to stall at a different constraint, which means a single “AI maturity” playbook is misleading.
(See segmentation table below.)

This is intentionally high-level. The goal isn’t perfect governance. It’s to choose one workflow and remove the constraint that would prevent it from scaling in your context. If scaling feels slow, start with the bottleneck identified in your segment above. Your job is to build orchestration so delegation remains defensible.
That’s why AI progress in finance depends on orchestration across controls, data, and accountability — not just the integration of new tools. Work must move through approvals, exceptions, logging, and ownership without creating audit exposure.
For most CFOs, progress depends less on how quickly teams experiment and more on whether controls, data, and processes can support delegated work at scale.
Your context often determines your first bottleneck. Scaling success depends on solving orchestration first — before expanding experimentation. Enthusiasm alone doesn’t carry transformation. Three structural implications stand out:
First, Tech at scale sets the readiness ceiling — but it’s not the norm. This segment shows the highest concentration of high-maturity organizations, yet represents only a small share of the market. What works here reflects favorable conditions, not a blueprint most finance teams can directly replicate.
Second, the core economy is the market’s center of gravity. Core economy organizations — both at scale and mid-market — represent the majority of finance teams, yet report significantly lower maturity. This gap is structural, not motivational. These businesses often manage more varied workflows, fragmented systems, and have lower tolerance for uncontrolled change, making AI harder to scale even with strong intent.
Third, regulation reshapes adoption rather than blocking it. Larger, more regulated organizations can achieve maturity levels comparable to fast adopters — but through tighter constraints and stronger governance frameworks.
A separate structural signal emerges around organizational complexity.
Across segments, high-maturity contexts also show higher shares of complex, multi-entity structures (11+ entities):
- Tech at scale: 48.5% complex
- Services at scale: 45.6%
- Regulated at scale: 44.7%
- Core economy at scale: 43.9%
At first glance, this may seem counterintuitive: why would greater complexity correlate with higher maturity?
Because complexity forces investment — in shared service models, standardized processes, centralized controls, and stronger incentives to automate.
However, complexity alone doesn’t guarantee readiness. The two core economy segments are also complex, yet report lower maturity. This suggests that standardization and governance capacity matter as much as structural complexity itself.
Konstantin adds:
“Complexity forces investment, but does not guarantee readiness. Large, multi-entity organizations tend to invest earlier in standardization and controls. Yet many still lag because data consistency and process alignment remain unresolved. Scale creates pressure to modernize — but governance capacity determines whether AI can truly hold.”
What this means for CFOs, now
The implications are practical. AI maturity in finance isn’t lagging because of weak intent or timing. It’s uneven because finance teams operate under different structural constraints.
That’s why generic advice to “move faster” or “run more pilots” so often fails. The true constraint isn’t ambition or tooling. It’s whether AI can expand in a way that remains defensible under audit, policy, and accountability within your operating context.
Before asking how to scale AI, finance leaders should pause and ask themselves a simpler set of questions:
- Where is AI already active in our finance workflows today — even informally?
- Which of those use cases would withstand audit scrutiny if expanded beyond a small group?
- Where do approvals, exceptions, or data gaps still require manual workarounds?
- Which decisions could we safely delegate further tomorrow without increasing risk?
- If we scaled AI today, where would exposure surface first — rules, data, or accountability?
How to move forward with confidence
This research points to a clearer path forward. Treat AI maturity as context-specific, not a single ladder. Recognize that “AI leadership” takes different forms — each with distinct strengths and constraints. Focus first on where governability holds before expanding automation across broader workflows.
In an uneven market, progress comes from orchestration. When AI operates within clear rules, defined accountability, and a reliable data foundation, speed generates value — not risk.
At a practical level, CFOs can shift the question from “How fast can we adopt AI?” to “Where can we safely delegate work today?”
1) Diagnose before you scale
Map current AI use cases against finance realities. Where is AI influencing decisions, approvals, or classifications? Which of those would you confidently defend under audit tomorrow?
2) Identify the first constraint
Be explicit about what is limiting scale right now. Is it unclear rules, fragmented data, inconsistent processes, or undefined accountability? Usually, one factor is the primary blocker.
3) Define minimum guardrails — not perfect governance
Establish clear boundaries: approved tasks, escalation thresholds, logging standards, and ownership of outcomes. The goal is defensible delegation — not an overly complex governance structure. (A minimal guardrail sketch follows after step 5.)
4) Tie AI expansion to trusted data
Prioritize use cases where master data, transaction history, and integrations are strong enough to produce consistent outcomes. AI scales where data is already reliable.
5) Sequence expansion deliberately
Expand AI only as fast as controls and data allow. Each additional workflow should increase delegated work faster than it increases risk.
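To ground step 3, here is a minimal sketch of what “minimum guardrails” for one delegated workflow might look like as a declarative config. Every field name, threshold, and value below is a hypothetical illustration, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class WorkflowGuardrails:
    workflow: str                    # the single workflow being delegated
    approved_tasks: list[str]        # what the AI may do without review
    escalation_threshold_eur: float  # above this amount, a human approves
    log_every_decision: bool         # logging standard for the audit trail
    outcome_owner: str               # named owner accountable for results

# Hypothetical example for an accounts payable workflow
invoice_coding = WorkflowGuardrails(
    workflow="AP invoice coding",
    approved_tasks=["suggest GL code", "flag duplicates"],
    escalation_threshold_eur=5_000.0,
    log_every_decision=True,
    outcome_owner="AP Team Lead",
)
```

The value of writing guardrails down this explicitly is that each field maps to a question an auditor will eventually ask: what was the AI allowed to do, when did a human step in, where is the log, and who owned the outcome.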
This reframes AI adoption from a race into an operating decision.
It also sets up the question for the next report in this series: even among organizations that call themselves leaders, which parts of the operating stack are actually missing when AI fails to scale?
There, we go inside the “AI leader” group and examine what truly determines whether AI can scale within finance workflows — moving beyond labels to analyze execution, minimum rules, skills, budget allocation, and data readiness.
If you want to see what governable AI looks like inside real finance workflows, explore how Payhawk applies AI with built-in controls, audit trails, and accountability by design:
https://payhawk.com/platform/ai-agents
You’ll find concrete examples of how AI supports delegation without weakening approvals, policy enforcement, or audit readiness — drawn from real-world applications across spend management, accounts payable, and financial control workflows.

Methodology:
Using affirmative statements developed in close collaboration with finance and business leaders, IResearch conducted interviews across eight countries to reflect real operational environments and challenges.
Coverage included:
- Regions: DACH, EU, Spain, France, Benelux, UK & Ireland, United States
- Seniority: C-suite, VPs, Directors, and senior individual contributors
- Functions: Finance, Accounting, Sales, HR, Procurement
- Industries: Services, Digital, Manufacturing, Healthcare, Education & Non-profit, B2C
- Company size: 50–100 FTE, 101–250 FTE, 251–500 FTE, 501–1,000 FTE, and 1,000+ FTE