
How CFOs are rethinking AI: Key insights from The Future CFO event in London



AI is no longer just a new topic in finance. Today, hard questions matter: What can CFOs trust AI to do? Where does automation create real ROI? How do you keep governance and transparency while moving fast? These were the questions tackled at The Future CFO, London, hosted by Payhawk, Embat, and Founders Forum Group. What followed were honest, future-focused conversations with some of the UK’s most progressive CFOs, operators, and technologists. Here are some top insights from the event — and what they mean for you.
Across in-depth fireside chats and panels, the message was clear: CFOs know AI belongs in finance. Now, they must focus on using it responsibly, strategically, and profitably.
Here are the top insights and key takeaways, broken down by the day’s sessions.
1. From tools to outcomes: An AI insider’s forecast
The day opened with a fireside chat between Stephen Mulholland, CRO at Payhawk, and Stathis Onasoglou, Field CTO – EMEA Strategic FSI at Google Cloud.
Stathis started by explaining how the conversation with leading finance teams has changed inside Google. It’s no longer about picking one more AI tool or automating a single task in isolation. It’s about optimising entire outcomes.
He explained:
At Google, we’ve moved from single-point solutions to thinking about how we optimise for time to revenue.
That shift is why Google is investing heavily in agentic AI, which refers to systems that not only answer questions, but also understand context, reason through problems, take actions, and return work to humans better than they found it.
Stathis shared:
- 53% of finance leaders interviewed say they already use some form of agentic AI
- Over half report a return on investment
- Those seeing the biggest gains have three things in common: clean data, clear governance, and agents that sit inside real workflows rather than off to the side
Behind the scenes, these agents are powered by large language models. Stathis explained that here, agentic AI uses techniques called reasoning loops — iterative processes where the AI reviews and critiques its own, as well as other agents', outputs to reduce errors and make decisions more robust.
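As a rough illustration only (the `call_llm` function below is a toy, deterministic stand-in, not any real LLM API), the shape of such a self-critique loop can be sketched in a few lines:

```python
# Sketch of a reasoning loop: the model drafts an answer, a critic pass
# reviews it, and the draft is revised until the critic is satisfied.
# `call_llm` is a hypothetical stub so the loop can run without an API key.

def call_llm(prompt: str) -> str:
    if prompt.startswith("Critique:"):
        # Toy critic: approve drafts that have already been revised once
        return "OK" if "revised" in prompt else "Too vague - add specifics."
    # Toy drafter: echo the task, marked as a revision
    return "revised: " + prompt.splitlines()[-1]

def reasoning_loop(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(task)
    for _ in range(max_rounds):
        critique = call_llm("Critique:\n" + draft)
        if critique == "OK":  # critic satisfied: stop early
            return draft
        # Feed the critique back so the next draft addresses it
        draft = call_llm(task + "\nAddress this critique: " + critique)
    return draft
```

Real agentic systems layer tool calls, memory, and multiple critic agents on top of this basic draft-critique-revise cycle.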
The practical question, of course, is whether finance teams should build any of this themselves. On that, Stathis was blunt. Sure, some companies could build their own payments agent, for example, or create their own internal AI workflows. However, in his view, this effort would not be worthwhile for most companies.
If you want to do something quickly and cost-efficiently, buy in. Many businesses could theoretically build a payments agent — but should they? The answer is almost always no. The opportunity cost is too high. Internal teams should focus on work that differentiates the business, not on recreating good infrastructure that already exists.
As the conversation moved into use cases, Stathis described where agentic AI is already making a difference: Contact centres where agents pre-summarise data for humans, systems that pull context across multiple tools, and early experiments in autonomous purchasing and payments.
Some of it still sounds futuristic. But in his view, many of these scenarios (like an agent that can choose and buy items on your behalf within policy) will become the norm across e-commerce and payments much faster than expected.
A CFO in the audience raised a common concern: “If payments and approvals become more automated, does that remove useful friction and reduce visibility?”
But Stathis took the opposite view:
You can use good AI to fight the bad AI. If there’s an effort to commit fraud, your AI can spot it — and it’s trained on far larger datasets than humans ever could be.
In other words: The right kind of automation can increase visibility, not reduce it.
Key takeaways from the fireside chat
- Focus on outcomes, like time to revenue or close, not just adding features
- Maximise agentic AI with clean data, robust governance, and true workflow integration
- Investing in or ‘buying’ AI is usually faster and more cost-effective than building in-house. Learn more about Payhawk's AI Payments Agent.
- Good AI doesn’t remove control; it improves visibility and can strengthen fraud detection

2. The autonomy dividend: Lessons from early movers
The next session shifted from the future to the present. Antonio Berga, Co-CEO at Embat, moderated a panel with:
- Anton Globus, Group Director Finance & Innovation at Paysend
- Antony Berg, CFO at Speechmatics
- Jay Peir, CFO at Pigment
- Ross Latta, Co-Founder & CEO at MacroFin
This panel explored what’s working — and what isn’t — in finance functions that have moved beyond early experimentation.
Anton from Paysend set a practical tone right away. They’re not trying to push AI into every corner of the business. They use it where the stakes and complexity justify it. “We are very selective about using AI and we ensure we use it where it adds value, for example, in anti–money laundering.”
That theme of being deliberate ran through the entire panel.
Ross from MacroFin sees the pattern across multiple clients implementing NetSuite.
The automation dividend comes from tech maturity. First, integrate your systems, ensure good data governance — then introduce AI.
He also shared a datapoint that captured a lot of unease in the room. According to his discussions and recent FT reporting, around 75% of CFOs are seeing a return on investment from AI, but they’re not yet seeing a meaningful impact on revenue. In many organisations, teams are experimenting at the edges while core processes stay the same.
Jay from Pigment agreed that the real constraint isn’t enthusiasm, it’s data quality. As the saying goes: Bad data in = bad data out. “If things are spread across spreadsheets, it’s hard for AI to read and learn from them,” Jay said.
Pigment is AI-first as a platform, but even there, the basics matter. Where things get more interesting is when AI moves beyond efficiency, into decision support. Jay described a customer using AI to track inventory levels and receive suggestions on next actions, not just reports on what already happened.
Antony from Speechmatics brought the conversation back to discipline. He’s positive on AI, but careful. For him, that means measurable improvements in core processes, not just experiments for the sake of it. And always with the right guardrails in place.
When asked how they assess AI investments, Anton from Paysend was clear that everything has to roll up to strategy:
Whatever investments we look at, we justify them. What does success look like – volume growth, revenue growth, productivity savings?
If that question can’t be answered plainly, the initiative doesn’t move forward. Jay explained that for most of their customers, the real value shows up in two places: Saving time and enabling faster decisions. AI, he said, earns its place when it helps teams react quickly rather than wait for reports or manual analysis.
Ross brought the conversation back down to earth, reminding everyone that not every problem needs an AI-shaped solution. Many gains still come from stronger integrations, cleaner data, and well-designed end-to-end workflows. In other words, automation only works when the underlying process works. He noted that while Payhawk, for example, now offers AI-powered features, it has been offering a strong integration experience for much longer. “Payhawk have been offering a great integration for years. End-to-end workflows are just as important... You don’t need AI for everything.”
That line really landed with the audience, too. A lot of efficiency must still come from classic things: Clean ERP integrations, fewer systems, and well-designed processes – with AI that complements and works together with these processes.
The audience Q&A brought those themes together. One attendee asked: “How can we trust AI to start taking on harder, more critical work?”
Here, Jay’s view was simple:
Choose AI that isn’t a black box – show where the data came from.
Key takeaways:
- The teams getting results are selective about where they use AI
- Tech maturity comes before AI. Integrated systems and good data governance unlock better automation
- Early wins often show up as time saved and faster decisions, not immediate revenue growth
- CFOs should define success metrics upfront – volume, revenue, or productivity – and measure against them
- AI should augment well-designed end-to-end workflows, not replace process thinking.
3. Black box or glass box? Making AI explainable
The final panel of the morning tackled the question that sits behind almost every AI conversation in finance: How do we trust these systems enough to use them at scale?
Here, Guy Sear, Country Director at Payhawk, moderated a discussion with:
- Tatiana Okhotina, CFO at Token.io
- Liz Kistruck, CFO at Motorway
- Daniel Barnes, Director of Product & Customer Marketing at Gatekeeper
Daniel opened with the human truth many quietly share: AI is at its best when it gets stuck into the work no one enjoys, namely data.
It sifts through data, surfaces what matters, and clears the path for more meaningful decision-making. Liz framed AI and our attitudes more broadly, comparing it to electricity — a technology whose impact only becomes fully visible once people begin experimenting with it in different parts of the business.
When Guy pushed the panel on risk, the tone shifted. Tatiana argued that large language models are a different kind of technology from the narrow, rules-based AI finance teams already know, and that governance must evolve just as quickly. As she put it:
Previously, AI was usually built to solve a single, defined problem: Decision trees, probability models, and static rules. But LLMs are “a different species entirely” – broader, more general, and accessible to almost anyone in the organisation. And that’s where governance comes in.
Still on governance, Liz from Motorway noted that one of the biggest concerns should be the “‘secret’ things happening with AI” in your business. This resonated across the room. Her advice? Open up the process: Involve more stakeholders, share learnings broadly, and focus early efforts on areas like customer service, where the impact on productivity and the P&L is already clear.
When Guy asked how to measure and steer AI across functions, Tatiana explained that Token.io, like Motorway, involves multiple stakeholders (not just from finance). At Token.io, they created a cross-company steering committee that looks at AI adoption through a risk and value lens, using existing governance frameworks as a starting point. It’s not treated as a side project; it’s part of how they run the business.
Tatiana pointed out that many of the frameworks finance teams already use, from firms like Deloitte, are still useful. They simply need to be adapted to include new categories of risk, new mitigation steps, and new stakeholders.
Daniel brought the discussion back to practicality: If you want trust, build systems that explain themselves. At Gatekeeper, for example, every AI agent is expected to document its actions so humans can review and challenge them when needed. Companies should choose tools where the inner workings aren’t a black box and evaluate AI infrastructure with the same scrutiny applied to any other core system, including ISO standards, SOPs, and auditability.
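A minimal sketch of that idea (using a hypothetical `AuditedAgent` wrapper for illustration, not Gatekeeper’s actual product) is an agent that records every action together with its inputs and its rationale, so a human can review and challenge each step:

```python
import datetime
import json

class AuditedAgent:
    """Wraps agent actions so every step leaves a reviewable record."""

    def __init__(self):
        self.audit_log = []

    def act(self, action: str, inputs: dict, rationale: str) -> dict:
        entry = {
            # Timezone-aware timestamp for a defensible audit trail
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "action": action,
            "inputs": inputs,
            "rationale": rationale,  # the "why", so humans can challenge it
        }
        self.audit_log.append(entry)
        return entry

# Hypothetical usage: an AP agent flags a suspicious invoice and says why
agent = AuditedAgent()
agent.act(
    "flag_invoice",
    {"invoice_id": "INV-001", "amount": 4999},
    "Amount sits just below the approval threshold; matches a split-billing pattern.",
)
print(json.dumps(agent.audit_log, indent=2))
```

The design choice is simple: the rationale is captured at the moment of action, not reconstructed later, which is what makes the system a glass box rather than a black box.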
Liz added a reality many CFOs are already navigating: sometimes you must trade speed for insight. Full explainability might slow things down, but the visibility is worth it in finance contexts. Liz explained:
Sometimes you will have to trade performance for insights.
Tatiana agreed, stressing that finance leaders must always be able to back up their numbers. “Everything must be auditable.” That’s why she sees safer, high-value early use cases in areas like accounts payable and procurement — workflows where AI supports clear decisions without introducing unnecessary risk.
Guy closed by asking each panellist for one piece of advice on innovating with trust and transparency. The guidance was clear: Start small and narrow, for example, one agent to analyse vendor agreements, one to draft customer emails. Don’t try to do everything at once, and remember to report back on your metrics, like hours saved.
Liz suggested:
Start small – but start. Don’t ask AI to copy your old process. Ask it to find a better way.
The common thread: Don’t let fear block progress, but don’t let excitement override governance either.
Key takeaways:
- CFOs need explainable AI, ie systems they can defend to boards, auditors, and regulators
- Existing governance frameworks still apply; they just need updating for LLM-driven use cases
- Cross-functional steering groups help align risk, value, and experimentation
- Transparency matters: Choose AI that can show its workings and produce an audit trail
- Good early use cases live in AP and procurement, where risk is manageable and benefits are clear
What this all means for the future CFO
Taken together, the sessions at The Future CFO in London painted a consistent picture.
Finance leaders are done with AI for hype’s sake. They want:
- Clear outcomes
- Strong governance
- Practical use cases
- And tools their teams will actually use
The pattern that emerged across every speaker and panel was simple:
- Get the foundations right
Integrated systems, clean data, and clear processes make AI useful. Without them, even the best models will struggle.
- Be deliberate about where AI lives
Start with specific, high-value workflows, such as fraud, AML, forecasting, procurement, AP, and reporting (not with abstract ambitions)
- Keep humans in the loop
Whether it’s final trip approvals or data combing, AI should handle the stuff you hate (as Daniel put it), while finance teams stay responsible for judgment, oversight, and storytelling.
- Treat AI like core infrastructure, not a side project
Govern it, document it, measure it, and hold it to the same standards as any other critical system.
Want to see what explainable, workflow-ready AI actually looks like? Explore Payhawk’s AI Agents to see how finance teams automate real work — without losing control or governance.
Trish Toovey works across the UK and US markets to craft content at Payhawk. Covering anything from ad copy to video scripting, Trish leans on a super varied background in copy and content creation for the finance, fashion, and travel industries.